The 'Security Digest' Archives (TM)

Archive: About | Browse | Search | Contributions | Feedback
Site: Help | Index | Search | Contact | Notices | Changes

ARCHIVE: TCP-IP Distribution List - Archives (1985)
DOCUMENT: TCP-IP Distribution List for May 1985 (117 messages, 40004 bytes)
SOURCE: http://securitydigest.org/exec/display?f=tcp-ip/archive/1985/05.txt&t=text/plain
NOTICE: securitydigest.org recognises the rights of all third-party works.

START OF DOCUMENT

-----------[000000][next][prev][last][first]----------------------------------------------------
Date:      1 May 85 18:18:11 EDT
From:      Roy <MARANTZ@RUTGERS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   ARPAnet Ethernet gateway
Anyone have suggestions on hardware/software to interface/gateway an
Ethernet and the ARPAnet?  We are currently using a DEC-2060 with an
AN-20 and ECUs to do this gatewaying and would like some other hardware
to do it.  Is a Fuzzball the right way to go?  Anything else?  As you
might be able to tell, I don't really know much about what to ask for.
I'd like something that didn't cost more than around $10K.  We have
some random PDP-11s (Q-bus and Unibus), but don't want to get killed
by yearly maintenance.

Another point is, could the box talk to 2 1822 interfaces (like an
IMP would)?  We'd like to connect up our IBM-type mainframe, and the
Ethernet hardware we are waiting for is late (and getting later).  We
could get an IMP interface for it, but could we use it?

Thanks for any and all help.

Roy
-------
-----------[000001][next][prev][last][first]----------------------------------------------------
Date:      Wed, 1 May 85 22:16:39 EDT
From:      Doug Kingston <dpk@BRL.ARPA>
To:        Rich Wales <wales@UCLA-LOCUS.ARPA>
Cc:        TCP-IP@SRI-NIC.ARPA
Subject:   What's in a name? (HELO BRL)
Some of you may be wondering what is up at BRL.ARPA.  Recently
more and more of our hosts have been making SMTP connections
and identifying themselves as BRL.ARPA in the HELO command,
when in fact they are BRL-VGR.ARPA, BRL-TGR.ARPA, BRL-SEM.ARPA, ....

BRL has started making changes to support full domains in such a fashion
as to hide the existence of particular hosts from users and allow more
efficient mail handling by eliminating mail bounces. The goal is to have
a single logical mail host whose name in this case is BRL.ARPA.  All
machines accept mail addressed to BRL.ARPA and perform appropriate
redirection immediately.  Users need not know which host a user is on to
mail to him within the domain, only his global BRL mailid.  Since all
mail appears as user@brl.arpa, all machines can fully evaluate all the
@brl.arpa addresses in the letter immediately.  Each machine also
has a set of host aliases that it recognizes as well.  These are in general
HOST SPECIFIC.  These are the names used in the alias files to actually
route mail to a specific host.  But, these are only aliases, and the
system really and truly believes that its official name is BRL.ARPA.
This causes the mailer to convert local aliases to BRL.ARPA.  In fact,
the mailer thinks its name is ???.BRL.ARPA, but the ??? (host id) is
suppressed from generated strings (since subdomains aren't legal yet).

Now, a split has developed between the mail system's idea of the mail
identity for the virtual mail host and the actual SMTP address. Most
mail systems, including ours, simply ignore the contents of the HELO
statement since it is redundant and less reliable than our TCP
getpeername (address) system call.  But, some hosts seem to pay
attention to this and even check it (e.g. UCLA-LOCUS).  This has caused
us to start filling their logs with useless messages announcing the
mismatch.  We will eventually be identifying ourselves with a subdomain
name of the form VGR.BRL.ARPA (or VGR.BRL.MIL!), and in fact the mail
system knows its subdomain identifier already, but it will not generate
it unless it needs to uniquely identify itself.  So far the only cases
of this are error messages and SMTP hello commands.  Both are suppressed
until BRL.ARPA is a registered domain.  RFC822 Received: lines are taken
care of another way which still has the old semantics, but is easy to
change (command line argument).  While I can generate VGR.BRL.ARPA as a
valid name, I have no easy way to generate BRL-VGR.ARPA without kludging
up the SMTP code for the change-over period.
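
A minimal sketch of that getpeername() check (not BRL's actual mailer
code; the names and error handling here are invented for illustration):

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Log, but never trust, the HELO argument: the TCP layer already
   knows who really connected. */
void
check_helo(int conn, char *helo_arg)
{
	struct sockaddr_in peer;
	socklen_t len = sizeof(peer);

	if (getpeername(conn, (struct sockaddr *)&peer, &len) < 0)
		return;
	printf("HELO said %s; peer address is really %s\n",
	    helo_arg, inet_ntoa(peer.sin_addr));
}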

The questions are:
	Will this policy affect our ability to deliver mail?
	(ignoring the fact that it may cause some mailers to burp
	 into the log files)

	How much trouble does this policy cause people?

We are technically prepared to handle subdomains and to live within
one, and to support the necessary nameserver services locally.  We
will be moving with all due haste to become a subdomain in the near
future to legitimize this state of affairs.

Our goal is to provide the appearance of a single virtual host to our
users and as much as possible to the outside world.  The mail system
changes are only one aspect of this.  Our kernels and gateways are
prepared to support the "always up host".  This facility allows an
address to be designated a virtual host and all packets to this host are
forwarded to an available machine.  Services which are independent
of the host involved (like mail delivery) will see very high reliability,
since the gateway can automatically switch to backup machines, losing
only existing connections.

Comments are welcome.

				Cheers,
					-Doug-

					Douglas P. Kingston III
					Advanced Computer Systems Team
					Ballistics Research Lab
					Attn: AMXBR-VLD-V (A)
					APG, Md. 21005

					(301) 278-6651
-----------[000002][next][prev][last][first]----------------------------------------------------
Date:      Wed, 1 May 85 22:35:39 edt
From:      Alan Parker <parker@nrl-css>
To:        tcp-ip@nic
Subject:   domain servers & resolvers for Unix
RFC 921 states that two domain servers exist for Unix.   Where can
I find out more about these and other software changes that might
have already been done for Unix for domains?  Thanks.

-Alan
-----------[000003][next][prev][last][first]----------------------------------------------------
Date:      1 May 85 23:05:56 EDT
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   TCP hangs
I am trying to track down the cause of communications problems
between our Unix and TOPS-20 machines.  I would appreciate some
help from an expert in interpreting the protocols.

Case 1: FTP hang

From TOPS-20, we open an FTP connection to a Unix FTP server.
We retrieve a file.  Everything is fine.  We now try to retrieve
another file.  The connection hangs.  netstat shows that we
have the following problem:
  on the Unix (server) end, the original data connection is in
	TIME-WAIT state.  By RFC 765, the server is required
	to issue the close.  By RFC 793, the end that does the
	close is required to linger in TIME-WAIT for 2 MSL, which
	is apparently 4 minutes.
  when we try to transfer the second file, the exact same pair
	of sockets is used.  This is specified by RFC 765, which
	specified default socket numbers that are normally
	supposed to be used.  The defaults are fixed for a
	given session.  Unfortunately, one cannot open this pair
	again until the TIME-WAIT is over.
This problem is avoided in a Unix to Unix connection because the
user process requests a new local port to be used.  This appears
to be nonstandard according to RFC 765, but it does fix the
problem.  It is not at all clear to me why the problem does not
occur on TOPS-20 to TOPS-20.  A brief glance through the code
does not show the 2 MSL TIME-WAIT in TOPS-20, so maybe that is
it.  Anyway, if my ideas are right, the TCP and FTP protocols
taken together imply the hang that we are seeing.  Is this right?
If so, is there some recommended solution?  Obviously I can fix
our TOPS-20 FTP to change local port numbers as Unix does, if that
is the right thing to do.
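
For illustration, a minimal sketch of that fix, assuming a 4.2BSD-style
socket interface (the function and variable names are invented): bind
the data socket to port 0 so the kernel assigns a fresh local port for
each transfer, then advertise it with PORT.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Open each data socket on a kernel-chosen local port, so the
   previous connection can sit out its TIME-WAIT undisturbed. */
int
new_data_socket(unsigned long myaddr, char *cmd, int cmdlen)
{
	struct sockaddr_in sin;
	socklen_t len = sizeof(sin);
	int s = socket(AF_INET, SOCK_STREAM, 0);
	unsigned long a;
	unsigned short p;

	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = myaddr;
	sin.sin_port = 0;		/* 0 = let the kernel pick a port */
	bind(s, (struct sockaddr *)&sin, sizeof(sin));
	getsockname(s, (struct sockaddr *)&sin, &len);

	a = ntohl(sin.sin_addr.s_addr);
	p = ntohs(sin.sin_port);
	snprintf(cmd, cmdlen, "PORT %lu,%lu,%lu,%lu,%u,%u\r\n",
	    (a >> 24) & 0xff, (a >> 16) & 0xff, (a >> 8) & 0xff,
	    a & 0xff, (unsigned)(p >> 8), (unsigned)(p & 0xff));
	return s;		/* cmd goes out on the control connection */
}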

Case 2: LPD hang

Unix has no local line printers.  It routes print requests to TOPS-20.
If TOPS-20 crashes when a connection is open, we seem to have problems.
I do not have as good a handle on this problem as on the previous one.
I'm not sure what is going on.  However I have examined RFC 793 to see
whether there is some obvious way that this is supposed to be handled.
The question is, what is supposed to happen when TOPS-20 reboots and
starts listening on the lpd port.  Presumably it will see normal data
packets, since Unix still thinks the connection is open and
synchronized, and will continue retransmitting.  As I read the protocol,
TOPS-20 should issue a RST in this case.  A quick reading of the TOPS-20
TCP code suggests that it simply drops the packet but does not issue a
reset.  The only case I can find where TOPS-20 sends a reset is when it
receives a packet for a non-existent port.  Am I right  that (if true)
this is a bug?
-------
-----------[000004][next][prev][last][first]----------------------------------------------------
Date:      Thu 2 May 85 06:26:35-MDT
From:      Jay Lepreau <Lepreau@UTAH-20.ARPA>
To:        CERF@USC-ISI.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Excelan
We have one on an Iris and it seems pretty fast relative to the machine.
However, it does not do ICMP (at least not TSTAMP's).
-------
-----------[000005][next][prev][last][first]----------------------------------------------------
Date:      Thu 2 May 85 10:07:52-PDT
From:      HOSTMASTER@SRI-NIC
To:        parker@NRL-CSS.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, STAHL@SRI-NIC.ARPA
Subject:   Re: domain servers & resolvers for Unix
Alan,

Contact Ralph Campbell (ralph@berkeley) for info on the BIND domain
server for UNIX systems.  Another server for UNIX, called DRUID, was
developed by Peter Karp (karp@sumex-aim.arpa).  You may want to
contact Peter for more information about that server.  I don't know
its status.

- Mary
-------
-----------[000006][next][prev][last][first]----------------------------------------------------
Date:      Thu, 2 May 85 9:42:37 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        Roy <MARANTZ@RUTGERS.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  ARPAnet Ethernet gateway
First, I think Dave Mills would agree that FUZZBALLs are not the way
to go.  However, the hardware isn't bad.  For just a few hosts an LSI-11
is more than adequate.  BRL uses 2 PDP-11/34's and 1 11/23 for just the
reason you mentioned: we happen to have them lying around.  We'll be
moving to a 68000 Q-bus processor for a more critical application.  Why
Q-bus?  Because Q-bus is easily mapped into UNIBUS and there exists a
UNIBUS interface for nearly every piece of network equipment in the world,
and at least all the ones at BRL (some of our network hardware was lying
around, too).

Software-wise, there are three or so "experimental" versions of the code,
notably BRL's, MIT's, and CMU's.  There is also Noel Chiappa's commercial
version, which Proteon sells.  BBN also makes gateways, but they didn't
seem too interested in either implementing non-mainstream hardware or
providing a development environment so we could do it ourselves.  This is
why BRL wrote their own gateway.

As for your question about the 1822 kludge: it works, and both BRL
and (they confessed) BBN have tried it.  You can plug two Local or Distant
Host interfaces together (that is, you can plug two local hosts together
or two distant hosts together; if you plug a local host into a distant host,
you're likely to blow out the local host interface) with suitable connector
farbling.  Only slight modifications to the software in the gateway are
required to make it look enough like an IMP to fool the host into working.

-Ron "Kludges for a Better Tomorrow" Natalie
-----------[000007][next][prev][last][first]----------------------------------------------------
Date:      2 May 1985 13:23:48 PDT
From:      POSTEL@USC-ISIF.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   re: FTP Problems & TCP Hangs

Charles Hedrick:

Please read about TCP and about FTP in the "Official ARPA-Internet Protocols"
memo = RFC-944.

--jon.
-------
-----------[000008][next][prev][last][first]----------------------------------------------------
Date:      2 May 85 10:47:23 EDT
From:      Roy <MARANTZ@RUTGERS.ARPA>
To:        ron@BRL.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  ARPAnet Ethernet gateway
So if I understand you right, you use the PDP-11 because you have it and
the interfaces.  Now, we have an 11/23+ (I think it is a +) and an 11/34
with strange peripherals and one (maybe 2 soon) Interlan 10Mb Ethernet
cards, but nothing else.  Do you think it would pay to use this stuff or
buy something new?  I don't want to worry (too much) about performance
or maintenance (hardware or software).  Were you planning to use a
Q-bus to Unibus adapter?  If so, anyone's in particular?  I guess I'd like
to buy something if it would 1) work 2) be supported or supportable.
Do you have any feel for the merits of the different implementations?
We have a Pyramid 90x (4.2 and System V UNIX), but no other (accessible)
Unix machine.  Could that support this code?  My personal bias is to
find a 68000 box to do what I'd like, but I don't know of an 1822 interface
for a Multibus (I guess that is why the Qbus-Unibus idea of yours), and
I would like to avoid developing (porting) a lot of code for the box.
Anyway, do you have a name of someone at MIT I could talk to about their
gateway?  Thanks a lot for your help.

Roy
-------
-----------[000009][next][prev][last][first]----------------------------------------------------
Date:      Thu, 2 May 85 12:30:54 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        Roy <MARANTZ@RUTGERS.ARPA>
Cc:        ron@BRL.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  ARPAnet Ethernet gateway
PDP-11's are great processors.  We all used them for years because they
were so nice.  Now, the limited address space has kind of banished them
to mundane tasks, but the 256Kb of memory on the full up 18 bit version
of the PDP-11's is entirely adequate for an IP gateway.  If you count up
the dedicated gateways (after dismissing all the 4.2 sites that are playing
gateway), I think you'll find that the rest are some sort of PDP-11.  Even
the central Milnet-Arpanet gateways are LSI's.

With regard to Q-Bus <-> Unibus adapters, yes we use one.  The gateway on
26.0.0.29 uses it (and it passes traffic for the host BRL.ARPA).  I don't
recall who makes it, I think it's either a DEC or an ABLE.  Off the top of
my head I'd get another Able if I needed one, but I don't know that much
about them.  Since I don't use the extended (22bit) Q-bus, I don't need to
worry (yet) about the mapping.  The 11/23 gateway that I have has memory
and the console card on the Q-bus and has everything else: floppy disks,
Hyperchannel, two LH/DH-11's, and Pronet on the converted UNIBUS.

The largest number of gateways are people's 4.2 BSD hosts with two interfaces.
While people will admit that this is not ideal, it is easy and cheap since
lots of people have 4.2 machines.  You'll need the EGP implementation that
Kirton did at ISI (no I don't know how to get it), which was done for VAX,
so I don't know how it will work on the Pyramid.  Second, you should get
a fairly recent update of 4.2.  There were numerous bugs in the IP code
that have been reported and fixed.

-Ron
-----------[000010][next][prev][last][first]----------------------------------------------------
Date:      2 May 85 13:03 EDT
From:      Rudy.Nedved@CMU-CS-A.ARPA
To:        Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TCP hangs
Charles,

Unless there is something subtle going on, Vince Fuller's FTP support
should avoid this problem since it sends down the PORT command and
uses a new set of sockets. However, it could be that the 4.2BSD Unix
support stuff is ignoring the PORT command or rejecting it.

In general, you will find that the Unix world hacked FTP to work between
Unix machines and that the old TOPS-20 FTP program basically cheated
in the same way. It took a good deal of effort on various people's
parts at CMU to get an FTP/FTPSRV system working on each machine that
did not "cheat" and make problems for non-homogeneous machine communication.

If you have VAF's FTP stuff then I suspect you are fighting with 4.2BSD
which has lots of irritating bugs/features.

-Rudy
-----------[000011][next][prev][last][first]----------------------------------------------------
Date:      Thu 2 May 85 14:16:12-EDT
From:      "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
To:        ron@BRL.ARPA, MARANTZ@RUTGERS.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, JNC@MIT-XX.ARPA
Subject:   Re:  ARPAnet Ethernet gateway
	The 'MIT gateway' to which Ron alluded is the CGW which I
mentioned in a message to this mailing list some time back. I
reproduce it here.

    Date: Mon 22 Apr 85 16:27:38-EST
    From: "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
    Subject: Re: IP gateways
    To: chris@COLUMBIA.ARPA, tcp-ip@SRI-NIC.ARPA
    cc: JNC@MIT-XX.ARPA

	    MIT does have a multi-protocol packet switch. It is called the
    C Gateway [...]

	    The code was originally done for the PDP11. It has been ported to
    the 68K at MIT (this port should not be confused with the Portable C
    Gateway, which is a commercial derivative I am working on); one of these
    ports was to the Bridge box. However, neither port was ever completed and
    put in service. The only service code at MIT is the PDP11 version.  The
    code (both PDP11 and both 68K versions) is publicly available, however;
    people wanting it should contact Dave Bridgham, dab@mit-borax for more
    details.

		    Noel
    -------

	As an additional detail, the PDP11 version (together with 4.2
workbench for doing PDP11 stand alone software) is available from MIT.
Please contact Shawn Routhier (sar@mit-borax) for it.


	One thing that didn't get mentioned is the 'Port Expander';
this is a box done at SRI which takes one IMP port in and passes N
out.  Since it was specifically designed to look like an IMP it's
probably a better bet if you want to try and fake some hosts.
	Depending on how much the host software depends on you looking
like an IMP (E.g. Does it expect RFNM's? What about the fact that the
host number is in the NOP's the IMP sends?) simply taking a gateway
and plugging in another IMP interface to front a host might not work.
Providing RFNM's, for instance, would be pretty tricky.  (I know, because
many years ago in the NCP days I tried to get a port expander working
at MIT and it ain't easy.) It can be a real pain to get working, and
to debug it if it doesn't work you will almost certainly need a wizard
for the IBM system to tell you what it thinks is wrong.
	If you are interested in this you should contact someone at
SRI. The person who did the original version was Jim Mathis,
MATHIS@SRI-KL. The second (complete rewrite) was done by Holly
Nelson, but she is no longer at SRI. Probably Jim Mathis is your
best bet. In general though, I would suggest that you try and avoid
this path if you can.

	Noel
-------
-----------[000012][next][prev][last][first]----------------------------------------------------
Date:      2 May 85 14:31:18 EDT
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
To:        Rudy.Nedved@CMU-CS-A.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TCP hangs
We are using the Stanford FTP.  It has an option to handle sockets in
the Unix method.  I have just turned that on by default.  That does fix
the problem.  What I was trying to determine is whether this is due to
an oddity in Unix or whether this is an actual problem in the definition
of the protocols.  I believe it is a problem in the protocol, but wanted
to see whether others agree.
-------
-----------[000013][next][prev][last][first]----------------------------------------------------
Date:      Thu, 2 May 85 16:55 EDT
From:      Mike StJohns <StJohns@MIT-MULTICS.ARPA>
To:        Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TCP hangs
1) FTP hangs

The TOPS-20 User-FTP program should use the PORT command to specify a
new data port each time just on general principles.  This really should
be part of the standard rather than that bit about defaults.  Most
implementations I am familiar with work this way.  You may be right that
TOPS-20 is violating the TIME-WAIT restrictions.  In fact, many violate
this restriction.  Multics does (or did at one time).

2) RST after a crash.  If the TOPS-20 machine receives a packet that
belongs to a nonexistent connection, it must send a RST packet back,
unless the packet belonging to the nonexistent connection was a RST
packet itself.  (Are you sure the code is for a nonexistent port,
rather than a nonexistent connection?)
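
A sketch of that RFC 793 rule (the segment structure here is invented
for illustration; this is not TOPS-20 or Multics code):

/* Flags and numbers of a TCP segment, reduced to what the rule needs. */
struct seg {
	int rst, ack;			/* control flags */
	unsigned long seq, ackno;	/* sequence and ack numbers */
	unsigned int len;		/* segment data length */
};

/* Build the RST reply for a segment that arrived for a nonexistent
   connection.  Returns 0 if no reply should be sent. */
int
make_rst(struct seg *in, struct seg *out)
{
	if (in->rst)
		return 0;		/* never reset a reset */
	out->rst = 1;
	if (in->ack) {			/* <SEQ=SEG.ACK><CTL=RST> */
		out->seq = in->ackno;
		out->ack = 0;
	} else {			/* <SEQ=0><ACK=SEG.SEQ+SEG.LEN><CTL=RST,ACK> */
		out->seq = 0;
		out->ackno = in->seq + in->len;
		out->ack = 1;
	}
	return 1;
}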

Good Luck, Mike
-----------[000014][next][prev][last][first]----------------------------------------------------
Date:      Thu, 2 May 85 19:01:22 edt
From:      James O'Toole <james@gyre>
To:        tcp-ip@sri-nic
Subject:   Sun Network Disk source code available!
*You* can have source code for the Sun Network Disk implementation!

We constructed this source code by decompiling 68000 object code into
canonical-style 4.2bsd Unix kernel source.  (It wasn't easy.)  We are
using this code in all our Sun kernels, and on two of our vaxen.  Raw
and page access to remote network disks doesn't yet work on our vaxen
because we have neglected to make other kernel changes necessary to
support this.

Please note that this has nothing to do with the Sun Network File
System (tm).

We will send you our decompiled source code and some diff listings for
vax kernel sources that we've been forced to change to support the
network disk code.  To obtain this source code just send:

	a) One copy of your SUN source code license.

	b) A letter requesting the "Network Disk Source Code."

	c) A check for $50 made out to "University of Maryland
	Foundation."

to	Diane Etchison (Diane Miller)
	University of Maryland Software Distribution
	Department of Computer Science
	University of Maryland
	College Park, Maryland 20742

  --Jim O'Toole
-----------[000015][next][prev][last][first]----------------------------------------------------
Date:      Thu, 2 May 85 20:57:25 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
Cc:        MARANTZ@RUTGERS.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  ARPAnet Ethernet gateway
Actually, for IP doing the RFNMs is easy, just send one for every packet
you get.  The hard part of the port fakers was that NCP needed to rely on
all the funny things the IMP did, which IP ignores.

-Ron
-----------[000016][next][prev][last][first]----------------------------------------------------
Date:      Sun, 5 May 85 01:10:58 edt
From:      ukma!david@anl-mcs (David Herron, NPR Lover)
To:        anlams!tcp-ip@sri-nic.ARPA (tcp-ip@sri-nic.ARPA)
Subject:   getting RFC's

I am interested in setting up a proper mail server for this university.
The problem is I don't have direct access to ARPA, so cannot simply
ftp files from SRI-NIC (where I understand the RFC's are stored).

How might I get copies?  Preferably machine-readable.

        Thank you,

        David Herron
        cbosgd!ukma!david
	ukma!david@ANL-MCS.ARPA
-----------[000017][next][prev][last][first]----------------------------------------------------
Date:      6 May 1985 11:56:30 PDT
From:      POSTEL@USC-ISIF.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   re: domain names in SMTP

Bob Stine:

Hmmm.  Looks like a problem.  I am sure the intended rule was

<name> ::= <a> [[ <ldh-str> ] <let-dig> ]
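
Read as "a letter, optionally followed by letters, digits, and hyphens,
and ending with a letter or digit", a throwaway checker for that rule
might look like this (illustrative only, not text from the MIL-STD):

#include <ctype.h>
#include <string.h>

/* Returns 1 if s is a legal <name> element under the corrected rule. */
int
valid_name(const char *s)
{
	size_t i, n = strlen(s);

	if (n == 0 || !isalpha((unsigned char)s[0]))
		return 0;		/* must start with a letter */
	if (n == 1)
		return 1;		/* a single letter, e.g. "F", is legal */
	if (!isalnum((unsigned char)s[n - 1]))
		return 0;		/* may not end with a hyphen */
	for (i = 1; i < n - 1; i++)
		if (!isalnum((unsigned char)s[i]) && s[i] != '-')
			return 0;	/* interior: letter, digit, or hyphen */
	return 1;
}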

--jon.
-------
-----------[000018][next][prev][last][first]----------------------------------------------------
Date:      6 May 1985  9:28:16 EDT
From:      Bob Stine <stine@EDN-UNIX.ARPA>
To:        tcp-ip at sri-nic.arpa
Subject:   domain names in SMTP
As I read the BNF of section 6.1.3.2 of MIL-STD 1781, the element  
(subdomain) part of a domain name which is in "<name>" format (i.e., 
neither #<number> nor <dotnum>) must be at least 3 characters long.
Is this a correct interpretation?  If so, then domain
names such as 'F.USC-ISI.ARPA' would not be legal...

Thanks,

Bob Stine

-----------[000019][next][prev][last][first]----------------------------------------------------
Date:      Tue 7 May 85 17:40:42-PDT
From:      DDN Reference <NIC@SRI-NIC.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Cc:        nic@SRI-NIC.ARPA
Subject:   Memo from Assistant Secretary of Defense
This is a reproduction of a memo written by Donald Latham of the 
office of the Assistant Secretary of Defense on the subject of
DoD's evaluation of the TP-4 protocols.

                    ASSISTANT SECRETARY OF DEFENSE
                       Washington, D.C. 20301-3040


MEMORANDUM FOR DIRECTOR, DEFENSE COMMUNICATIONS AGENCY

SUBJECT:  National Research Council Report on Transport Protocols
          for DoD Data Networks

     [RFC 942 is] ... the final report on "Transport Protocols for 
Department of Defense Data Networks" from the National Research Council
(Board on Telecommunications and Computer Applications, Commission on 
Engineering and Technical Systems).  The report recommends that DoD 
immediately adopt the International Standards Organization Transport 
Protocol (TP-4) and Internetwork Protocol (IP) as a DoD co-standard to 
the current DoD standard Transmission Control Protocol (TCP) and IP and
move ultimately toward exclusive use of TP-4.

     Whenever international standards are available and can be used to 
support military requirements, they will be implemented as rapidly as 
possible to obtain maximum economic and interoperability benefits.  
However, TP as a proven commercial offering is not available at this 
time.  The progress of TP will be monitored carefully and once 
commercially available, TP will be tested and evaluated for use in 
military applications.

     In order to insure that DoD is in a posture to evaluate TP once it
is in wider use in the commercial sector, request you initiate the 
following actions:

     (1)  develop the DoD military requirement specification for TP to
          insure that industry is aware of DoD needs as TP is 
          commercially implemented.

     (2)  insure that appropriate advisory representation is provided to
          commercial standards working groups that are currently refining
          TP under the auspices of the National Bureau of Standards.

     (3)  insure that the DCA protocol test facility can accommodate TP
          testing as required when commercial implementations are available.

     (4)  develop a transition strategy for Option 2 of the report to 
          include estimated resource requirements.

     (5)  evaluate the detailed recommendations presented in the Report
          (pages 61-64) as they apply to Option 2.

                                           Donald C. Latham

cc:  NBS, Mr. Bob Blanc
-------
-----------[000020][next][prev][last][first]----------------------------------------------------
Date:      9 May 85 10:21:32 EDT
From:      Roy <MARANTZ@RUTGERS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Thanks
I'd like to thank everyone for the great response I've gotten to my
question concerning an Arpanet/Ethernet gateway.

Roy
-------
-----------[000021][next][prev][last][first]----------------------------------------------------
Date:      9 May 1985 12:01:58 EDT
From:      MILLS@USC-ISID.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Time telling becoming painful
Folks,

While our cherished fuzzballs don't mind telling time occasionally, the
increasing frequency of TCP requests is beginning to threaten regular operations.
While it is true that DCN1.ARPA is a very good clock, accurate to some small
number of milliseconds relative to NBS, all of the other fuzzthings on DCNet
(128.4) synchronize their clocks to DCN1.ARPA and are thus no more accurate
(typically remaining within some tens of milliseconds of DCN1.ARPA). In
the case of the outback hosts (some of which are connected by weird links you
wouldn't believe), TCP requests can be particularly intrusive, especially since
these are somewhat starved for resources. Accordingly, we would much
appreciate concentrating the TCP attack on DCN1.ARPA and would like to encourage
the use of UDP in any case, since this is far less intrusive on both the net
and the serving host.
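
A minimal sketch of such a UDP query, assuming the RFC 868 Time service
on port 37 (the server string is a placeholder; error checks omitted):

#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* One empty datagram out, one 4-byte reply back: seconds since
   1 January 1900.  No connection state on either end. */
unsigned long
udp_time(const char *server)
{
	int s = socket(AF_INET, SOCK_DGRAM, 0);
	struct sockaddr_in sin;
	unsigned char buf[4];

	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(37);	/* time service */
	sin.sin_addr.s_addr = inet_addr(server);
	sendto(s, buf, 0, 0, (struct sockaddr *)&sin, sizeof(sin));
	recvfrom(s, buf, 4, 0, (struct sockaddr *)0, (socklen_t *)0);
	close(s);
	return ((unsigned long)buf[0] << 24) | (buf[1] << 16) |
	    (buf[2] << 8) | buf[3];
}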

We continue to encourage experiments in time services using the fuzzthings or
anything else that ticks and would like to hear from any other groups
engaged in similar mischief.

Dave
-------
-----------[000022][next][prev][last][first]----------------------------------------------------
Date:      13 May 1985 1340-PDT (Monday)
From:      fouts@AMES-NAS.ARPA (Marty)
To:        tcp-ip@sri-nic.ARPA
Cc:        fouts@AMES-NAS.ARPA
Subject:   "out of kernel" TCP/IP in C for new Unix machine

     I would like to do a port of TCP/IP to a new machine which runs a
variant of System V Unix.  This is a "quick and dirty" port to use as
an intermediate stage until a full BSD4.2 socket port can be done.

     Is there a public domain TCP/IP in C available which is mostly out
of the kernel that I can start with?

Marty Fouts
fouts@ames-nas

----------
-----------[000023][next][prev][last][first]----------------------------------------------------
Date:      13 May 1985 14:53:55 EDT
From:      INCO@USC-ISID.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Source for HFP

     Although this is not a TCP/IP oriented question, I thought I
would address it here anyway.  Does anyone have or know of sources
for HFP source code for UNIX written in C?  Thanks.

Steve Sutkowski
Inco at Usc-Isid
-------
-----------[000024][next][prev][last][first]----------------------------------------------------
Date:      Tue, 14 May 85 08:00:47 pdt
From:      billn@sri-unix (Bill Northlich)
To:        tcp-ip@sri-nic
Subject:   tcp on hp-3000?
Looking for pointers to tcp on an hp-3000.  Thanks.
/b
-----------[000025][next][prev][last][first]----------------------------------------------------
Date:      14 May 1985 11:43:03 PDT
From:      POSTEL@USC-ISIF.ARPA
To:        TCP-IP@SRI-NIC.ARPA
Subject:   re: TCP on HP-3000

Bill:

Try to find a copy of IEN-167.  Also you might check with people at HP.
People who might know something are Harold Seunarine and Peter Christy.

--jon.
-------
-----------[000026][next][prev][last][first]----------------------------------------------------
Date:      Tue 14 May 85 11:51:29-PDT
From:      Ole Jorgen Jacobsen <OLE@SRI-NIC.ARPA>
To:        POSTEL@USC-ISIF.ARPA
Cc:        TCP-IP@SRI-NIC.ARPA, OLE@SRI-NIC.ARPA
Subject:   re: TCP on HP-3000
There is an entry for HP-3000 in the TCP-IP Implementation Guide which
can be FTPed from [SRI-NIC]<NETINFO>TCP-IP-IMPLEMENTATIONS.TXT

Ole
-------
-----------[000027][next][prev][last][first]----------------------------------------------------
Date:      14 May 1985 0935-EDT (Tuesday)
From:      jas@proteon.arpa
To:        tcp-ip@sri-nic.ARPA
Cc:        fouts at AMES-NAS.ARPA
Subject:   Out of Kernel TCP/IP
MIT did an out of kernel TCP/IP for V6 UNIX on an 11/45 (or any I&D
machine). The kernel code includes only the minimum functionality
required in the kernel:
	> IP Fragment reassembly
	> IP header checksums
	> Demultiplexing based on IP protocol, and UDP/TCP sockets
	> Completion of IP headers on outgoing packets
	> Local-net header handling
	> Gateway cache for routing outbound packets
All of the rest of the code is in libraries that are linked with
user tasks. There are clean IP and UDP libraries, and a rather
bare-bones TCP library. The following aspects are handled by
user-level code:
	> ICMP redirect processing
	> ICMP echo processing
	> fragmenting outgoing packets
	> Name->address translation using IEN116 nameservers
	> UDP headers & checksums
The user level code includes the following application levels:
	> TFTP file transfer
	> User and server telnet (a pseudo-tty driver for the kernel
	  is provided)
	> Finger user and server
	> User and server SMTP
The device driver is for proNET. Since this is a small-address
space net, no ARP was written.

This code is available from MIT, for something like $40. Contact
lwa@mrclean.

Proteon has made a commercial version of this code, which is currently
running on VENIX/11 (a UNIX derivative resembling a cross between
V6 and V7). Some changes and improvements have been made:
	> Comments (extensive) in kernel code
	> Many fixes to the TCP, so that it can work with the
	  4.2TCP, along with a horde of other bug-fixes.
	> An IEN116 nameserver
	> Port of pty driver to V7 kernel
	> Discard user and server
	> Hostnames user
	> Nicname user
Also, the documentation has been brushed-up, but is not much more
extensive than a complete set of manual pages.

This code is distributed in source form, with different licenses
depending on whether it will be resold. Contact me (John Shriver,
jas@proteon) for details on this version.

This code has proved quite reasonably portable. The MIT version
has probably been successfully ported about 3 times. The Ethernet
ports have not had any trouble adding an ARP layer.
-------

-----------[000030][next][prev][last][first]----------------------------------------------------
Date:      14 May 1985 13:29:43 EDT
From:      MILLS@USC-ISID.ARPA
To:        billn@SRI-UNIX.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        MILLS@USC-ISID.ARPA
Subject:   Re: tcp on hp-3000?
In response to the message sent  Tue, 14 May 85 08:00:47 pdt from billn@sri-unix 

/b,

Winston Edmond at BBN did it for DARPA a couple of years back.

Dave
-------
-----------[000031][next][prev][last][first]----------------------------------------------------
Date:      14 May 1985 1818 PST
From:      Ron Tencati <TENCATI@JPL-VLSI.ARPA>
To:        tcp-ip@sri-nic.arpa
Subject:   Network route needed...

Would someone please be so kind as to reply to me directly with an answer
to my problem?  I need to add a route to hosts CSS-S3SUN, CADRE, and CADRE-SOL
which are all of the form 192.12.x.x. My mailer (VMS V3.6, Kashtan code) does
not have a route defined for these hosts, so I get the "network unreachable"
message back.  I need the proper route that I should add to my route table.

Please reply to TENCATI@JPL-VLSI.ARPA, and not to this list.  That should 
minimize getting people pissed off.

Thanks,

Ron Tencati
JPL-VLSI.ARPA
------
-----------[000032][next][prev][last][first]----------------------------------------------------
Date:      Tue, 14 May 85 14:32 EDT
From:      Winston B. Edmond <wbe@bbn-vax.ARPA>
To:        Bill Northlich <billn@sri-unix>, tcp-ip@sri-nic
Subject:   Re: tcp on hp-3000?
    Date: Tue, 14 May 85 08:00:47 pdt
    From: billn@sri-unix (Bill Northlich)
    Subject: tcp on hp-3000?
    
    Looking for pointers to tcp on an hp-3000.  Thanks.

Bill,
   I worked on a tcp/ip/1822 implementation for a DARPA HP/3000.  If
you call me at (617) 497-3416, I'll be happy to describe the state of
that software.  The best times to call are afternoons and evenings EDT.
 -WBE
-----------[000037][next][prev][last][first]----------------------------------------------------
Date:      14 May 1985 22:03-EDT
From:      CERF@USC-ISI.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   TCP on HP3000
BBN did a version for ARPA.  Jack Haverty would probably know who did it
and where the sources are.

Vint
-----------[000040][next][prev][last][first]----------------------------------------------------
Date:      Wed, 15 May 85 21:31 EDT
From:      Ira Winston <Ira%upenn.csnet@csnet-relay.arpa>
To:        tcp-ip@sri-nic.ARPA
Subject:   IP datagrams on IEEE 802.3/802.2 networks
I am involved in planning a campus-wide IEEE 802.3/802.2 based network.
Does anyone have any ideas as to how IP datagrams will appear on this
type of network?  From what I understand there are two possibilities:

1) Avoid 802.2 and use the 802.3 length field as a type field
2) Strictly adhere to 802.2/802.3 standards using the length field
   as a length field and use the 802.2 SSAP/DSAP for the type field.

Which of these methods is going to become the standard?
-----------[000041][next][prev][last][first]----------------------------------------------------
Date:      Thu, 16 May 85 02:00:27 pdt
From:      engvax!KVC@cit-vax
To:        KVC@cit-vax
I have a problem that I think many others have seen and (hopefully)
addressed.  I've set up a system allowing VMS MAIL to be sent and
received to/from foreign networks (like ARPA and UUCP).  In doing
so, I've tried to conform to RFC-822 (ARPAnet standard for message
headers).

The problem is that I have to be able to handle DECnet addresses in
mail that I'm sending out.  For example, I have one system set up as
a gateway, with the UUCP MAIL software.  All other systems in the net
can send UUCP mail by addressing mail something like:

	gate::uucp%"uucp-address-path"

Now, when I make up return addresses, if the message is being gated
with an address like that above, the return address in the message
looks like:

	reverse-uucp-address-path!node::username

This works very well except for the fact that DECnet node names do not
conform to RFC-822 because of the "::".  Does anyone out there have
any good ideas on how I can mangle the address to conform to RFC-822
but still easily handle DECnet users?  I've thought of things like
having a special cased list of DECnet hosts, but that's stupid.
Also, replacing the DECnet address with something in an "@" form
brings up the problem of left-to-right vs. right-to-left.  e.g.
what does:

	reverse-uucp-address-path!username@decnet-node

mean?  (I want it to be the same as my previous example, but a reply to
that at most sites would look for a site on the ARPAnet called "decnet-node"
and think it should send the message to:

	reverse-uucp-path!username

which is clearly incorrect.)

Doesn't RFC-822 have any mechanism for allowing foreign network addresses
in mail going into and out of the ARPAnet?  Domains seem like they may
help, but how am I supposed to justify registering every little DECnet
with the Internet people as its own domain?  Also, strictly speaking,
do UUCP address paths conform in some way to RFC-822?  Are they
just lucky in that they use "!" rather than "::"?
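
(One workaround in circulation, not proposed in this message and sketched
here only for illustration, is the "%" hack: hide the DECnet "::" inside
the local part, so RFC-822 sees only an ordinary mailbox at the gateway.
The "dnet" pseudo-domain tag below is invented.)

#include <stdio.h>

/* node::user  ==>  user%node.dnet@gateway */
void
mangle(char *node, char *user, char *gateway)
{
	printf("%s%%%s.dnet@%s\n", user, node, gateway);
}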

ggggrrrrrrrrrrrrrrrrr!!!!!!!!!!!!!!!!!!!!

	/Kevin Carosso               engvax!kcv @ CIT-VAX.ARPA
	 Hughes Aircraft Co.
-----------[000042][next][prev][last][first]----------------------------------------------------
Date:      Thu, 16 May 85 8:53:17 CDT
From:      Linda Crosby <lcrosby@ALMSA-1>
To:        TCP-IP@Sri-Nic
Subject:   HDH?
Seeking information about HDH for Vax 11/750 & 11/780 running BSD4.2
(moving to System V in near future).

Please reply directly to:

	lcrosby@almsa-1

Thank you.  				Linda J. Crosby
					Technical Liaison
					ALMSA-1

-----------[000043][next][prev][last][first]----------------------------------------------------
Date:      Thu, 16 May 85 11:50:03 edt
From:      gc@bnl (Graham Campbell)
To:        info-unix@brl, tcp-ip-unix@sri-nic, unix-wizards@brl
Cc:        gc@bnl
Subject:   MFE and BSD4.2
We have just brought up 4.2 and users trying to access MFE over Milnet/Arpanet
are having trouble with "netty".  An inquiry to MFE resulted in the statement
that it is a known 4.2 problem, but they don't know exactly what it is or
the fix for it.  Does anyone know the fix???

Graham Campbell
-----------[000044][next][prev][last][first]----------------------------------------------------
Date:      Thu 16 May 85 12:56:31-CDT
From:      Clive Dawson <AI.CLIVE@MCC.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Cc:        capshaw@MCC.ARPA, jbc@MCC.ARPA
Subject:   Sun Gateway problems
We have a DEC-20 on our corporate Ethernet which cannot establish
connections to hosts on any subnets which are gatewayed by Sun work-
stations.  The strange thing is that if the hosts on those subnets initiate
the connection to the 20, then communication is established with no problem.

One of the Suns in question is reporting 
	103104 messages < minimum length
and so we have reason to suspect that the problem has to do with the 
length of ICMP packets.  Apparently the Unix 4.2 which runs on the Suns
doesn't accept ICMP packets which are less than 8 bytes long.

Has anybody experienced this problem?  A while back there was some mention
about a large number of patches to the IP stuff in generic 4.2.  I'm hoping
that one of them addresses this.

Many thanks, 

Clive Dawson
-------
-----------[000046][next][prev][last][first]----------------------------------------------------
Date:      16 May 1985 16:18 PST
From:      Gary Krall <GARY@ACC>
To:        JCP@BRL
Cc:        ROODE@SRI-NIC,TCP-IP@SRI-NIC
Subject:   X.25 interfaces for VAX

ACC is a manufacturer of X.25 interfaces for the VAX system.  We are fully
certified with the DDN.  Essentially, the product is a "plug-compatible"
intelligent UNIBUS board which has implemented X.25 Levels 1,2 and 3
in firmware.  The Host interface is a device driver which interfaces
directly to IP.  For additional information contact me via the net,
or by phone at 805-963-9431.

In terms of VAX TCP/IP software, there are a number of vendors which
support the ACC board, as well as some which we hope to have supported
in the near term.

For VMS you can contact the Wollongong Group at 415-962-7100, or
Internet Systems at 312-853-8250.  For ULTRIX or 4.2, ACC is in the
process of releasing to beta a network driver to support those
operating systems.  For Unix V5 contact Unisoft in Berkeley, CA,
or Uniq Digital Systems in Batavia, IL.

Regards,

Gary Krall/ACC
------
-----------[000047][next][prev][last][first]----------------------------------------------------
Date:      16 May 1985 16:26 PST
From:      Gary Krall <GARY@ACC>
To:        LCROSBY@ALMSA-1
Cc:        TCP-IP@SRI-NIC
Subject:   HDH support for VAX
Linda,

ACC manufactures an interface which will attach a VAX to a C/30 IMP
supporting the HDH(1822-J) protocol.  Termed the IF-11/HDH, it is
a single hex board which is "plug-compatible" in the DEC UNIBUS.
ACC provides, with the system, a device driver which will run
under 4.2/TCP-IP.

As of this writing, the HDH system is currently not supported for
System 5.  We are talking with Uniq Digital Systems and Unisoft
to support our device under their version of System 5 which
supports TCP/IP.

Hope this gives you an overview.  If you need additional info,
please contact me via the net or by phone at 805-963-9431.

Regards,

Gary Krall/ACC
------
-----------[000049][next][prev][last][first]----------------------------------------------------
Date:      Thu, 16 May 85 13:53:44 EDT
From:      Joe Pistritto <jcp@BRL.ARPA>
To:        tcp-ip@sri-nic.ARPA
Subject:   X25 i/f for Vaxen
	Can someone out there who has a Vax interfaced to an Imp
with X.25 hardware forward me the name of the vendor of your hardware
(in the vax), and where you got the software from?

						Thanx,
						-JCP-
-----------[000052][next][prev][last][first]----------------------------------------------------
Date:      16 May 85 16:59:42 EDT (Thu)
From:      Mike Brescia <brescia@bbnccv>
To:        Clive Dawson <AI.CLIVE@mcc.ARPA>
Cc:        tcp-ip@sri-nic.ARPA, capshaw@mcc.ARPA, jbc@mcc.ARPA, brescia@bbnccv
Subject:   Re: Sun Gateway problems
We found that there was a bug in some versions of 4.2BSD which rejected ICMP
packets shorter than 48 bytes (IP header + ICMP + 20 extra bytes).
-----------[000055][next][prev][last][first]----------------------------------------------------
Date:      Thu, 16 May 85 19:25:23 edt
From:      Chris Torek <chris@gyre>
To:        AI.CLIVE@MCC.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        capshaw@MCC.ARPA, jbc@MCC.ARPA
Subject:   Re:  Sun Gateway problems
If it's the ICMP bug Jim and I found, and you have Sun source, you can
fix it by finding modules that call "m_pullup" but have already used
"mtod" on the argument to m_pullup.  Whatever was mtod'ed needs to be
mtod'ed again.

Example:

icmp_xyzzy(m)
	struct mbuf *m;
{
	struct icmp_hdr *ic = mtod(m, struct icmp_hdr *);
	... <code> ...
	if (m->m_len < sizeof (struct icmp_hdr)) {
		if ((m = m_pullup(m, sizeof (struct icmp_hdr))) == NULL) {
			/* forget it */
		}
		ic = mtod(m, struct icmp_hdr *);	/* <-- ADD THIS LINE */
	}
	... <code using ic->icmp_fields> ...

(We've reported the bug to Sun; I don't know when they'll get around to
fixing it.)

-----------[000059][next][prev][last][first]----------------------------------------------------
Date:      Thu 16 May 85 22:54:14-EDT
From:      "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
To:        Ira%upenn.csnet@CSNET-RELAY.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        JNC@MIT-XX.ARPA
Subject:   Re: IP datagrams on IEEE 802.3/802.2 networks
	I think that there was some compromise worked out where, if
the value in the type/length field is larger than the largest legal
Ethernet packet (1500), it is to be interpreted as a type field.
This covers most of the common values for that field, including
IP and ARP. This basically legalized the de facto standard, which was
that everyone was ignoring the SAP stuff.
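
As a concrete illustration (a sketch of mine, not text from the
standard; the constants are assumed here to match BSD's if_ether.h),
a receiver can classify the 16-bit field after the source address
purely by magnitude, since the common Ethernet types are all
numerically above 1500:

	#define	ETHERMTU	1500	/* largest legal Ethernet data length */
	#define	ETHERTYPE_IP	0x0800	/* 2048 > 1500, so unambiguous */
	#define	ETHERTYPE_ARP	0x0806	/* 2054 > 1500, likewise */

	/*
	 * Returns 1 if the type/length field holds a type,
	 * 0 if it holds an IEEE 802.3 length.
	 */
	int
	ether_is_type(field)
		unsigned short field;
	{
		return (field > ETHERMTU);
	}
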
	(The exact point of having an optional length field, if you
have to be able to function without it, was never made clear. But
remember, being a standards committee means never having to say 'Yow,
are we gratuitously kludgy yet?')

	Noel
-------
-----------[000061][next][prev][last][first]----------------------------------------------------
Date:      17 May 85 10:11:26 PDT (Friday)
From:      Kluger.osbunorth@Xerox.ARPA
To:        Ira%upenn.CSNet@CSNet-Relay.ARPA
Cc:        tcp-ip@sri-nic.Arpa
Subject:   Re: IP datagrams on IEEE 802.3/802.2 networks

A second to Noel's message...

There is a footnote in the adopted version of the IEEE 802.3 specification that says

	"Packets with a length field value greater than those specified in 4.4.2
	 [1518 decimal] may be ignored, discarded, or used in a private manner. 
	 The use of such packets is beyond the scope of this standard."

The "private manner" referred to is (of course) use as a type field.

Larry 
-----------[000063][next][prev][last][first]----------------------------------------------------
Date:      Thu, 23-May-85 07:15:05 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   IP over X.25 info, please...

From: Allan M. Schiffman <Schiffman@SRI-KL>

Given that one wants to connect hosts using TCP/IP over X.25-based
public packet nets, what hardware/software is available?

I know about the CSNET X25Net software (written at Purdue, right?); but
I wouldn't mind hearing about experiences with it.

I know that some network companies are preparing gateways to do this; have
you heard of any in particular where the stuff already works?

If there are any other possibilities, I'd be interested.  Not surprisingly,
the hosts to be connected are mostly Vaxen (VMS and BSD) so implementations
could be on these hosts or as (say) ethernet gateways.

Thanks,

-Allan
-------

-----------[000064][next][prev][last][first]----------------------------------------------------
Date:      Thu, 23-May-85 19:13:58 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re: IP over X.25 info, please...

From: "J. Noel Chiappa" <JNC@MIT-XX.ARPA>

	BBN does have some IP gateways that handle X.25, but I'm not
sure if they are selling them or if the software is available. Try
contacting Bob Hinden at BBN.
	You might check with Bridge. They have an X.25 <-> Ethernet
gateway for XNS, and some IP products; I'm not sure if that includes
an IP gateway for that configuration. Also, some of the digital
filtering repeaters (such as DEC's) might work over an X.25 line. (My
previous negative comments notwithstanding, here is one application;
you have 1' of Ethernet attached to an existing gateway that doesn't
handle X.25, then a filtering repeater, X.25 line, repeater, rest of
Ethernet.)

	Noel
-------

-----------[000065][next][prev][last][first]----------------------------------------------------
Date:      Fri, 24-May-85 19:10:37 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Domain registration

From: Mary Stahl <STAHL@SRI-NIC.ARPA>


A revised version of the questionnaire used for domain registration is
housed online at SRI-NIC.  The filename is NETINFO:DOMAIN-TEMPLATE.TXT
and it may be obtained via FTP.  If you are planning to register a
domain with the NIC, please use this questionnaire instead of the one
originally published in RFC 920 to submit requests.

- Mary Stahl / NIC

-------

-----------[000066][next][prev][last][first]----------------------------------------------------
Date:      Fri, 24-May-85 21:30:41 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re: Domain registration

From: Dennis R. Smith <Smith@USC-ECLC.ARPA>

When I set it up, I made nevatia the owner of /u/tb.

-----------[000067][next][prev][last][first]----------------------------------------------------
Date:      Fri, 24-May-85 22:15:12 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re: Domain registration

From: Dennis R. Smith <Smith@USC-ECLC.ARPA>

My profuse apologies to all for the preceding misdirected reply.

-----------[000068][next][prev][last][first]----------------------------------------------------
Date:      28 May 1985 1139-PDT (Tuesday)
From:      stanonik@nprdc (Ron Stanonik)
To:        tcp-ip@sri-nic
Cc:        stanonik@nprdc
Subject:   domain servers
We've recently obtained a copy of BIND, the domain server/resolver
for 4.xbsd.  We're able to query servers at berkeley, which is
not surprising since they're probably running similar (if not the
same) code.  We haven't succeeded in querying the server at sri-nic
(or usc-isib or usc-isif).  TCP connections are refused (i.e., no
server listening on port 53), and no packets are received in response
to UDP queries.  Has anyone (particularly anyone using BIND) succeeded
in querying sri-nic?  Any tricks?  Any common blunders installing BIND?
Is there a better mailing list for these questions?

Thanks,

Ron Stanonik
stanonik@nprdc
-----------[000069][next][prev][last][first]----------------------------------------------------
Date:      Tue 28 May 85 15:57:42-PDT
From:      Mark Crispin <Crispin@SUMEX-AIM.ARPA>
To:        TOPS-20@SU-SCORE.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   MILNET/ARPANET performance
Folks -

     I have spent a good bit of time feeling out Telnet
performance to TOPS-20 sites on MILNET, ARPANET, and Canada's
DRENET.  I have observed that this performance between Milnet and
ARPANET is, in a word, terrible.  There are frequent echo delays
of over a minute in duration.  By comparison, ARPANET to DRENET
performance is considerably more tolerable.

     In a number of instances, Telnet performance from an
unloaded TOPS-20 system on ARPANET to another unloaded TOPS-20
system on Milnet has been terrible enough to make serious work
nearly impossible, while access to the Milnet TOPS-20 from the
Milnet TAC was smooth and quite usable.  At times, the delays
have been long enough for the Telnet user program to declare the
connection dead.

     This is a guess, but I believe that the gateways are
throwing out a lot of packets.  Unless they've changed it, all
three networks still attempt reliable delivery of all 1822
messages, so TCP's reliable delivery is in theory not resorted to.
It is probably traffic-related, since TCP performance between
ARPANET and DRENET is tolerable in spite of the slow lines at
DRENET.

     Telnet is a worst-case test of this, due to its
character-at-a-time nature.  I wonder if the TCP retransmission parameters
need tuning depending upon whether the connection is on a
reliable network (e.g. 1822) or is going through a gateway.

-- Mark --
-------
-----------[000071][next][prev][last][first]----------------------------------------------------
Date:      Tue 28 May 85 20:39:23-PDT
From:      David Roode <ROODE@SRI-NIC.ARPA>
To:        dpk@BRL.ARPA, Crispin@SUMEX-AIM.ARPA
Cc:        TOPS-20@SU-SCORE.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Re:  MILNET/ARPANET performance
Wasn't the goal of the MILNET/ARPANET mailbridges merely to provide
mail service?  People who want to access a MILNET host can use a MILNET
TAC rather than an ARPANET host in the first place--TACs are located
in more and more places, and access is essentially added
wherever it is requested.  An awful lot of people continue to use
an ARPANET TAC merely because they happen to know its phone number.
I bet there are people on this list who do not know they can
FTP a list of MILNET TAC dialup numbers (those currently
operational) off of the host SRI-NIC.ARPA with the pathname
NETINFO:TAC-PHONES.LIST.  In fact, it might be interesting
to see what the effect on load would be if everyone who could do so
used a TAC on the proper network.
-------
-----------[000072][next][prev][last][first]----------------------------------------------------
Date:      Tue, 28 May 85 17:44 EDT
From:      "Jeffrey I. Schiller" <Schiller@MIT-MULTICS.ARPA>
To:        stanonik@NPRDC.ARPA (Ron Stanonik)
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: domain servers
I have successfully installed BIND here at MIT.  I have no problems
querying the NIC or ISIF (though sometimes they don't answer if they
don't know the name you requested).  I have also found and fixed at
least one bug so far (cache aging) in bind.  Is there a mailing list
for BIND users/hackers to communicate on?  I would like to know, too.

                    -Jeff
-----------[000073][next][prev][last][first]----------------------------------------------------
Date:      28 May 1985 18:02:42 EDT
From:      MILLS@USC-ISID.ARPA
To:        stanonik@NPRDC.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        MILLS@USC-ISID.ARPA
Subject:   Re: domain servers
In response to the message sent  28 May 1985 1139-PDT (Tuesday) from stanonik@nprdc 

Ron,

Mike O'Connor (oconnor@dcn9) has BIND up and running on our Sun here and
has joy of NIC, ISIB and ISIF UDP servers. Dinky fuzzball DCN1 (128.4.0.1)
runs a UDP server which caches the ARPA domain, but is not full-featured.

The appropriate list is namedroppers@nic.

Dave
-------
-----------[000075][next][prev][last][first]----------------------------------------------------
Date:      Tue 28 May 85 22:15:05-PDT
From:      Mark Crispin <Crispin@SUMEX-AIM.ARPA>
To:        ROODE@SRI-NIC.ARPA
Cc:        dpk@BRL.ARPA, TOPS-20@SU-SCORE.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Re:  MILNET/ARPANET performance
There are those of us who are "homed" on both Milnet and ARPANET,
and periodically need to telnet from hosts to a host on the other
network, especially when software is being sloshed around all over
the place.  This isn't random hacking, either, this is real honest
to goodness Internet business.

It is NOT acceptable to tell us that the Milnet gateways are only
for mail.  It is NOT acceptable to tell us to use a TAC until such
time as enough TAC dialups can be guaranteed.  Of a number of Milnet
TACs in this area, the only one with posted dialups is the SRI TAC,
which has all lines busy about 50% of the time.

I have local ARPANET host access, so I don't care much about local
ARPANET TAC's, but I should note that as far as I know there aren't
any public 1200 baud dialups on the local ARPANET TAC at Stanford
(and I'm not aware of any others).
-------
-----------[000077][next][prev][last][first]----------------------------------------------------
Date:      Tue, 28 May 85 20:30:08 EDT
From:      Doug Kingston <dpk@BRL.ARPA>
To:        Mark Crispin <Crispin@sumex-aim.ARPA>
Cc:        TOPS-20@su-score.ARPA, TCP-IP@sri-nic.ARPA
Subject:   Re:  MILNET/ARPANET performance
In the dark, no one can hear you scream...

I will add my voice to the list of those who have been silent in
the past about the lousy MILNET/ARPANET gatewaying service provided
by the swamped BBN 11/03 gateways.  Supposedly they are to be upgraded
to Butterflys to solve this, but how long must we wait...  And will
it really solve the problem?

			Using the network for real work,
					-Doug-
-----------[000079][next][prev][last][first]----------------------------------------------------
Date:      Tue, 28 May 1985  23:30 MDT
From:      "Frank J. Wancho" <WANCHO@SIMTEL20.ARPA>
To:        David Roode <ROODE@SRI-NIC>
Cc:        Crispin@SUMEX-AIM, dpk@BRL, TCP-IP@SRI-NIC, TOPS-20@SU-SCORE
Subject:   MILNET/ARPANET performance
David,

You have a point.  Certainly MILNET host users should use MILNET TACs
where available.  (But, given a choice between a local call to an
ARPANET TAC and a long distance call by whatever service, including
Autovon, FTS, or even WATS, to a MILNET TAC, which would you choose?)

However, I *thought* the underlying Internet philosophy is to provide
"full" interconnectivity between networks.  I see no reason for
inadequate gateways to be excused on the pretense that they are for
mail only.

The problem is more pervasive when you consider the subnets with their
own inadequate gateways, all following the ARPA/MILNET model.  If the
ARPA/MILNET gateways are to be restricted to mail only, then using
them as "proven" developed models for other gateways is misleading, to
put it mildly.

There is something wrong, and just because it is more visible with
ARPA/MILNET "mail" gateways doesn't make it any less of a problem.

--Frank
-----------[000081][next][prev][last][first]----------------------------------------------------
Date:      Tue, 28 May 85 22:52:45 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        Doug Kingston <dpk@BRL.ARPA>
Cc:        Mark Crispin <Crispin@SUMEX-AIM.ARPA>, TCP-IP@SRI-NIC.ARPA
Subject:   Re:  MILNET/ARPANET performance
What's even worse than swamping the 11/03 gateways is the rather
inane approach to EGP routing.  All EGP packets go through the one
EGP-speaking gateway, because the gateways don't communicate the
information gleaned from EGP to the MIL-ARPA bridges.  From what
I hear, the problem is DDN-PMO dragging its feet on the
matter.

In addition, the EGP gateway for MILNET is busted and trashes
a good number of packets going through it.

-Ron

-----------[000083][next][prev][last][first]----------------------------------------------------
Date:      Wed 29 May 85 01:44:29-EDT
From:      "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
To:        dpk@BRL.ARPA, Crispin@SUMEX-AIM.ARPA
Cc:        TOPS-20@SU-SCORE.ARPA, TCP-IP@SRI-NIC.ARPA, JNC@MIT-XX.ARPA
Subject:   Re:  MILNET/ARPANET performance
	I feel that I really ought to say a few words in defense of the
gateway maintainers at BBN, who I think are possibly being unjustly
maligned. (I'll try to keep this short, but it is a complex topic.
Please excuse cryptic references; I'm not trying to write a paper!)

	I'm not so sure that the real problem is in their gateways. I
don't have any exact performance figures for their gateways, but my
long experience with LSI11 gateways and MOS indicates to me that
gateways built with that technology can run at over 200 packets/second,
way fast enough to sink an IMP. I don't know if their gateways go quite
that fast, but they can probably handle packets fast enough to swamp
the ARPANet.
	I'm also not sure how much the limited number of buffers in an
LSI11 matters. When the Stanford LSI11 gateway I maintain was upgraded
to use memory mapping and have lots of buffers, the performance was
not greatly improved. (Adding something called RFNM counting did improve
it, but the BBN gateways have had that for a long time.)

	I would point at two possible causes for the problems. The
first is that the ARPANet itself is simply not designed to handle the
style of traffic load that gateways present, and I wouldn't be
surprised if it isn't overloaded anyway. (I've heard some comments from
BBN people that indicate it is.) I don't have any load measurements
from before the conversion to TCP (~1980) but I wouldn't be surprised
if it was up from then. Perhaps someone in BBN could look up some
figures? For aficionados of fine details, there is also a problem
called 'resource blocking' that active hosts run into, which there is
no way for host software to guard against. It results in all outbound
traffic freezing for 15 seconds.
	Also, there are a limited number of gateways between the two
nets; the largest share of the load is handled by 3, the ones at DCEC,
ISI and BBN. 'Well', you say, 'no problem, the IMP's work fine with the
same number of connections. Why not the gateways?' The answer is that
the IMPs cooperate among themselves much more closely, and in addition
have control over the rate at which traffic is let INTO THE NETWORK!
IMP's can always refuse to take packets from the hosts if the resources
to deal with them are not available. Gateways have no such control; they
get given the packets and have to deal with them as best they can.

	This leads on to the final point, which Mark alluded to in
his comment about 'throwing out a lot of packets'. This is precisely
what an overloaded gateway does, and in fact it is about the only
defense mechanism it has. Needless to say, this results in terrible
performance; in addition, network resources are wasted delivering the
packet to the point at which it is discarded.
	Sad to say, Mark, adjusting the timers will probably not help
much. The problem is that any retransmission algorithm is guessing
based on incomplete information; things will always be non-optimal (and
there's probably a Shannon theorem that proves it). You'll either have
lots of waits, or waste lots of resources retransmitting when you don't
need to (making things worse by using those resources).

	What the network really needs to deal with these problems are
better congestion and traffic control (the ability to regulate the
traffic flow in the system better), and a lot more information passed
back to the hosts to allow them to make optimal use of the network.
	These are all just symptoms of a deeper truth, which is that
building really big packet-switched networks is still an emerging
technology. Understanding of the problems, and proposals of new
mechanisms to handle them, are appearing, but there is still a way to go.

	Noel
-------
-----------[000088][next][prev][last][first]----------------------------------------------------
Date:      29-May-85 06:09:15-UT
From:      mills@dcn6.arpa
To:        tcp-ip@sri-nic.arpa
Subject:   The night the clocks stopped
Folks,

The evening thunderstorms tonight glitched the power at both our primary
NBS radio WWVB clock receiver at Vienna, VA, and secondary NBS radio WWV
clock receiver 25 miles away at University Park, MD, leaving our fuzzballs
unfit to set your watches to. Even the tertiary GOES satellite clock receiver
at Dearborn, MI, was unreachable because of traffic congestion due in part to
braindamaged host BBN-META trying to set its watch every five minutes around
the clock. Unfortunately, the fuzzies revert to j-random time on 1 January
1985 without comfort of at least one reachable radio fix, which apparently
made a lot of hosts nervous. A veritable onslaught was observed here as
several clock watchers stomped on first one fuzzy and then another when this
occurred. I bet a lot of files on IBM PCs up at MIT will have strange
timestamps as a result.

WWVB and WWV radio clock receivers get tummyaches in Summer when ambient
static levels are at their highest. After the glitch tonight, both the primary
and secondary clocks wandered for several hours before eventually
synchronizing on their respective transmitters near Boulder, CO. I tried to
lessen the pain by locking the DCNet swamp to an ISI host, only to discover
that host was locking to the DCNet swamp! Meanwhile, the local power grid was
sloshing to-and-fro milliseconds and glitzing the tracking filters used to
synchronize the DCNet hosts themselves in the absence of the normal
quartz-stabilized reference. All this could have been avoided if we had a UPS
at either the primary or secondary site.

While I am sorry for all those braindamaged timestamps, I again would like to
request of all our timetelling friends: please resist the urge to set your
watches from our fuzzballs with TCP. This causes much grief due to limited
connection resources - recently seven different hosts were observed
beating on the DCN1 server at the same time! Puhleeze use UDP
instead of TCP. Also, again note DCN1 is the primary source of accurate time -
all the other fuzzlings can provide time only at degraded accuracy. Finally,
will someone please toss a bomb on BBN-META?
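
To illustrate what is being asked for: an RFC 868 Time-protocol query
over UDP port 37 needs no connection at all. A minimal sketch (mine,
not fuzzball code; DCN1's address 128.4.0.1 is from an earlier message,
and error checking is omitted):

	#include <stdio.h>
	#include <sys/types.h>
	#include <sys/socket.h>
	#include <netinet/in.h>

	main()
	{
		int s = socket(AF_INET, SOCK_DGRAM, 0);
		struct sockaddr_in sin;
		unsigned char buf[4];
		unsigned long t;

		bzero((char *)&sin, sizeof (sin));
		sin.sin_family = AF_INET;
		sin.sin_port = htons(37);	/* Time protocol (RFC 868) */
		sin.sin_addr.s_addr = inet_addr("128.4.0.1");	/* DCN1 */

		/*
		 * One empty datagram solicits one 4-octet reply giving
		 * seconds since 1900; unlike TCP, no connection state
		 * is held at the server.
		 */
		sendto(s, "", 0, 0, (struct sockaddr *)&sin, sizeof (sin));
		if (recv(s, (char *)buf, sizeof (buf), 0) == 4) {
			t = ((unsigned long)buf[0] << 24) | (buf[1] << 16) |
			    (buf[2] << 8) | buf[3];
			printf("seconds since 1900: %lu\n", t);
		}
	}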

Dave
-------
-----------[000089][next][prev][last][first]----------------------------------------------------
Date:      Wed, 29 May 85 13:23:32 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        David Roode <ROODE@SRI-NIC.ARPA>
Cc:        dpk@BRL.ARPA, Crispin@SUMEX-AIM.ARPA, TOPS-20@SU-SCORE.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Re:  MILNET/ARPANET performance
The goal of the MILNET/ARPANET gateways is to interconnect the two nets.
These are the only authorized ways of getting packets between hosts on
the MILNET side of the DDN backbone to hosts on the ARPANET side.  The
reason they are called mail bridges is hopefully obsolete.  Originally
certain paranoid elements in DOD thought that those experimental people
on the ARPANET were going to do something to their network, so after spending
years having an internetwork system developed, they decided that they were
going to partition the two halves, with the exception of mail.  These
gateways were going to be a kludge that examined the TCP port number to
allow only mail packets through.

Most people have probably realized that this idea is not great.  Especially
those of us on the MILNET side who need to talk to the rest of the world.
It is apparent with a little thought that it is a whole lot easier to make
a nuisance out of yourself with mail than anything else, therefore the
blocking gateways would not help.  My personal view is that the gateways
should remain full IP gateways, and that in case of a problem or national
emergency someone at the NOC presses the "destruct gateways" button and
partitions the net.

I don't think that the TACs are loading down the gateways.  TAC's aren't
that efficient, they just don't make that many packets.  The prime TAC
loads are the silly people who are using KERMIT through them, but most
of these people stay on their own side of the chasm.  The big load, as
always, is mail.  The fact that these gateways are pretty much the same
as they were two years ago, while the net load has increased dramatically,
is a significant factor.  In addition, ever since the EGP cutover, they
don't route as efficiently as they used to.  Moreover, the entire
ARPANET/MILNET IMP complex is getting into trouble.  More and more traffic
is being pumped through it, but the trunk capacity is not being increased
as rapidly.

-Ron
-----------[000090][next][prev][last][first]----------------------------------------------------
Date:      Wed 29 May 85 14:48:18-EDT
From:      "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
To:        ron@BRL.ARPA, dpk@BRL.ARPA
Cc:        Crispin@SUMEX-AIM.ARPA, TCP-IP@SRI-NIC.ARPA, JNC@MIT-XX.ARPA
Subject:   Re:  MILNET/ARPANET performance
	Ron, I'm not sure I completely believe that one either,
although there is some truth to it.

	(To explain to the rest of the list what (I think) he is
alluding to, the routing protocol the BBN gateways use among
themselves, GGP, is somewhat deficient (not really its fault since it
is an ancient protocol) and cannot advise gateways of the existence of
routes that do not pass through the gateway sending the information.
To make that a little plainer, consider the concrete example of the
MIT ARPANet gateway communicating routing info via EGP with a gateway
on the ARPANet at BBN; the BBN gateway has no way, inside GGP, of
letting a gateway on the ARPAnet at ISI know that it can get directly
to MIT by going straight to the MIT ARPANet gateway. All traffic from
ISI to MIT must go via the BBN gateway.)

	It is true that this will tend to clog up the network as
a whole by sending such packets through the ARPANet twice when once
would have done. (Solving this requires replacing GGP. As I understand
it, there was a definite decision to do this in the context of the
Butterfly upgrade. I gather the schedule for that has slipped;
I'm not sure where the responsibility lies. You can argue about whether
that was a wise decision, as opposed to spending resources in upgrading
the LSI11 gateways as a bandaid.)

	However, traffic from the MILNET to the ARPANet should not be
affected directly by this problem. It is true that the network
provides no way for a host (or gateway) to pick the optimum
MILNET/ARPANet gateway from the set available; this is because the
ARPANet looks like an atomic network at the IP routing level, when in
fact as we all know it is a set of links. For this reason it is
important for hosts to set the default ARPANet/MILNet gateway by hand,
using outside knowledge of the ARPANet topology to pick the optimal
one.
	Fixing THAT problem in some non-kludge way is yet another large
unattacked issue.

	Noel
-------
-----------[000092][next][prev][last][first]----------------------------------------------------
Date:      29 May 1985 16:12-EDT
From:      CLYNN@BBNA.ARPA
To:        Crispin@SUMEX-AIM.ARPA
Cc:        TOPS-20@SU-SCORE.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Re: MILNET/ARPANET performance
Mark, et al.,

	I have also experienced many lost packets and delays
between Milnet (ISI) and Arpanet (BBN), but I was trying to FTP
800-page data files.  They always seemed to time out.  I have
also spent a lot of time trying to make retransmissions and
flow control work a little better than they did (on TOPS20s).

	The statement that TCP reliable delivery is not resorted
to is, theoretically, false.  The Arpanet and Milnet
are both reliable 1822 networks, with a nominal limit of 8
outstanding packets between Imp/port pairs.  The gateways
redirect hosts to send packets to the gateway nearest to the
sending host.

	To see the problem, consider the following diagram.

	Milnet Imp  ---Imp--- Milnet ---Imp----  Milnet Imp  -- ISIA
	    |					      |
	  BBN GW				   ISI GW
	    |					      |
BBNA --	Arpanet Imp  --Imp--- Arpanet --Imp----  Arpanet Imp

Traffic from ISIA to BBNA goes to the local imp, through the ISI GW,
across the Arpanet, to BBNA; traffic to ISI goes through the BBN GW,
cross country via the Milnet to ISIA.  The transit time through either
net is (more or less) proportional to number of hops.  Thus it takes
longer to go from the BBN GW to ISIA (via Milnet) than from BBNA to
the BBN GW (or from the ISI GW to BBNA (via Arpanet) than from ISIA to
the ISI GW), the points where the 1822 flow control is applied.
Consequently, BBNA can reliably send packets to the BBN GW faster than
the gateway can reliably get them to ISIA -- even if there is NO other
traffic in either net.  Eventually, the packets at the gateway will
build up and the gateway will have to discard the excess packets
(sending a source quench back to the host).  I.e., assume BBNA to BBN
GW is 50ms or 8 packets per 50 ms = 160 packets per second; BBN GW to
ISIA is about 300ms or 8pkt/300ms = 26 packets per second; thus 134
packets per second down the drain.  (Note that simply switching to
faster processors, e.g., a butterfly, will not help.)
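
Spelling out that arithmetic (a sketch; the window size and delays are
just the illustrative figures above), each leg's reliable throughput is
bounded by 8 outstanding messages divided by its round-trip time, and
the gateway must discard the difference:

	#include <stdio.h>

	main()
	{
		double window = 8.0;	/* 1822 outstanding-message limit */
		double near = 0.050;	/* BBNA <-> BBN GW round trip, sec */
		double far = 0.300;	/* BBN GW <-> ISIA via Milnet, sec */
		double in = window / near;	/* ~160 packets/second in */
		double out = window / far;	/* ~26 packets/second out */

		printf("in %.1f, out %.1f, %.1f pkt/s down the drain\n",
		    in, out, in - out);
	}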

	What is needed is NOT adjustment of retransmission parameters;
what IS needed is end-to-end flow control algorithms that work, and
some specific guidelines to those who are implementing the protocols.

	There are a few things that could be done to relieve this
particular problem.  The gateways could be programmed to redirect the
hosts to the gateway nearest the destination (so-called "destination
routing", which the gateway crew is investigating).
	It isn't simple, and requires knowledge in the gateways of
	many things about topologies and delays between pairs of
	hosts -- a long way from the "stateless" gateway originally
	described.
One can also get busy and figure out how to do flow control.
	We have added code to our TOPS20s for flow control: it
	closes windows, uses estimated baud rates to limit
	outstanding packets (instead of just filling a window),
	limits the number of packets retransmitted when one is lost,
	and it both sends and processes source quenches.  Even if
	this all works, it may not help much until most of the other
	hosts take similar actions.
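
One of those pieces -- limiting outstanding packets by the estimated
path rate rather than by the advertised window -- might look roughly
like this (a sketch under stated assumptions; the names and units are
hypothetical, not the actual TOPS-20 code):

	/*
	 * Cap unacknowledged packets at what the estimated path rate
	 * can carry in one round trip, instead of filling the window.
	 */
	int
	send_limit(window_pkts, est_bps, rtt_ms, pkt_bits)
		int window_pkts, est_bps, rtt_ms, pkt_bits;
	{
		int by_rate = (est_bps / 1000) * rtt_ms / pkt_bits;

		if (by_rate < 1)
			by_rate = 1;	/* always allow one packet in flight */
		return (by_rate < window_pkts ? by_rate : window_pkts);
	}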
My solution to the FTP problem was to tell it to use a source route
through the ISI GW.
	It worked because the data was all flowing in one direction
	and because the TCP will automatically invert a received
	source route option (the FTP server didn't have to be changed).

Charlie
-----------[000095][next][prev][last][first]----------------------------------------------------
Date:      Wed, 29 May 85 19:18 EDT
From:      Steve Aliff <Aliff@MIT-MULTICS.ARPA>
To:        David Roode <ROODE@SRI-NIC.ARPA>
Cc:        dpk@BRL.ARPA, Crispin@SUMEX-AIM.ARPA, TOPS-20@SU-SCORE.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Re: MILNET/ARPANET performance
You're right.  The original brain-damaged idea was to limit gateways to
mail only.  (Although I saw several iterations with limited Telnet,
etc.) That idea seems to have been abandoned, and rightfully so, in
favor of full inter-net gateways.  I can think of several applications,
and even more user environments, where leaving one's favorite terminal
niche to find a dial-up terminal to access a TAC doesn't come close to
being a working solution. Let's find and fix the real problem and not
bring up ghastly ideas from the past.

That's the longest flame I've had recently. Apologies to all innocents
caught in the crossfire.
-----------[000097][next][prev][last][first]----------------------------------------------------
Date:      Thu, 30 May 85 00:48:33 PDT
From:      decwrl!sun!guy@Berkeley (Guy Harris)
To:        tcp-ip@Berkeley
Subject:   Re:  Sun Gateway problems
It's at least fixed in 3.0, and probably in 2.0.  BTW, what does "xyzzy"
equal in this case?  The only call to "m_pullup" I can find in ip_icmp.c is
in "icmp_input", and the code is different from your example...

	Guy Harris
-----------[000098][next][prev][last][first]----------------------------------------------------
Date:      Wed, 29 May 85 22:17:55 edt
From:      Chris Torek <chris@gyre>
To:        Crispin@SUMEX-AIM.ARPA, JNC@MIT-XX.ARPA, dpk@BRL.ARPA
Cc:        TCP-IP@SRI-NIC.ARPA, TOPS-20@SU-SCORE.ARPA
Subject:   Re:  MILNET/ARPANET performance
I might take this opportunity to note that many 4.2BSD sites are
retransmitting packets once every second, no matter what the actual
round trip ack time is; this doesn't help gateway load at all.

There is a bit of code in /sys/netinet/tcp_output.c that looks like this:

		if (SEQ_GT(tp->snd_nxt, tp->snd_max))
			tp->snd_max = tp->snd_nxt;
		
		/*
		 * Time this transmission if not a retransmittion and not
		 * currently timing anything.
		 */
		if (SEQ_GT(tp->snd_nxt, tp->snd_max) && tp->t_rtt == 0) {
			tp->t_rtt = 1;
			tp->t_rtseq = tp->snd_nxt - len;
		}

The second SEQ_GT is guaranteed to fail, thus nothing is ever timed; and
the retransmits happen at the maximum rate (1/second).

The code should be changed to:

		if (SEQ_GT(tp->snd_nxt, tp->snd_max)) {
			tp->snd_max = tp->snd_nxt;
			/*
			 * Time this transmission (it's not a retransmission)
			 * unless we're already timing something.
			 */
			if (tp->t_rtt == 0) {
				tp->t_rtt = 1;
				tp->t_rtseq = tp->snd_nxt - len;
			}
		}

(Note, Berkeley has fixed this.)  I hope most 4.2 arpa sites are reading
this. . . .

Chris
-----------[000100][next][prev][last][first]----------------------------------------------------
Date:      30 May 1985 06:29-EDT
From:      CERF@USC-ISI.ARPA
To:        WANCHO@SIMTEL20.ARPA
Cc:        ROODE@SRI-NIC.ARPA, Crispin@SUMEX-AIM.ARPA, dpk@BRL.ARPA, TCP-IP@SRI-NIC.ARPA, TOPS-20@SU-SCORE.ARPA
Subject:   Re: MILNET/ARPANET performance

Folks,

Gateway performance IS important.  Especially for DoD where the
whole point of internet was to capitalize on connectivity where
ever it could be found; in a crisis, the traffic goes where it
can.

I think the gateway performance has been decreasingly
satisfactory as the level of traffic has built up.  Clearly, the
character-echoplex requirement exacerbates matters a good deal,
and the 8 messages outstanding rule on the ARPANET and MILNET
make the problem more severe since traffic gets throttled below
the TCP/IP level as a result (the new END/END protocol in the
IMPs should help some).

Are there any hard data about gateway throughput - Dave Mills
always seems to have his hands on measurement information - how
about it, Dave?

Can BBN say anything about higher capacity gateways under
development?

Before we tar the LSI-11/03 gateways, let's try to find out where
the bottleneck is - for all I know it is other than the gateway
itself.  I remember that in the Ft.  Bragg Packet Radio
experiments we found that 8 messages outstanding were the real
bottleneck and quickly went to line-at-a-time application support
to reduce the packet rate.  This was particularly acute at Bragg
because nearly every application ran on the SAME host and the 8
message limit applied between that host (ISID) and the gateway
qua host on ARPANET.

Vint
-----------[000101][next][prev][last][first]----------------------------------------------------
Date:      30 May 1985 06:32-EDT
From:      CERF@USC-ISI.ARPA
To:        mills@DCN6.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: The night the clocks stopped
Dave,

can you reject time calls coming in via TCP or is the mere act of
rejection a resource-consuming activity which cannot be borne?

Vint
-----------[000103][next][prev][last][first]----------------------------------------------------
Date:      30 May 85 12:40:08 EDT (Thu)
From:      ljs@bbnccv
To:        CERF@usc-isi.ARPA
Cc:        WANCHO@simtel20.ARPA, ROODE@sri-nic.ARPA, Crispin@sumex-aim.ARPA, dpk@brl.ARPA, TCP-IP@sri-nic.ARPA, TOPS-20@su-score.ARPA, ljs@bbnccv
Subject:   Re: MILNET/ARPANET performance

The 8 message limit in the Arpanet and Milnet is a major problem for
gateways.  Often in our daily statistics I have seen ARPANET (or MILNET)
gateways dropping a high percentage of packets received (20%-30%) at fairly low
throughputs (50-70 packets per second), while other gateways on faster
and non-blocking networks can pass 200 packets per second with
no dropping at all.  A quick look at the daily ARPANET log often shows 
that the ARPANET (or MILNET) IMPs were blocking their interfaces during
this period.

This says that the processing power of the LSI-11 gateway is not the problem,
at least up to 200 packets per second.  Lack of buffers in the LSI-11 is
a problem, however, since short periods of interface blocking could be
smoothed over by a greater buffering capacity.  There is a project underway
to provide more buffers for the LSI-11.  We are developing a new multiprocessor
gateway which will provide even more buffers and processing
power, in addition to a new interior routing algorithm and a better algorithm
to distribute EGP information internally.  This project is being funded by
DARPA, and to my knowledge the DDN PMO has made no commitment to switch.
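
(Rough sizing, assuming blocking episodes on the order of a couple of
seconds: at the observed 50-70 packets/second arrival rate, riding one
out without loss takes something like 100-150 packet buffers, which
gives a feel for the scale of memory the upgrades need to add.)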

The new end-to-end algorithm in the IMPs will improve the situation
considerably, since the IMP will no longer block the entire interface
just because one connection is blocked.  

In addition, there are plans to put EGP in all of the mailbridges (after
the memory upgrade).  This should reduce the EGP-related problems that
MILNET sites have been seeing.

Linda Seamonson
-----------[000106][next][prev][last][first]----------------------------------------------------
Date:      Thu, 30-May-85 14:59:04 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re:  Sun Gateway problems

From: decwrl!sun!guy@BERKELEY (Guy Harris)

It's at least fixed in 3.0, and probably in 2.0.  BTW, what does "xyzzy"
equal in this case?  The only call to "m_pullup" I can find in ip_icmp.c is
in "icmp_input", and the code is different from your example...

	Guy Harris

-----------[000107][next][prev][last][first]----------------------------------------------------
Date:      30 May 85 20:06:28 PDT
From:      Murray.pa@Xerox.ARPA
To:        mills@dcn6.arpa
Cc:        Murray.pa@Xerox.ARPA, tcp-ip@sri-nic.arpa
Subject:   Re: The night the clocks stopped
We now have a UDP time server on Xerox.ARPA.

This machine gets its time from the local Ethernet. The time servers
out there are hooked up to the GOES satellite. Usually the clocks around
here are less than 30 seconds off. Occasionally, they get confused. I've
lost contact with DCN1, so I can't double check right now, but we
normally agree to within a few seconds.
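
In case anyone wants to poke at it: a minimal client sketch, assuming
the server speaks the standard time protocol (RFC 868) on UDP port 37
(an assumption on my part; adjust the port if not).  Send an empty
datagram, read back 32 bits of seconds since 1 January 1900:

	#include <stdio.h>
	#include <sys/types.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <netdb.h>

	main(argc, argv)
		int argc;
		char **argv;
	{
		struct sockaddr_in sin;
		struct hostent *hp;
		u_long t;
		int s;

		if (argc != 2 || (hp = gethostbyname(argv[1])) == 0) {
			fprintf(stderr, "usage: udptime host\n");
			exit(1);
		}
		bzero((char *)&sin, sizeof (sin));
		sin.sin_family = AF_INET;
		sin.sin_port = htons(37);	/* RFC 868 time service */
		bcopy(hp->h_addr, (char *)&sin.sin_addr, hp->h_length);
		if ((s = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
			perror("socket");
			exit(1);
		}
		sendto(s, "", 0, 0, (struct sockaddr *)&sin, sizeof (sin));
		recv(s, (char *)&t, sizeof (t), 0);
		/* RFC 868 epoch is 1900; UNIX time() counts from 1970 */
		printf("%lu seconds since 1900\n", ntohl(t));
		exit(0);
	}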
-----------[000108][next][prev][last][first]----------------------------------------------------
Date:      Thu, 30-May-85 17:33:46 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re:  MILNET/ARPANET performance

From: Ron Natalie <ron@BRL.ARPA>

The goal of the MILNET/ARPANET gateways is to interconnect the two nets.
These are the only authorized ways of getting packets from hosts on
the MILNET side of the DDN backbone to hosts on the ARPANET side.  The
reason they are called mail bridges is, with luck, obsolete.  Originally
certain paranoid elements in DOD thought that those experimental people
on the ARPANET were going to do something to their network, so after
spending years having an internetwork system developed, they decided to
partition the two halves, with the exception of mail.  These gateways
were going to be a kludge that examined the TCP port number and allowed
only mail packets to go through.

Most people have probably realized that this idea is not great, especially
those of us on the MILNET side who need to talk to the rest of the world.
It is apparent with a little thought that it is a whole lot easier to make
a nuisance of yourself with mail than with anything else, so the
blocking gateways would not help.  My personal view is that the gateways
should remain full IP gateways, and in case of problems or national emergency
someone at the NOC presses the "destruct gateways" button and partitions the
net.

I don't think that the TACs are loading down the gateways.  TACs aren't
that efficient; they just don't make that many packets.  The prime TAC
loads are the silly people who are using KERMIT through them, but most
of these people stay on their own side of the chasm.  The big load, as
always, is mail.  The fact that these gateways are pretty much the same
as they were two years ago, while the net load has increased dramatically,
is a significant factor.  In addition, ever since the EGP cutover, they
don't route as efficiently as they used to.  On top of that, the entire
ARPANET/MILNET IMP complex is getting into trouble: more and more traffic
is being pumped through it, but the trunk capacity is not being increased
as rapidly.

-Ron

-----------[000111][next][prev][last][first]----------------------------------------------------
Date:      31 May 1985 0723-PDT (Friday)
From:      stanonik@nprdc (Ron Stanonik)
To:        tcp-ip@sri-nic
Subject:   BIND
Thanks for the many (~15) replies to my question about using
the BIND domain resolver/server for 4.2bsd.  All said they
were successfully querying nic, isib, and isif.  Our problem
turned out to be faulty UDP checksums generated by 4.2bsd.
More embarrassing, the problem was well known, and a fix had
been posted.
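
For the record, the checksum in question covers more than the UDP
header and data: RFC 768 prepends a pseudo-header of the IP addresses,
protocol number, and UDP length.  A sketch of the computation as the
spec describes it (illustrative C, not the 4.2bsd code or the posted
fix):

	struct pseudohdr {
		struct	in_addr ph_src;	/* IP source address */
		struct	in_addr ph_dst;	/* IP destination address */
		u_char	ph_zero;	/* always zero */
		u_char	ph_proto;	/* IPPROTO_UDP, i.e. 17 */
		u_short	ph_len;		/* UDP length, network order */
	};

	u_short
	udpcksum(ph, p, len)
		struct pseudohdr *ph;
		register u_short *p;	/* UDP header + data */
		register int len;	/* its length in bytes */
	{
		register u_long sum = 0;
		register u_short *q = (u_short *)ph;
		register int i;

		for (i = 0; i < sizeof (*ph) / sizeof (u_short); i++)
			sum += *q++;		/* pseudo-header words */
		for (; len > 1; len -= 2)
			sum += *p++;		/* datagram words */
		if (len == 1) {			/* pad the odd byte */
			u_short last = 0;
			*(u_char *)&last = *(u_char *)p;
			sum += last;
		}
		while (sum >> 16)		/* fold the carries */
			sum = (sum & 0xffff) + (sum >> 16);
		sum = ~sum & 0xffff;
		return (sum ? (u_short)sum : 0xffff); /* 0 on the wire
						       * means "no checksum" */
	}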

Ron
stanonik@nprdc
-----------[000112][next][prev][last][first]----------------------------------------------------
Date:      Fri, 31 May 85 08:12:38 edt
From:      Chris Torek <chris@gyre>
To:        sun!guy@gyre
Cc:        tcp-ip@Berkeley
Subject:   Re:  Sun Gateway problems
I forget what the xyzzy was, which is why I used a generic word.

You don't seriously expect me to go look up the code, do you :-) ?

Chris
-----------[000114][next][prev][last][first]----------------------------------------------------
Date:      31-May-85 05:10:03-UT
From:      mills@dcn6
To:        Murray.pa@Xerox.ARPA, mills@dcn6.arpa, tcp-ip@sri-nic.arpa
Subject:   Re: The night the clocks stopped


Murray,

I dunno how you lost contact with DCN1, since you have to go through there
to get to this host. If your clock reference is anywhere close to GOES,
you should be a lot closer than "a few seconds." Mumble a TCP connection
to DCN1 on port 15 and look at the table that comes back. The "Offset"
column shows the offset, in milliseconds, between our local fuzzies. Host
15 is the WWVB clock, host 14 the WWV clock and host 11 the GOES clock.
Host 9, often far out of whack, is a 4.2bsd Sun with either a fractured
crystal or an hourglass. The dispersion in all the hosts is seldom more
than a few tens of milliseconds. There is a systematic offset between
the WWV clocks and the GOES clock of about 50 milliseconds which we have
not been able to explain.

Dave
-------
-----------[000116][next][prev][last][first]----------------------------------------------------
Date:      31-May-85 18:50:42-UT
From:      mills@dcn6.arpa
To:        tcp-ip@sri-nic.arpa
Subject:   ARPANET/MILNET performance statistics
Folks,

Responding to Vint's request, here are some relevant data covering the
ARPANET/MILNET gateway performance. The data have been extracted from the
latest weekly report produced by BBN and cover only the ARPANET/MILNET
gateways, which represent only seven out of the 38 operational BBN core
gateways. (Who knows how many non-core gateways there are out there...)

These data cover just short of a six-day period and detail the average
and peak throughputs and loss rates. The totals shown are for all 38
gateways. Comments follow the tables.

Total Throughput

GWY         RCVD           RCVD     IP       % IP         DEST   % DST
NAME        DGRAMS         BYTES    ERRORS  ERRORS       UNRCH   UNRCH
----------------------------------------------------------------------
MILARP   4,169,046   306,185,112       273   0.00%       7,153   0.17%
MILBBN   4,638,747   272,396,860       458   0.00%      30,045   0.65%
MILDCE   3,952,555   280,374,422       372   0.00%      23,747   0.60%
MILISI   5,282,635   624,869,302       779   0.01%      20,353   0.39%
MILLBL   2,896,764   175,123,126       143   0.00%       6,639   0.23%
MILSAC   2,765,136   157,981,916     1,122   0.04%      10,588   0.38%
MILSRI   2,133,985   117,968,018       169   0.00%      13,832   0.65%
----------------------------------------------------------------------
TOTALS  92,368,009 5,768,504,913 1,556,736   1.69%     190,545   0.21%

GWY         SENT           SENT    DROPPED    % DROPPED
NAME        DGRAMS         BYTES   DGRAMS        DGRAMS
-------------------------------------------------------
MILARP   4,146,989   295,751,188   101,471        2.39%
MILBBN   4,669,813   276,807,235   157,068        3.25%
MILDCE   3,942,271   284,077,034    59,404        1.48%
MILISI   5,138,585   577,311,096   247,222        4.59%
MILLBL   2,877,744   174,574,553    55,537        1.89%
MILSAC   2,792,073   165,159,590    13,393        0.48%
MILSRI   2,156,255   127,256,463    53,483        2.42%
-------------------------------------------------------
TOTALS  92,523,789 5,721,526,805 1,466,274        1.56%

Note that the load balancing, while not optimal, is not too bad. The data do
not show, of course, the extent of the double-hop inefficiencies pointed out
previously. The ARPANET/MILNET gateways see fewer IP errors than average, but
somewhat more unreachable destinations and dropped packets than average.

======================================================

Mean Throughput (per second) and Size (bytes per datagram)

GWY         RCVD         RCVD       IP         AVG BYTES
NAME        DGRAMS       BYTES      ERRORS     PER DGRAM
--------------------------------------------------------
MILARP        8.14      597.90        0.00       73.44
MILBBN        9.06      531.92        0.00       58.72
MILDCE        7.72      547.50        0.00       70.93
MILISI       10.32     1220.21        0.00      118.29
MILLBL        5.66      341.97        0.00       60.45
MILSAC        5.40      308.50        0.00       57.13
MILSRI        4.17      230.36        0.00       55.28

GWY         SENT         SENT     DROPPED     AVG BYTES
NAME        DGRAMS       BYTES    DGRAMS      PER DGRAM
-------------------------------------------------------
MILARP        8.10      577.53        0.20       71.32
MILBBN        9.12      540.53        0.31       59.28
MILDCE        7.70      554.73        0.12       72.06
MILISI       10.03     1127.34        0.48      112.35
MILLBL        5.62      340.90        0.11       60.66
MILSAC        5.45      322.51        0.03       59.15
MILSRI        4.21      248.50        0.10       59.02

These values are way below the maximum throughput of the LSI-11 gateways
(about 200 packets/sec); however, the average size is very small relative to
the maximum ARPANET/MILNET packet size of 1007 octets. One would expect the
resource crunch to be the limited buffer memory available in the present
LSI-11 implementation. Note that BBN is working actively toward a dramatic
increase in available memory, as noted previously.

======================================================

Peak Throughput (sum of datagrams/sec, input + output,
	  time is time of data collection)

GWY          TOTAL            TIME               DROP          TIME
NAME         T'PUT           OF DAY              RATE          OF DAY
------------------------------------------------------------------------
MILARP       47.28         5/24 09:16           27.26%        5/25 22:04
MILBBN       39.53         5/23 15:32           20.70%        5/24 02:18
MILDCE       36.67         5/24 08:02           26.12%        5/24 17:59
MILISI       44.45         5/23 15:02           32.39%        5/21 16:08
MILLBL       37.76         5/22 19:43           34.91%        5/24 12:02
MILSAC       36.91         5/23 13:03            5.75%        5/21 08:53
MILSRI       22.78         5/24 08:47           24.89%        5/21 16:08

Even under peak loads the gateway horsepower is not particularly taxed;
however, the buffering is obviously suffering a good deal. The times of peak
throughputs do not seem to correlate with the times of peak drop rates,
which tends to confirm that most of the drops occur in bunches under
conditions approaching congestive collapse.

The instrumentation in our gateway between the ARPANET and four local nets,
some of which are connected by medium-speed (4800/9600 bps) lines, tends to
support the above observations and conclusions. We see intense spasms on the
part of some hosts (names provided upon request) which clearly are to blame
for almost all of the congestion observed here. These hosts apparently have
been optimized to operate well on local Ethernets with small delays and tend
to bombard the long-haul paths with vast numbers of retransmissions over very
short intervals. I would bet a wadge of packets against a MicroVAX-II that the
prime cause for the braindamage is ARP and the unfortunately common
implementation that loses the first data packet during the address-resolution
cycle. If this is fixed, I bet the tendency to err on the low side of
retransmission estimates would go away.
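
The repair is cheap.  Two fragments sketching the idea (illustrative
code in the style of the 4.2bsd ARP tables; the names and details here
are mine, not quoted from any kernel): on a cache miss, park the
datagram on the incomplete entry instead of freeing it, and transmit
it when the reply completes the entry.

	/* in arpresolve(), on a cache miss: */
	if ((at->at_flags & ATF_COM) == 0) {
		if (at->at_hold)
			m_freem(at->at_hold);	/* keep only the latest */
		at->at_hold = m;		/* hold it, don't drop it */
		arpwhohas(ac, &destip);		/* broadcast the request */
		return (0);			/* caller must not free m */
	}

	/* in arpinput(), when the reply completes the entry: */
	at->at_flags |= ATF_COM;
	bcopy((caddr_t)ea->arp_sha, (caddr_t)at->at_enaddr,
	    sizeof (at->at_enaddr));
	if (at->at_hold) {
		(*ifp->if_output)(ifp, at->at_hold,
		    (struct sockaddr *)&sin);	/* now deliverable */
		at->at_hold = 0;
	}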

There are other causes of packet spasms that have been detailed in many of my
previous messages. Happily, some have gone away. The remaining symptoms
indicate continuing inefficiencies in piggybacking and send/ack policies
leading to tinygram floods (with TELNET, in particular). The sad fact is that
these problems have been carefully documented and are not hard to fix (see
the sketch below); however, it takes only a few bandits without these fixes
to torpedo performance for the entire Internet.
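
For the tinygram case the documented fix is the one in RFC 896: never
launch a new small segment while earlier data stands unacknowledged.
In outline (pseudo-C, not any particular kernel's tcp_output):

	int
	tcp_ok_to_send(tp, len)
		struct tcpcb *tp;
		int len;		/* bytes ready to send */
	{
		if (len >= tp->t_maxseg)
			return (1);	/* full segment: send it */
		if (tp->snd_nxt == tp->snd_una)
			return (1);	/* nothing outstanding: send it */
		return (0);		/* tinygram waits for the ack */
	}

On a local net the ack comes back at once and nothing changes; on a
congested long-haul path the waiting tinygrams coalesce into one larger
segment instead of a flood.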

Dave
-------

END OF DOCUMENT