The 'Security Digest' Archives (TM)

Archive: About | Browse | Search | Contributions | Feedback
Site: Help | Index | Search | Contact | Notices | Changes

ARCHIVE: TCP-IP Distribution List - Archives (1983)
DOCUMENT: TCP-IP Distribution List for October 1983 (60 messages, 32146 bytes)
SOURCE: http://securitydigest.org/exec/display?f=tcp-ip/archive/1983/10.txt&t=text/plain
NOTICE: securitydigest.org recognises the rights of all third-party works.

START OF DOCUMENT

-----------[000000][next][prev][last][first]----------------------------------------------------
Date:      3 Oct 1983 20:18:55 PDT
From:      POSTEL@USC-ISIF
To:        tcp-ip@SRI-NIC
Subject:   IP & TCP for Tandem NonStop

Tandem is developing an implementation of IP and TCP.
Contact Mike Choi (408-748-2666).

--jon.
-------
-----------[000001][next][prev][last][first]----------------------------------------------------
Date:      3 October 1983 21:23 EDT
From:      John S. Labovitz <HNIJ @ MIT-ML>
To:        tcp-ip @ SRI-NIC
Does anyone know of an implementation of TCP/IP for the TANDEM NonStop
I or II, and/or FTP, TELNET, etc., for same?

	Thanks in advance,

	John Labovitz
	HNIJ @ MIT-ML

-----------[000002][next][prev][last][first]----------------------------------------------------
Date:      4 Oct 1983 07:30:07-PDT
From:      Donald A. Norman <norman@NPRDC>
To:        tcp-ip@sri-nic
Subject:   GET ME OFF THIS MAILING LIST

HELP

I am Donald Norman, a Professor of Psychology.  I should not
be on the tcp mailing list.  I suspect the confusion arises
because of human error. Once I was norman@isi.  Then someone
else named norman got an account at ISI and guess what, he was
assigned the account "norman@isi".  Sheer incompetence on someone's
part, but given that I study human error (ONR contract, aircraft,
nuclear power plants), I figured it served me right.

The confusion was so great I had to give up my ISI account, even though
I had it first, and for about 5 years.

But now you have followed me to nprdc.   Take pity.  Take me off this list.


 Donald A. Norman     (ucbvax!sdcsvax!sdcsla!norman or norman@nprdc)
 Cognitive Science C-015
 University of California, San Diego
 La Jolla, California 92093
 (619) 452-6770


-----------[000003][next][prev][last][first]----------------------------------------------------
Date:      Tue 4 Oct 83 11:55:59-PDT
From:      Mary Stahl <STAHL@SRI-NIC>
To:        ahill@BBN-UNIX
Cc:        cak@PURDUE, tcp-ip@SRI-NIC, STAHL@SRI-NIC
Subject:   Re: Milnet split
The NIC's ARPANET interface has just recently arrived and is
undergoing testing, so please do not try to connect to us at
10.0.0.51.  When we are up on both nets, our entry in the host table
will contain both netaddresses.  In the meantime, there should be no
problem connecting to SRI-NIC to get tables or other files, whether
you are on net 10 or net 26.

- Mary
-------
-----------[000004][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4 Oct 83 11:06:20 EST
From:      Christopher A Kent <cak@Purdue>
To:        tcp-ip@nic
Subject:   Milnet split
As I was istalling the new host table, I noticed for the first time
that the NIC is going to be on "the other side" of the net from me.
Does this mean that, once the mail bridges are installed, I won't
be able to make TCP connections for host tables and such? Or will
there be a net 10 NIC, too?

Cheers,
chris
-----------[000005][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4 Oct 83 15:19:31 EST
From:      Christopher A Kent <cak@Purdue>
To:        tcp-ip@nic
Subject:   Net 10 NIC?
Thanks to all who wrote and told me that 10.0.0.51 (ST-NIC) will be the
Arpanet connection to the NIC, and that the interface just arrived and
isn't available for use yet, but when it does, it will appear as a
second address for SRI-NIC.

Cheers,
chris
-----------[000006][next][prev][last][first]----------------------------------------------------
Date:      4 Oct 1983 14:40:36 EDT (Tuesday)
From:      Alan Hill <ahill@BBN-UNIX>
To:        Christopher A Kent <cak@Purdue>
Cc:        tcp-ip@nic
Subject:   Re: Milnet split
	There is a net 10 NIC also.  Address it as 10.0.0.51.  Its name
is ST-NIC.

-Alan

-----------[000007][next][prev][last][first]----------------------------------------------------
Date:      Wed 5 Oct 83 07:50:11-PDT
From:      Jake Feinler <FEINLER@SRI-NIC>
To:        hedrick@RUTGERS, tcp-ip@SRI-NIC
Cc:        feinler@SRI-NIC, klh@SRI-NIC, stahl@SRI-NIC
Subject:   Re: Hedrick's conclusions from the pinging discussion
Charles,

I  have been reading the 'pinging' dialog as it goes along and
in your message of 8-sep-83 you state "our experience suggests that
this [updating one's tables from the NIC tables on a regular basis]
is not happening". Our experience here at the NIC is just the opposite.
We logged several thousand accesses to the NIC host name server in
August and we expect September and October to be heavier due to the
need to refresh tables due to the network split.  If you are a recipient
of the DDN-News you are aware that DCA has requested that all hosts
implement the Hostnames Server protocol (RFC 811) and the RFC is included
in the Protocol Handbook.    Further, DCA has asked BBN to register all
gateways with the NIC and to make sure that they do not assign any names
to gateways that have not been registered first.  The NIC has been designated
the official registrar for naming entities on both MILNET and ARPANET
and we are tasked with providing name service to users.  We have also
registered any information from 'foreign' nets that has been provided to
us.

There has been a lot of confusion about host name tables, and I am the
first to admit that in the past the whole issue of gateway names and
addresses and whether they were prime or dumb was very murky.  Also,
we have just gotten our new equipment installed, and once the second
interface is in place (hopefully in a couple of weeks) we will be
accessible from both MILNET and ARPANET.  I believe some of the issues
have been resolved, that our tables are the most current with respect
to ARPANET and MILNET hosts, that FTPing host name files is more tedious
than using the host name service, and that we are now providing good
service.  I urge you to use our table as the reference table for local
tables and to collaborate with us to make the service and the information
even better.  

One other piece of information in case it was missed by some of you.
We now have set up a mailbox called HOSTMASTER@SRI-NIC so that host name
info goes directly to Mary Stahl without stopping off in my mail or
in the NIC mail.  This has helped speed up the addition of new data
tremendously.  There is also a template called 'Host-approve' for persons
making changes to host names or addresses on MILNET or ARPANET.  All
changes should be reported using this template.  New hosts will not
be enabled until this template has been approved by DCA.  Although this
adds some formality to the process, it has actually worked reasonably
well in that there is now a known and published procedure and we no
longer get the info on the back of envelopes or scribbled on someone's
business card.  It also means that DCA, BBN, and the NIC are in much
better sync than was true in the past.

I hope this update of where things stand has been useful to the 
community with respect to host names in general and gateway names
in particular.  Ken Harrenstien (KLH@SRI-NIC) is the NIC contact for
the Host Name Server and Mary Stahl (Dyer) (STAHL@SRI-NIC) is the
Hostmaster (or actually mistress).  We appreciate the feedback and
discussion we have received from many of you and request that you
keep those cards and letters and host names coming in.

Regards,

Jake Feinler/NIC

P.M. (for post mortem)  Yesterday was the day the network split into
two networks - ARPANET and MILNET and I am pleased to report that
things went rather well.  The major problem we saw with respect to
host naming, etc., was that TAC users had not been informed to use
the net number in trying to log in which meant that sometimes they
could not get in.  The NIC is currently 26.0.0.73 and will also be
10.0.0.51 in the near future.  We will keep you posted on this.


J.
-------
-----------[000008][next][prev][last][first]----------------------------------------------------
Date:      5 Oct 83 14:40:10 EDT
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
To:        FEINLER@SRI-NIC.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, klh@SRI-NIC.ARPA, stahl@SRI-NIC.ARPA
Subject:   Re: Hedrick's conclusions from the pinging discussion
The claim was not that you were out of date but that apparently some
gateways were.  I concluded this for two reasons:
  - that NYU's gateway was not known by the prime gateways when I
	last tested it.  (Presumably it is now, but I am no longer
	depending upon prime gateways for routing.)
  - that one of the managers of a prime gateway (I think a BBN gateway)
	described the problem from his end, and he did not seem to be
	using the NIC host tables.
  - I said that I would be happy to hear from the manager of any gateway
	that was in fact updating itself regularly from the NIC tables.
	I have not heard from any, nor have any sent mail to the
	mailing list.
I believe I am justified in concluding from this that the gateways
do not automatically update themselves from your tables.  As I am sure
you know, N thousand accesses to your host tables does not prove that
any particular set of systems (i.e. the prime gateways) are using them.
Rutgers does update its tables regularly from yours.  We use FTP, as we
want to have the rest of the <NETINFO> directory, i.e. the RFC's and
other random stuff.  Our host and gateway tables are based on yours.  To
the host tables we add 3 additional nicknames that you did not accept
but are essential for local operation.  We change most of the entries in
the gateway table to always-up, to minimize pinging.  But we certainly
are using your work.  I have no complaints about your service.  I know
you have been working very hard to track all of the changes that are
going on.  The only question is whether the gateways are using your
work.

By the way, it turns out that this issue is not really crucial to us. We
ping only 3 selected prime gateways and other gateways that are on
alternative routes.  We would have to ping these even if the prime
gateways were completely up to date.  The purpose of pinging is not to
find routes, but rather to see whether routes are in service or not. The
only way prime gateways could help us is if they would somehow tell us
whenever another gateway went down.  This is probably not a reasonable
request.
-------
-----------[000009][next][prev][last][first]----------------------------------------------------
Date:      11 Oct 1983 00:00 EST
From:      TCP-IP@brl
To:        TCP-IP@brl
Subject:   TCP-IP Digest, Vol 2 #18
TCP/IP Digest            Tuesday, 11 Oct 1983      Volume 2 : Issue 18

Today's Topics:
        Queries about 4.2 BSD, TCP/IP, and Ethernet Availability
              Where to Get a List of TCP/IP Implementations
          Implementation of TCP/IP for TANDEM NonStop Computers
            Comments on TCP/IP on the Perkin Elmer Computers
           Looking for Low-Cost Ethernet Terminal Controllers
                    The MILNET Split: One Perspective
             After the MILNET Split:  Where will the NIC Be?
                    Do Gateways Read the NIC Tables?
      On the Undesirability of "Mail Bridges" as a Security Measure
----------------------------------------------------------------------
                  TCP/IP Digest --- The InterNet Digest
                         LIMITED DISTRIBUTION
          For Research Use Only --- Not for Public Distribution
----------------------------------------------------------------------

Date:  27 September 1983 21:22 mdt
From:  RSanders.Pascalx@denver
Subject:  4.2 BSD/TCP-IP/Ethernet queries
To:  info-micro@brl-vgr, unix-wizards@brl-vgr, tcp-ip@brl, info-pc@usc-isib

  Three questions on availablity:

1)  Is anyone implementing, or planning to implement, 4.2 BSD Unix on
    a micro - besides Sun Microsystems?

2)  Is anyone selling/implementing/planning to implement TCP/IP on
    Ethernet for the IBM-PC - besides MIT?  I believe the 3Com stuff uses XNS.

3)  Is there a commercially available Unix microsystem running TCP/IP
    on Ethernet, or can one be *easily* (no kernal hacking) put together?

  Thanks for any advice or pointers.

-- Rex    RSanders.Pascalx@Denver (Arpanet)    ucbvax!menlo70!sanders (uucp)

------------------------------


Date: 29 Sep 1983 0545-PDT (Thursday)
From: mo@LBL-CSAM
To: RSanders.Pascalx@denver
Cc: info-micro@brl-vgr, unix-wizards@brl-vgr, tcp-ip@brl, info-pc@usc-isib
Subject: Re: 4.2 BSD/TCP-IP/Ethernet queries

Unisoft has announced TCP-IP support based on Berkeley Unix code.
I don't have a phone number, but you can get it from information.
Unisoft is in Berkeley.
	-Mike O'Dell

------------------------------

Date: 27 Sep 83 13:48:10 EDT (Tue)
From: Mark Horton <cbosgd!mark@berkeley>
Subject: list of tcp/ip implementations wanted
To: tcp-ip-request@brl.ARPA

Is there a list somewhere of what TCP/IP implementations currently exist?
Also, I'd be interested in a list of "vendor supported" implementations
of TCP/IP (and, of course, for what hardware/software).  I know about
3COM and Sun, and would like to know if there are others.

If you don't know the answer to this offhand, could you please put a copy
in the next TCP-IP digest?  Thanks.

	Mark Horton
	mark@Berkeley.ARPA

[ The Network Information Center (NIC), host SRI-NIC, maintains
  an up-to-date listing of all TCP implementations
  in the file <netinfo>tcp-ip-implementations.txt
  which can be retrieved with FTP using the "anonymous" account with any
  password.  Or, mail a request for a copy to ARPA: <NIC@NIC>,
  USENET: ...!decvax!brl-bmd!nic, or phone (415)-859-3695.  -Mike ]

------------------------------

Date: 3 October 1983 21:23 EDT
From: John S. Labovitz <HNIJ@mit-ml>
To: tcp-ip@sri-nic

Does anyone know of an implementation of TCP/IP for the TANDEM NonStop
I or II, and/or FTP, TELNET, etc., for same?

	Thanks in advance,

	John Labovitz
	HNIJ @ MIT-ML

------------------------------

Date:  3 Oct 1983 20:18:55 PDT
From: POSTEL@usc-isif
Subject: IP & TCP for Tandem NonStop
To:   tcp-ip@sri-nic

Tandem is developing an implementation of IP and TCP.
Contact Mike Choi (408-748-2666).

--jon.

------------------------------

Date:     Mon, 3 Oct 83 22:38:59 EDT
From:     Doug Kingston  <dpk@BRL-VGR>
To:       tcp-ip@BRL-VGR
Subject:  TCP/IP on Perkin/Elmer

	We have a Perkin/Elmer here at BRL, and we had also heard that
TCP/IP was available for the 32xx series.  Indeed it is, but only on
RS232 lines!!  They won't talk to an Ethernet, 1822 interface, or
even a direct HDLC line between two hosts.  Essentially there is no
good way to talk to them.  TCP/IP over RS232 lines is a poor excuse
for networking for a machine like that.  I rattled their cage that
something should happen in time, but I don't know what form it will
take.  If you hope to hook you PE to the ARPANET or even a local
net with TCP/IP, good luck.  While the software is probably good
enough (or at least close), the hardware just isn't there and interfaced
to the network code.

					Cheers,
						-Doug-

------------------------------

Date: Wed, 5 Oct 83 11:43 PDT
From: Bill Croft <croft%Safe@su-score>
Subject: low cost ethernet terminal cluster controller
To: tcp-ip@brl
Cc: croft@BRL.ARPA

Does anyone sell an "inexpensive" box that allows RS232
terminals access to internet hosts via a local ethernet?
These boxes typically contain a CPU, some number of
UARTs (8 to 16) and an ethernet interface.  In PROM
(or downloaded by PROM) would be code for IP/TCP/TELNET
and simple terminal driving software.

Boxes like this are currently on the market, but with
protocols other than TCP.  It seems to me that using
single board construction and the new Ethernet chip-sets,
it should be possible to get the cost of a connection
to the internet down to a few hundred dollars.
Such a box would even be a good way to connect local
terminals to local hosts, being cheaper than a
"port selector" or running hardwired lines all over
your campus.

Stanford has a SUN based "Ethertip", but it
is currently: (1) somewhat expensive (around $6000?)
since it uses multiple boards.  (2) PUP based (instead
of TCP) at the moment.

	--Bill Croft

------------------------------

Date:     Tue, 11 Oct 83 4:30:44 EDT
From:     Mike Muuss <mike@brl-bmd>
To:       tcp-ip@brl-bmd
Subject:  The MILNET Split: One Perspective

I write this letter almost a full week from the initiation of the MILNET
split, after having spent yet another night riding shotgun on the mail
queues, trying to make sure that we re-establish connectivity before our
11-day "failed mail" timer goes off.  Most of the effort lies in running
endless series of tests to determine which hosts STILL have non-functional
routing tables between them and us.

Sadly, this digest will only be received by people who are doing things
right, so I have to resort to other techniques for getting routing tables
updated.  Perhaps if we all apply enough gentle persuasion, things
can get tidied up in a hurry.

The problem, you see, is that we at BRL have really, truely *believed*
in the viability of the InterNet concept.  Of course, we still do,
although we certainly have felt rather lonely in our little corner of
the InterNet here, only being able to communicate with a "select few".
A good thing that ONE of our machines remains connected to the backbone
(MILNET, in this case), or we would not even had any place to send
our complaints from!  All of our machines save that one are safely tucked
away behind our own local gateway, so that we can engineer our own
solutions to our communications difficulties.  And, therein lies the
rub.

To begin by giving credit where credit is due:  Mike Brescia and the
PRIME Gateway crew at BBN had their act together.  Pop a packet for
BRLNET off to a BBN Prime gateway, and things work perfectly
(except for the MILARPA IMP blowing up unexpectedly, but that's another
story).

A great deal of the difficulty seems to be that absolutely nobody
expected to find a GATEWAY on MILNET!  Ho hum;  well, here we are.
About the only people who could talk to BRLNET after the split were
hosts which didn't bother making routing decisions, and instead
use the rather pragmatic "wildcard routing" algorithm:

  "Gee, this packet isn't for anybody I know -- let's send it to BBN!"

Worked splendidly.  Now, for the rest of the world.  When half the "10"s
became "26"s, everybody diligently updated their host tables.  But,
not so many sites remembered to (usually manually) extract the
current network topology from the GATEWAY section of the NIC tables,
and to reflect those changes in their routing table entries.
I suppose that it was easy to be lulled into a false sense of security,
because most gateways stayed put.  Only about 5 moved from the ARPANET
to MILNET, and the BRL-GATEWAY was probably one of the more noticable
ones.

  "Where did our UNIX-Wizards mail go? ...."

We heard the cries, and noticed the megabytes of accumulation in our
mail queues.  (And noticed our packet counts down by more than 50%).
Want to know how Ron Natalie (my "partner in packets") and I spent
our week?  Phoning and writing to host administrators, trying to help
them figure out how to update their routing tables (a startling number
needed a good deal of help to discover what to change).  Running
tests:  Can we hit them from BRLNET2?  BRLNET?  A MILNET host?
A MILNET TAC?  How about an ARPA host?  Humbug.

(A big round of thanks to Jake and the crew at the NIC -- without the
network directory and WHOIS service, we would have been sunk.
ThankYouThankyouthankythankyouthankyouthankyouthankyou.)

TCP and IP work.  We know that, it's a fact.  But, there seems to
be an almost totally manual mechanism involved when it comes time
to "program" the IP routings.  Disapointing.  (I'd like to note in
passing that, except for loading new host tables into all our hosts,
the only thing Ron had to do was pop a new routing table into our
Gateway.  Our part was easy).  If somebody ever 'nukes the InterNet
until it glows, nothing will work.  Not unless we all take a serious
look at improving the IP routing mechanisms that exist in each and
every host.

BBN-supplied PRIME gateways for everybody probably is not the answer;
either is the long awaited EGP protocol.  But, hopefully, someday,
somebody will work it out.

I'd like to see the next few issues of the digest concentrate on
how the InterNet as an integrated communications system should
"become aware" of changes in the underlying communications configuration,
so that in the future the configuration of the network can undergo
rapid changes (planned and unplanned) and still continue operating.
Think of the flexability this affords: responding to administrative
edicts.  Government foolishness.  Natural disaster.  And yes, even *war*.

(Pardon the rather flippant tone of this message, but I've been chasing
packets across the network all night, and this is my therapy.)
			Cheers,
			 -Mike

------------------------------

Date: Tue, 4 Oct 83 11:06:20 EST
From: Christopher A Kent <cak@purdue>
To: tcp-ip@nic
Subject: Milnet split

As I was istalling the new host table, I noticed for the first time
that the NIC is going to be on "the other side" of the net from me.
Does this mean that, once the mail bridges are installed, I won't
be able to make TCP connections for host tables and such? Or will
there be a net 10 NIC, too?

Cheers,
chris

------------------------------

Date: Tue 4 Oct 83 11:55:59-PDT
From: Mary Stahl <STAHL@sri-nic>
Subject: Re: Milnet split
To: ahill@bbn-unix
cc: cak@purdue, tcp-ip@sri-nic, STAHL@sri-nic

The NIC's ARPANET interface has just recently arrived and is
undergoing testing, so please do not try to connect to us at
10.0.0.51.  When we are up on both nets, our entry in the host table
will contain both net addresses.  In the meantime, there should be no
problem connecting to SRI-NIC to get tables or other files, whether
you are on net 10 or net 26.

- Mary

------------------------------

Date: Tue, 4 Oct 83 15:19:31 EST
From: Christopher A Kent <cak@purdue>
To: tcp-ip@nic
Subject: Net 10 NIC?

Thanks to all who wrote and told me that 10.0.0.51 (ST-NIC) will be the
Arpanet connection to the NIC, and that the interface just arrived and
isn't available for use yet, but when it does, it will appear as a
second address for SRI-NIC.

Cheers,
chris

------------------------------

Date: Wed 5 Oct 83 07:50:11-PDT
From: Jake Feinler <FEINLER@sri-nic>
Subject: Re: Hedrick's conclusions from the pinging discussion
To: hedrick@rutgers, tcp-ip@sri-nic
cc: feinler@sri-nic, klh@sri-nic, stahl@sri-nic

Charles,

I  have been reading the 'pinging' dialog as it goes along and
in your message of 8-sep-83 you state "our experience suggests that
this [updating one's tables from the NIC tables on a regular basis]
is not happening". Our experience here at the NIC is just the opposite.
We logged several thousand accesses to the NIC host name server in
August and we expect September and October to be heavier due to the
need to refresh tables due to the network split.  If you are a recipient
of the DDN-News you are aware that DCA has requested that all hosts
implement the Hostnames Server protocol (RFC 811) and the RFC is included
in the Protocol Handbook.    Further, DCA has asked BBN to register all
gateways with the NIC and to make sure that they do not assign any names
to gateways that have not been registered first.  The NIC has been designated
the official registrar for naming entities on both MILNET and ARPANET
and we are tasked with providing name service to users.  We have also
registered any information from 'foreign' nets that has been provided to
us.

There has been a lot of confusion about host name tables, and I am the
first to admit that in the past the whole issue of gateway names and
addresses and whether they were prime or dumb was very murky.  Also,
we have just gotten our new equipment installed, and once the second
interface is in place (hopefully in a couple of weeks) we will be
accessible from both MILNET and ARPANET.  I believe some of the issues
have been resolved, that our tables are the most current with respect
to ARPANET and MILNET hosts, that FTPing host name files is more tedious
than using the host name service, and that we are now providing good
service.  I urge you to use our table as the reference table for local
tables and to collaborate with us to make the service and the information
even better.  

One other piece of information in case it was missed by some of you.
We now have set up a mailbox called HOSTMASTER@SRI-NIC so that host name
info goes directly to Mary Stahl without stopping off in my mail or
in the NIC mail.  This has helped speed up the addition of new data
tremendously.  There is also a template called 'Host-approve' for persons
making changes to host names or addresses on MILNET or ARPANET.  All
changes should be reported using this template.  New hosts will not
be enabled until this template has been approved by DCA.  Although this
adds some formality to the process, it has actually worked reasonably
well in that there is now a known and published procedure and we no
longer get the info on the back of envelopes or scribbled on someone's
business card.  It also means that DCA, BBN, and the NIC are in much
better sync than was true in the past.

I hope this update of where things stand has been useful to the 
community with respect to host names in general and gateway names
in particular.  Ken Harrenstien (KLH@SRI-NIC) is the NIC contact for
the Host Name Server and Mary Stahl (Dyer) (STAHL@SRI-NIC) is the
Hostmaster (or actually mistress).  We appreciate the feedback and
discussion we have received from many of you and request that you
keep those cards and letters and host names coming in.

Regards,

Jake Feinler/NIC

P.M. (for post mortem)  Yesterday was the day the network split into
two networks - ARPANET and MILNET and I am pleased to report that
things went rather well.  The major problem we saw with respect to
host naming, etc., was that TAC users had not been informed to use
the net number in trying to log in which meant that sometimes they
could not get in.  The NIC is currently 26.0.0.73 and will also be
10.0.0.51 in the near future.  We will keep you posted on this.

J.

------------------------------

Date: 5 Oct 83 14:40:10 EDT
From: Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Subject: Re: Hedrick's conclusions from the pinging discussion
To: FEINLER@SRI-NIC.ARPA
cc: tcp-ip@SRI-NIC.ARPA, klh@SRI-NIC.ARPA, stahl@SRI-NIC.ARPA

The claim was not that you were out of date but that apparently some
gateways were.  I concluded this for two reasons:
  - that NYU's gateway was not known by the prime gateways when I
	last tested it.  (Presumably it is now, but I am no longer
	depending upon prime gateways for routing.)
  - that one of the managers of a prime gateway (I think a BBN gateway)
	described the problem from his end, and he did not seem to be
	using the NIC host tables.
  - I said that I would be happy to hear from the manager of any gateway
	that was in fact updating itself regularly from the NIC tables.
	I have not heard from any, nor have any sent mail to the
	mailing list.
I believe I am justified in concluding from this that the gateways
do not automatically update themselves from your tables.  As I am sure
you know, N thousand accesses to your host tables does not prove that
any particular set of systems (i.e. the prime gateways) are using them.
Rutgers does update its tables regularly from yours.  We use FTP, as we
want to have the rest of the <NETINFO> directory, i.e. the RFC's and
other random stuff.  Our host and gateway tables are based on yours.  To
the host tables we add 3 additional nicknames that you did not accept
but are essential for local operation.  We change most of the entries in
the gateway table to always-up, to minimize pinging.  But we certainly
are using your work.  I have no complaints about your service.  I know
you have been working very hard to track all of the changes that are
going on.  The only question is whether the gateways are using your
work.

By the way, it turns out that this issue is not really crucial to us. We
ping only 3 selected prime gateways and other gateways that are on
alternative routes.  We would have to ping these even if the prime
gateways were completely up to date.  The purpose of pinging is not to
find routes, but rather to see whether routes are in service or not. The
only way prime gateways could help us is if they would somehow tell us
whenever another gateway went down.  This is probably not a reasonable
request.

------------------------------

From:     Mike Muuss <Mike@BRL>
To:       TCP-IP@BRL
Subject:  On the Undesirability of "Mail Bridges" as a Security Measure

Seeing the last few messages brings back to mind the ugly prospect
looming ever larger:  that we will not have ONE InterNet, and we will
not have TWO InterNets, but we will in fact have One-and-a-Half
InterNets, stuck together with mail-only "bridges" (ie Data Fences),
which will prevent the ARPA EXPNET and the MILNET communities from
exchanging data with each other.  In my nightmares, I see things
degenerating to much the same level of service as where the InterNet
touches on "foreign" (non-TCP) networks today.  Unable to retrieve
files, important data will be shipped as mail, and will suffer the
indelicacies of having headers and trailers slapped on it, spaces and
dots and tabs munged with, etc.  Reprehensible kludges like
UUENCODE/UUDECODE will have to become commonplace again.  It's bad
enough having to mail files to USENET, CSNET, etc; but between the
EXPNET and MILNET?  Come on!

I'm entirely in favor of separating the backbones of the two networks;
in addition to giving DCA a much greater degree of control over engineering
the MILNET portion, it also permits the ARPANET portion to do horrible
things to their IMPs, to play partitioning experiments, and generally
have enough of a reprieve from operational considerations to be able
to do meaningful experiments again.  All this is good.

Forcing the split was a good thing, too.  It polished off NCP once-and-for-all,
and it demonstrated that the IP protocol really *does* operate as claimed.
Funneling all IP communications through ``n'' gateways (n=5 at present)
is good, too.  Gets people thinking about multi-path routing algorithms,
and provides a good "safety valve", just in case there should ever be
valid military reasons for separating the networks.

I even believe that TAC access controls (TACACS) are a good thing; I
look forward to the day when (most) all the TAC phone numbers are
published, and freely availible.  But it is important not to be lulled
into a false sense of "security" by measures like TACACS and the
mail-only bridges.  Every host on the network is still required, by
regulation, to take a comprehensive approach to system security.  (The
relevant Army regulation is AR 380-380, similar regulations exist for
the other services).  Every military host is obligated to observe
sercurity procedures as carefully in normal operations as if 50,000
TACACS cards had just been issued to the public school system.  Hiding
ourselves behind mail-only bridges is only asking for trouble, later on.
Being on the MILNET isn't significantly different from offering commercial
(or AUTOVON or FTS) dial-up service, in terms of the threat posed by an
outsider trying to get in.  Now the CLASSIFIED community, that's different.
But there's none of that sort of information on the MILNET, right?

So, here is a loud plea from one (military) researcher who says
"Don't cut the lines of communication!"  An emphatic YES to
security.  Do it by the regulations!  But don't depend on partial
network connectivity as a security measure -- it won't help, and it sure
can hurt.  (Ouch!).

	Your (Civil) Servant,
	  -Mike Muuss
	   Leader, Advanced Computer Systems Team
	   U. S. Army Ballistic Research Laboratory

------------------------------

END OF TCP-IP DIGEST
********************
-----------[000010][next][prev][last][first]----------------------------------------------------
Date:      Tue, 11-Oct-83 06:17:48 EDT
From:      TCP-IP%brl@brl-bmd.UUCP (TCP-IP@brl)
To:        fa.tcp-ip
Subject:   TCP-IP Digest, Vol 2 #18


TCP/IP Digest            Tuesday, 11 Oct 1983      Volume 2 : Issue 18

Today's Topics:
        Queries about 4.2 BSD, TCP/IP, and Ethernet Availability
              Where to Get a List of TCP/IP Implementations
          Implementation of TCP/IP for TANDEM NonStop Computers
            Comments on TCP/IP on the Perkin Elmer Computers
           Looking for Low-Cost Ethernet Terminal Controllers
                    The MILNET Split: One Perspective
             After the MILNET Split:  Where will the NIC Be?
                    Do Gateways Read the NIC Tables?
      On the Undesirability of "Mail Bridges" as a Security Measure
----------------------------------------------------------------------
                  TCP/IP Digest --- The InterNet Digest
                         LIMITED DISTRIBUTION
          For Research Use Only --- Not for Public Distribution
----------------------------------------------------------------------

Date:  27 September 1983 21:22 mdt
From:  RSanders.Pascalx@denver
Subject:  4.2 BSD/TCP-IP/Ethernet queries
To:  info-micro@brl-vgr, unix-wizards@brl-vgr, tcp-ip@brl, info-pc@usc-isib

  Three questions on availablity:

1)  Is anyone implementing, or planning to implement, 4.2 BSD Unix on
    a micro - besides Sun Microsystems?

2)  Is anyone selling/implementing/planning to implement TCP/IP on
    Ethernet for the IBM-PC - besides MIT?  I believe the 3Com stuff uses XNS.

3)  Is there a commercially available Unix microsystem running TCP/IP
    on Ethernet, or can one be *easily* (no kernal hacking) put together?

  Thanks for any advice or pointers.

-- Rex    RSanders.Pascalx@Denver (Arpanet)    ucbvax!menlo70!sanders (uucp)

------------------------------


Date: 29 Sep 1983 0545-PDT (Thursday)
From: mo@LBL-CSAM
To: RSanders.Pascalx@denver
Cc: info-micro@brl-vgr, unix-wizards@brl-vgr, tcp-ip@brl, info-pc@usc-isib
Subject: Re: 4.2 BSD/TCP-IP/Ethernet queries

Unisoft has announced TCP-IP support based on Berkeley Unix code.
I don't have a phone number, but you can get it from information.
Unisoft is in Berkeley.
	-Mike O'Dell

------------------------------

Date: 27 Sep 83 13:48:10 EDT (Tue)
From: Mark Horton <cbosgd!mark@berkeley>
Subject: list of tcp/ip implementations wanted
To: tcp-ip-request@brl.ARPA

Is there a list somewhere of what TCP/IP implementations currently exist?
Also, I'd be interested in a list of "vendor supported" implementations
of TCP/IP (and, of course, for what hardware/software).  I know about
3COM and Sun, and would like to know if there are others.

If you don't know the answer to this offhand, could you please put a copy
in the next TCP-IP digest?  Thanks.

	Mark Horton
	mark@Berkeley.ARPA

[ The Network Information Center (NIC), host SRI-NIC, maintains
  an up-to-date listing of all TCP implementations
  in the file <netinfo>tcp-ip-implementations.txt
  which can be retrieved with FTP using the "anonymous" account with any
  password.  Or, mail a request for a copy to ARPA: <NIC@NIC>,
  USENET: ...!decvax!brl-bmd!nic, or phone (415)-859-3695.  -Mike ]

------------------------------

Date: 3 October 1983 21:23 EDT
From: John S. Labovitz <HNIJ@mit-ml>
To: tcp-ip@sri-nic

Does anyone know of an implementation of TCP/IP for the TANDEM NonStop
I or II, and/or FTP, TELNET, etc., for same?

	Thanks in advance,

	John Labovitz
	HNIJ @ MIT-ML

------------------------------

Date:  3 Oct 1983 20:18:55 PDT
From: POSTEL@usc-isif
Subject: IP & TCP for Tandem NonStop
To:   tcp-ip@sri-nic

Tandem is developing an implementation of IP and TCP.
Contact Mike Choi (408-748-2666).

--jon.

------------------------------

Date:     Mon, 3 Oct 83 22:38:59 EDT
From:     Doug Kingston  <dpk@BRL-VGR>
To:       tcp-ip@BRL-VGR
Subject:  TCP/IP on Perkin/Elmer

	We have a Perkin/Elmer here at BRL, and we had also heard that
TCP/IP was available for the 32xx series.  Indeed it is, but only on
RS232 lines!!  They won't talk to an Ethernet, 1822 interface, or
even a direct HDLC line between two hosts.  Essentially there is no
good way to talk to them.  TCP/IP over RS232 lines is a poor excuse
for networking for a machine like that.  I rattled their cage that
something should happen in time, but I don't know what form it will
take.  If you hope to hook you PE to the ARPANET or even a local
net with TCP/IP, good luck.  While the software is probably good
enough (or at least close), the hardware just isn't there and interfaced
to the network code.

					Cheers,
						-Doug-

------------------------------

Date: Wed, 5 Oct 83 11:43 PDT
From: Bill Croft <croft%Safe@su-score>
Subject: low cost ethernet terminal cluster controller
To: tcp-ip@brl
Cc: croft@BRL.ARPA

Does anyone sell an "inexpensive" box that allows RS232
terminals access to internet hosts via a local ethernet?
These boxes typically contain a CPU, some number of
UARTs (8 to 16) and an ethernet interface.  In PROM
(or downloaded by PROM) would be code for IP/TCP/TELNET
and simple terminal driving software.

Boxes like this are currently on the market, but with
protocols other than TCP.  It seems to me that using
single board construction and the new Ethernet chip-sets,
it should be possible to get the cost of a connection
to the internet down to a few hundred dollars.
Such a box would even be a good way to connect local
terminals to local hosts, being cheaper than a
"port selector" or running hardwired lines all over
your campus.

Stanford has a SUN based "Ethertip", but it
is currently: (1) somewhat expensive (around $6000?)
since it uses multiple boards.  (2) PUP based (instead
of TCP) at the moment.

	--Bill Croft

------------------------------

Date:     Tue, 11 Oct 83 4:30:44 EDT
From:     Mike Muuss <mike@brl-bmd>
To:       tcp-ip@brl-bmd
Subject:  The MILNET Split: One Perspective

I write this letter almost a full week from the initiation of the MILNET
split, after having spent yet another night riding shotgun on the mail
queues, trying to make sure that we re-establish connectivity before our
11-day "failed mail" timer goes off.  Most of the effort lies in running
endless series of tests to determine which hosts STILL have non-functional
routing tables between them and us.

Sadly, this digest will only be received by people who are doing things
right, so I have to resort to other techniques for getting routing tables
updated.  Perhaps if we all apply enough gentle persuasion, things
can get tidied up in a hurry.

The problem, you see, is that we at BRL have really, truely *believed*
in the viability of the InterNet concept.  Of course, we still do,
although we certainly have felt rather lonely in our little corner of
the InterNet here, only being able to communicate with a "select few".
A good thing that ONE of our machines remains connected to the backbone
(MILNET, in this case), or we would not even had any place to send
our complaints from!  All of our machines save that one are safely tucked
away behind our own local gateway, so that we can engineer our own
solutions to our communications difficulties.  And, therein lies the
rub.

To begin by giving credit where credit is due:  Mike Brescia and the
PRIME Gateway crew at BBN had their act together.  Pop a packet for
BRLNET off to a BBN Prime gateway, and things work perfectly
(except for the MILARPA IMP blowing up unexpectedly, but that's another
story).

A great deal of the difficulty seems to be that absolutely nobody
expected to find a GATEWAY on MILNET!  Ho hum;  well, here we are.
About the only people who could talk to BRLNET after the split were
hosts which didn't bother making routing decisions, and instead
use the rather pragmatic "wildcard routing" algorithm:

  "Gee, this packet isn't for anybody I know -- let's send it to BBN!"

Worked splendidly.  Now, for the rest of the world.  When half the "10"s
became "26"s, everybody diligently updated their host tables.  But,
not so many sites remembered to (usually manually) extract the
current network topology from the GATEWAY section of the NIC tables,
and to reflect those changes in their routing table entries.
I suppose that it was easy to be lulled into a false sense of security,
because most gateways stayed put.  Only about 5 moved from the ARPANET
to MILNET, and the BRL-GATEWAY was probably one of the more noticable
ones.

  "Where did our UNIX-Wizards mail go? ...."

We heard the cries, and noticed the megabytes of accumulation in our
mail queues.  (And noticed our packet counts down by more than 50%).
Want to know how Ron Natalie (my "partner in packets") and I spent
our week?  Phoning and writing to host administrators, trying to help
them figure out how to update their routing tables (a startling number
needed a good deal of help to discover what to change).  Running
tests:  Can we hit them from BRLNET2?  BRLNET?  A MILNET host?
A MILNET TAC?  How about an ARPA host?  Humbug.

(A big round of thanks to Jake and the crew at the NIC -- without the
network directory and WHOIS service, we would have been sunk.
ThankYouThankyouthankythankyouthankyouthankyouthankyou.)

TCP and IP work.  We know that, it's a fact.  But, there seems to
be an almost totally manual mechanism involved when it comes time
to "program" the IP routings.  Disapointing.  (I'd like to note in
passing that, except for loading new host tables into all our hosts,
the only thing Ron had to do was pop a new routing table into our
Gateway.  Our part was easy).  If somebody ever 'nukes the InterNet
until it glows, nothing will work.  Not unless we all take a serious
look at improving the IP routing mechanisms that exist in each and
every host.

BBN-supplied PRIME gateways for everybody probably is not the answer;
either is the long awaited EGP protocol.  But, hopefully, someday,
somebody will work it out.

I'd like to see the next few issues of the digest concentrate on
how the InterNet as an integrated communications system should
"become aware" of changes in the underlying communications configuration,
so that in the future the configuration of the network can undergo
rapid changes (planned and unplanned) and still continue operating.
Think of the flexability this affords: responding to administrative
edicts.  Government foolishness.  Natural disaster.  And yes, even *war*.

(Pardon the rather flippant tone of this message, but I've been chasing
packets across the network all night, and this is my therapy.)
			Cheers,
			 -Mike

------------------------------

Date: Tue, 4 Oct 83 11:06:20 EST
From: Christopher A Kent <cak@purdue>
To: tcp-ip@nic
Subject: Milnet split

As I was istalling the new host table, I noticed for the first time
that the NIC is going to be on "the other side" of the net from me.
Does this mean that, once the mail bridges are installed, I won't
be able to make TCP connections for host tables and such? Or will
there be a net 10 NIC, too?

Cheers,
chris

------------------------------

Date: Tue 4 Oct 83 11:55:59-PDT
From: Mary Stahl <STAHL@sri-nic>
Subject: Re: Milnet split
To: ahill@bbn-unix
cc: cak@purdue, tcp-ip@sri-nic, STAHL@sri-nic

The NIC's ARPANET interface has just recently arrived and is
undergoing testing, so please do not try to connect to us at
10.0.0.51.  When we are up on both nets, our entry in the host table
will contain both net addresses.  In the meantime, there should be no
problem connecting to SRI-NIC to get tables or other files, whether
you are on net 10 or net 26.

- Mary

------------------------------

Date: Tue, 4 Oct 83 15:19:31 EST
From: Christopher A Kent <cak@purdue>
To: tcp-ip@nic
Subject: Net 10 NIC?

Thanks to all who wrote and told me that 10.0.0.51 (ST-NIC) will be the
Arpanet connection to the NIC, and that the interface just arrived and
isn't available for use yet, but when it does, it will appear as a
second address for SRI-NIC.

Cheers,
chris

------------------------------

Date: Wed 5 Oct 83 07:50:11-PDT
From: Jake Feinler <FEINLER@sri-nic>
Subject: Re: Hedrick's conclusions from the pinging discussion
To: hedrick@rutgers, tcp-ip@sri-nic
cc: feinler@sri-nic, klh@sri-nic, stahl@sri-nic

Charles,

I  have been reading the 'pinging' dialog as it goes along and
in your message of 8-sep-83 you state "our experience suggests that
this [updating one's tables from the NIC tables on a regular basis]
is not happening". Our experience here at the NIC is just the opposite.
We logged several thousand accesses to the NIC host name server in
August and we expect September and October to be heavier due to the
need to refresh tables due to the network split.  If you are a recipient
of the DDN-News you are aware that DCA has requested that all hosts
implement the Hostnames Server protocol (RFC 811) and the RFC is included
in the Protocol Handbook.    Further, DCA has asked BBN to register all
gateways with the NIC and to make sure that they do not assign any names
to gateways that have not been registered first.  The NIC has been designated
the official registrar for naming entities on both MILNET and ARPANET
and we are tasked with providing name service to users.  We have also
registered any information from 'foreign' nets that has been provided to
us.

There has been a lot of confusion about host name tables, and I am the
first to admit that in the past the whole issue of gateway names and
addresses and whether they were prime or dumb was very murky.  Also,
we have just gotten our new equipment installed, and once the second
interface is in place (hopefully in a couple of weeks) we will be
accessible from both MILNET and ARPANET.  I believe some of the issues
have been resolved, that our tables are the most current with respect
to ARPANET and MILNET hosts, that FTPing host name files is more tedious
than using the host name service, and that we are now providing good
service.  I urge you to use our table as the reference table for local
tables and to collaborate with us to make the service and the information
even better.  

One other piece of information in case it was missed by some of you.
We now have set up a mailbox called HOSTMASTER@SRI-NIC so that host name
info goes directly to Mary Stahl without stopping off in my mail or
in the NIC mail.  This has helped speed up the addition of new data
tremendously.  There is also a template called 'Host-approve' for persons
making changes to host names or addresses on MILNET or ARPANET.  All
changes should be reported using this template.  New hosts will not
be enabled until this template has been approved by DCA.  Although this
adds some formality to the process, it has actually worked reasonably
well in that there is now a known and published procedure and we no
longer get the info on the back of envelopes or scribbled on someone's
business card.  It also means that DCA, BBN, and the NIC are in much
better sync than was true in the past.

I hope this update of where things stand has been useful to the 
community with respect to host names in general and gateway names
in particular.  Ken Harrenstien (KLH@SRI-NIC) is the NIC contact for
the Host Name Server and Mary Stahl (Dyer) (STAHL@SRI-NIC) is the
Hostmaster (or actually mistress).  We appreciate the feedback and
discussion we have received from many of you and request that you
keep those cards and letters and host names coming in.

Regards,

Jake Feinler/NIC

P.M. (for post mortem)  Yesterday was the day the network split into
two networks - ARPANET and MILNET and I am pleased to report that
things went rather well.  The major problem we saw with respect to
host naming, etc., was that TAC users had not been informed to use
the net number in trying to log in which meant that sometimes they
could not get in.  The NIC is currently 26.0.0.73 and will also be
10.0.0.51 in the near future.  We will keep you posted on this.

J.

------------------------------

Date: 5 Oct 83 14:40:10 EDT
From: Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Subject: Re: Hedrick's conclusions from the pinging discussion
To: FEINLER@SRI-NIC.ARPA
cc: tcp-ip@SRI-NIC.ARPA, klh@SRI-NIC.ARPA, stahl@SRI-NIC.ARPA

The claim was not that you were out of date but that apparently some
gateways were.  I concluded this for two reasons:
  - that NYU's gateway was not known by the prime gateways when I
	last tested it.  (Presumably it is now, but I am no longer
	depending upon prime gateways for routing.)
  - that one of the managers of a prime gateway (I think a BBN gateway)
	described the problem from his end, and he did not seem to be
	using the NIC host tables.
  - I said that I would be happy to hear from the manager of any gateway
	that was in fact updating itself regularly from the NIC tables.
	I have not heard from any, nor have any sent mail to the
	mailing list.
I believe I am justified in concluding from this that the gateways
do not automatically update themselves from your tables.  As I am sure
you know, N thousand accesses to your host tables does not prove that
any particular set of systems (i.e. the prime gateways) are using them.
Rutgers does update its tables regularly from yours.  We use FTP, as we
want to have the rest of the <NETINFO> directory, i.e. the RFC's and
other random stuff.  Our host and gateway tables are based on yours.  To
the host tables we add 3 additional nicknames that you did not accept
but are essential for local operation.  We change most of the entries in
the gateway table to always-up, to minimize pinging.  But we certainly
are using your work.  I have no complaints about your service.  I know
you have been working very hard to track all of the changes that are
going on.  The only question is whether the gateways are using your
work.

By the way, it turns out that this issue is not really crucial to us. We
ping only 3 selected prime gateways and other gateways that are on
alternative routes.  We would have to ping these even if the prime
gateways were completely up to date.  The purpose of pinging is not to
find routes, but rather to see whether routes are in service or not. The
only way prime gateways could help us is if they would somehow tell us
whenever another gateway went down.  This is probably not a reasonable
request.

------------------------------

From:     Mike Muuss <Mike@BRL>
To:       TCP-IP@BRL
Subject:  On the Undesirability of "Mail Bridges" as a Security Measure

Seeing the last few messages brings back to mind the ugly prospect
looming ever larger:  that we will not have ONE InterNet, and we will
not have TWO InterNets, but we will in fact have One-and-a-Half
InterNets, stuck together with mail-only "bridges" (ie Data Fences),
which will prevent the ARPA EXPNET and the MILNET communities from
exchanging data with each other.  In my nightmares, I see things
degenerating to much the same level of service as where the InterNet
touches on "foreign" (non-TCP) networks today.  Unable to retrieve
files, important data will be shipped as mail, and will suffer the
indelicacies of having headers and trailers slapped on it, spaces and
dots and tabs munged with, etc.  Reprehensible kludges like
UUENCODE/UUDECODE will have to become commonplace again.  It's bad
enough having to mail files to USENET, CSNET, etc; but between the
EXPNET and MILNET?  Come on!

I'm entirely in favor of separating the backbones of the two networks;
in addition to giving DCA a much greater degree of control over engineering
the MILNET portion, it also permits the ARPANET portion to do horrible
things to their IMPs, to play partitioning experiments, and generally
have enough of a reprieve from operational considerations to be able
to do meaningful experiments again.  All this is good.

Forcing the split was a good thing, too.  It polished off NCP once-and-for-all,
and it demonstrated that the IP protocol really *does* operate as claimed.
Funneling all IP communications through ``n'' gateways (n=5 at present)
is good, too.  Gets people thinking about multi-path routing algorithms,
and provides a good "safety valve", just in case there should ever be
valid military reasons for separating the networks.

I even believe that TAC access controls (TACACS) are a good thing; I
look forward to the day when (most) all the TAC phone numbers are
published, and freely availible.  But it is important not to be lulled
into a false sense of "security" by measures like TACACS and the
mail-only bridges.  Every host on the network is still required, by
regulation, to take a comprehensive approach to system security.  (The
relevant Army regulation is AR 380-380, similar regulations exist for
the other services).  Every military host is obligated to observe
sercurity procedures as carefully in normal operations as if 50,000
TACACS cards had just been issued to the public school system.  Hiding
ourselves behind mail-only bridges is only asking for trouble, later on.
Being on the MILNET isn't significantly different from offering commercial
(or AUTOVON or FTS) dial-up service, in terms of the threat posed by an
outsider trying to get in.  Now the CLASSIFIED community, that's different.
But there's none of that sort of information on the MILNET, right?

So, here is a loud plea from one (military) researcher who says
"Don't cut the lines of communication!"  An emphatic YES to
security.  Do it by the regulations!  But don't depend on partial
network connectivity as a security measure -- it won't help, and it sure
can hurt.  (Ouch!).

	Your (Civil) Servant,
	  -Mike Muuss
	   Leader, Advanced Computer Systems Team
	   U. S. Army Ballistic Research Laboratory

------------------------------

END OF TCP-IP DIGEST
********************

-----------[000011][next][prev][last][first]----------------------------------------------------
Date:      Wed 12 Oct 83 00:20:47-PDT
From:      David Roode <ROODE@SRI-NIC>
To:        tcp-ip@SRI-NIC
Subject:   SRI-NIC on both ARPANET and MILNET
As was discussed earlier on this list, SRI-NIC had only just taken
delivery of a second 1822-style IMP ("ARPANET") interface on the day
the MILNET/ARPANET split happened.  Now thanks to some quick
installation by DEC and some speedy monitor manipulation by Greg Satz
of the NIC staff, we are directly accessible as planned on both network
26 (MILNET) and network 10 (ARPANET) using the DEC-20 Multinet
monitor.  The two addresses are [10.0.0.51] and [26.0.0.73]
and they are included in the just released NIC host tables, version
313.
-------
-----------[000012][next][prev][last][first]----------------------------------------------------
Date:      Thursday, 13 October 1983 09:45 mst
From:      Vinograd@HIS-PHOENIX-MULTICS.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   TCP/IP for Tandem
I spoke with Mike and the facts are as follows. Initial testing in mid
84, with availability at end of 84. Only connection will be HDH. No
services are planned. In other words all you get is TCP/IP and you (the
user) must do Telnet etc.

Hope this clarifies it.
-----------[000013][next][prev][last][first]----------------------------------------------------
Date:      17 Oct 1983 2143-PDT
From:      Eric P. Scott <EPS at JPL-VAX>
To:        TCP-IP at NIC
Subject:   IP with DEUNA?
Does anybody have (or plan to) IP running over DEC (as opposed to
3Com, Interlan) Ethernet hardware?
					Eric P. Scott
			       Advanced Projects Support Group
			  Computer Science and Applications Section
				  Jet Propulsion Laboratory
------
-----------[000014][next][prev][last][first]----------------------------------------------------
Date:      Tue, 18 Oct 83 03:16:15 EDT
From:      salkind@nyu (Lou Salkind)
To:        EPS@JPL-VAX
Cc:        tcp-ip@nic
Subject:   Re: IP with DEUNA?
I have written a DEC DEUNA driver for 4.1c/4.2 bsd UNIX (which supports TCP/IP
and address resolution protocol on an Ethernet).

It should be easy to modify the driver to work under VMS as well (using the
EUNICE TCP/IP code).  Another possibility for VMS is to send raw packets
(via qio's) through the DEC driver; then you can run both DECnet and TCP/IP
on the same cable.

	Lou Salkind
-----------[000015][next][prev][last][first]----------------------------------------------------
Date:      18 Oct 1983 1258-PDT
From:      Eric P. Scott <EPS at JPL-VAX>
To:        TCP-IP at SRI-NIC
Subject:   Same question for Ungermann-Bass Net/One
Is anyone using (or planning to use) Ungermann-Bass Net/One
hardware for IP?  (The "un" driver supplied with 4.1c is
obsolete.)  Did you choose SDP (Simple Datagram Protocol,
80-bit addresses) or EDLS (Ethernet Data Link Service, standard
Ethernet header)?  If you chose SDP, which IP-types are you
using and how are you handling address resolution?  If you
chose EDLS, do you abandon Net/One bridges?

If you have (written) a 4.1c/4.2 DR11-W driver for "modern" U-B
NIUs I'd like to hear from you.
					-=EPS=-
------
-----------[000016][next][prev][last][first]----------------------------------------------------
Date:      18 Oct 1983 1318-PDT
From:      Eric P. Scott <EPS at JPL-VAX>
To:        TCP-IP at SRI-NIC
Subject:   Why not DECnet on top of Internet?
Face it--nothing works better for VMS-to-VMS communication than
DECnet.  Nothing can--DECnet support is "wired in" at a very
deep level.  Has anyone given \serious/ thought to what would be
required to produce a "fake VMS Ethernet driver" that would
exchange datagrams with your favorite IP module instead of a
physical interface?  The possibility exists for the creation of
a "private social club" for VAX/VMS systems that would allow
easy transfer of files with weird RMS attributes, use of the
multi-window PHONE conference utility, etc. while still
maintaining the "traditional" TCP-based mechanisms for
interoperability with the rest of the world.

					-=EPS=-
------
-----------[000017][next][prev][last][first]----------------------------------------------------
Date:      18 October 1983 12:29 EDT
From:      David C. Plummer <DCP @ MIT-MC>
To:        salkind @ NYU
Cc:        tcp-ip @ SRI-NIC, EPS @ JPL-VAX
Subject:   Re: IP with DEUNA?
    Date: Tue, 18 Oct 83 03:16:15 EDT
    From: salkind@nyu (Lou Salkind)

    It should be easy to modify the driver to work under VMS as well (using the
    EUNICE TCP/IP code).  Another possibility for VMS is to send raw packets
    (via qio's) through the DEC driver; then you can run both DECnet and TCP/IP
    on the same cable.

But the DECnet protocol-address -> Ethernet hardware address is a
direct map.  It does not use address resolution.  Indeed, you can
still run TCP/IP on the same cable using address resolution, but
there are philosophical complaints with the way DEC did things.
I've had a discussion with various people about this [DECnet on
Ethernet].  I can forward a summary to those that are interested..
-----------[000018][next][prev][last][first]----------------------------------------------------
Date:      Tue, 18 Oct 83 13:01:42 EDT
From:      salkind@nyu (Lou Salkind)
To:        DCP@MIT-MC
Cc:        EPS@JPL-VAX, tcp-ip@SRI-NIC
Subject:   Re: IP with DEUNA?
I agree that a DECnet address is mapped directly to an Ethernet address
(in fact, DEC changes the Ethernet hardware address to correspond more
closely to the DECnet address).

What I had in mind for native VMS was to establish "channels" to monitor
both the IP and ARP packet types (the VMS DEUNA driver allows you to do this).
Then I can take these raw packets and feed them into the appropriate
module (IP or address resolution).  Hopefully this will work -- we'll see.

I would be quite interested in reading your summary. 

	Lou
-----------[000019][next][prev][last][first]----------------------------------------------------
Date:      Tue, 18 Oct 83 13:15:18 EDT
From:      Howard G. Corneilson (MISD WA-Team) <hgc@ardc>
To:        tcp-ip@sri-nic
Cc:        mike@brl, wad@ardc
Subject:   [Howard G. Corne:  Re: Protocol query?]
Mike Muuss of BRL suggested I send this message to you as well.  The problem
still exists for us at ARDC (26.1.0.45).  MILARPA does not connect us to the
ARPA community.
 --------
 Howard G. Corneilson (MISD WA-Team)	<hgc@ardc>

----- Forwarded message # 1:

Date:     Wed, 12 Oct 83 0:37:34 EDT
From:     Howard G. Corneilson (MISD WA-Team) <hgc@ardc>
To:       POSTEL@Usc-Isif
cc:       ron@Brl-Vgr, postel@Brl-Vgr, mike@Brl-Vgr, hgc@ARDC, POSTEL@Usc-Isif
Subject:  Re:  Protocol query?

I don't know if this will help or not, but I believe this discussion came as a
result of this host (ARDC 26.1.0.45) not being able to reach any arpa hosts. 
I called on Mike Muuss for help.  In the discussion, Mike suggested I try our
secondary gateway bridge (MILDCEC) instead of our primary (MILARPA).  We could
get through to arpa hosts again.  We are NOT behing the BRL gareway.  The same
thing is true right now (01:10).  When I tried to telnet to arpa hosts using
MILARPA I got the message "connection closed by foreign host".  I tried
several hosts with the same result.  When I switched to our secondary bridge
(MILDCEC) I got through to those same arpa hosts with no trouble.  It seems to
me that there is some problem to get to arpa hosts through MILARPA.  I had no
trouble going to the BBN-NET through MILARPA.  As we (ARDC) have had mail
backup to arpa sites, we appreciate it if this problem can be corrected as
soon as possible.
 --------
 Howard G. Corneilson (MISD WA-Team)	<hgc@ardc>
	Commercial/FTS	(201)724-3663/4364	Autovon 880-3663/4364

----- End of forwarded messages
-----------[000020][next][prev][last][first]----------------------------------------------------
Date:      18 Oct 1983 1613-EST (Tuesday)
From:      Christopher A Kent <cak@PURDUE.ARPA>
To:        EPS@JPL-VAX.ARPA
Cc:        TCP-IP@SRI-NIC.ARPA
Subject:   Re: Why not DECnet on top of Internet?
We wanted to do something like this last summer; but we sort of wanted
it the other way around. We wanted to use a large, existing DECNET as a
tunnel for IPs. The conclusion was that it can be done, but it would
require a fair amount of work, and some of the interested parties
weren't interested.

Dave Mills has successfully used IP nets as tunnels for DECNET
datagrams, but not under VMS. I don't know the details.

Cheers,
chris
----------
-----------[000021][next][prev][last][first]----------------------------------------------------
Date:      18 Oct 83 23:13:01 PDT (Tue)
From:      gilligan@SRI-SPAM (Bob Gilligan)
To:        hgc@ardc
Cc:        mike@brl, tcp-ip@sri-nic, wad@ardc
Subject:   re: [Howard G. Corne:  Re: Protocol query?]
	The problem with ARDC <--> Arpanet host connections crashing
is that ARDC is running 4.1c BSD, which does not handle ICMP
redirect messages correctly.  In fact, recipt of an ICMP redirect
message in 4.1c will clobber any TCP connection open at the time.
One way you can verify that this is indeed happening is by setting
the kernel variable icmpprintfs to 1, either with adb or by editing
the code (it lives in /sys/netinet/ip_icmp.c) and re-building the
kernel.  Once you are running the patched kernel, every ICMP message
you receive will print a few lines of source IP address, type and
code fields, which you will quickly corrolate with closed
connections.
	As to how to solve the problem, you may have already
discovered it:  locate a gateway to the Arpanet that doesn't send
you re-direct messages.  However, since there are multiple gateways
between the Milnet and the Arpanet, there is no guarantee that they
all will not send you redirects at some time or another.  The real
solution is to acquire and install 4.2 BSD, which implements the
correct response to redirect messages (it changes the routing
table).
-----------[000022][next][prev][last][first]----------------------------------------------------
Date:      19 Oct 1983 17:40:11 EDT (Wednesday)
From:      Mike Brescia <brescia@BBN-UNIX>
To:        Howard G. Corneilson (MISD WA-Team) <hgc@ardc>
Cc:        ado@BBN-UNIX, mayersohn@BBN-UNIX, tcp-ip@sri-nic, mike@brl, wad@ardc, brescia@BBN-UNIX
Subject:   Re: [Howard G. Corne:  Re: Protocol query?]
It turns out that in addition to the 4.1c bsd unix bug, ARDC got the
additional nip that the gateways' opinion of primary path for some hosts
differs from the table published by the NIC.  The gateways' table was
generated with preliminary data about traffic patterns, while the table
sent out to the hosts was based on later data.

The change made ARDC makes to route via MILDCEC matches the tables in the
gateways.  That will allow them to work until they get their software
updated.

We will be working to get the tables in the gateways in line with the
more recent data and the list sent to the hosts.

Mike Brescia

-----------[000023][next][prev][last][first]----------------------------------------------------
Date:      19 Oct 1983 17:56:54 EDT (Wednesday)
From:      Buz Owen <ado@BBN-UNIX>
To:        Howard G. Corneilson (MISD WA-Team) <hgc@ardc>
Cc:        tcp-ip@sri-nic, mike@brl, wad@ardc, gateway@BBN-UNIX
Subject:   Re: [Howard G. Corne:  Re: Protocol query?]
Howard:  Due to a oversight in setting up of the loadsharing
tables, the mailbridges presently think that the primary homing
assignment for hosts on imp 45 is mildcec.  This will be
corrected in a day or so.  In the meantime, as you have
discovered, by sending to mildcec you can avoid regularly evoking
redirects with every ip datagram, although the gateways may still
send you redirects from time to time for reasons unrelated to
loadsharing.  Correcting your software is still the only correct
solution to the problem of redirects causing closed connections.
Buz
-----------[000024][next][prev][last][first]----------------------------------------------------
Date:      26 Oct 1983 14:52:23 PDT
From:      BRADEN@USC-ISI
To:        tcp-ip@SRI-NIC
Subject:   A Question on the TCP Max Buffer Size Option
 

 
When doing FTP data transfer, ARPANET hosts now generally send
full 576-octet packets.  Unfortunately, this makes less than
optimum use of the ARPANET, which allows a message to be 1008
octets.  TCP provides a mechanism which can be used to get full
use of ARPANET messages: expand your input buffers and send the
TCP Maxium Segment Size (MSS) option whenever you open a
connection.  If every host does this, it seems that we will make
more efficient use of the ARPANET.
 
However, the present documentation on the MSS option is ambiguous
and confusing.
 
It is confusing, because the option is sent at the TCP level,
but in most implementations it is really a statement about
the size of reassembly buffers available at the IP level.
 
It is also confusing because it isn't really the maximum size
of a TCP segment (which has always been defined to include the
TCP header).  The MSS Option refers only the the maximum amount
of user data to be included in a TCP segment.
 
Now I have a question for all you out there in TCP/IP land:
 
   Suppose my TCP sends yours a SYN packet containing the
   MSS Option, specifying a value of 536 bytes.  What is the
   largest packet that you could conceivably send me on that
   connection?
 
I think the possible answers are:
 
    (a) 536+20+20 = 576.
 
    (b) 536+20+60 = 616, since an IP header could be up to
        60 octets (due to options).  I am assuming that there
        are effectively no TCP options.
 
    (c) None of the above (Let's hear about it!).
 
______
Bob Braden
UCLA Office of Academic Computing
 


-------
-----------[000025][next][prev][last][first]----------------------------------------------------
Date:      26 Oct 1983 16:37:43-PDT
From:      Ron Stanonik <stanonik@NPRDC>
To:        bbn-tcp@bbn-unix
Cc:        tcp-ip@sri-nic, gateways@bbn-unix
Subject:   mail from arpanet tops 20 sites
We seem to be experiencing some problems receiving mail from
arpanet tops-20 sites.  (We're on the milnet).
The problems never seem to occur for small mail (<136 bytes),
always seem to occur for large mail (>13 kbytes), and seldom
occur for medium mail.  There seem to be two types of problems.

>From sumex-aim and sri-ai we receive mail with headers such as:
Delivery-Notice: While sending this message to NPRDC.ARPA, the
 SRI-AI.ARPA mailer was obliged to send this message in 50-byte
 individually Pushed segments because normal TCP stream transmission
 timed out.  This probably indicates a problem with the receiving TCP
 or SMTP server.  See your site's software support if you have any questions.

>From bbng and bbna we receive truncated copies of the same mail
every half hour, for a few days.  It appears we receive a tcp
reset.  According to bbna, they send a tcp reset because
the connection has timed out, but from our end the connection
appears normal (tcp state: established), all of our packets
have been acknowledged, and we're waiting for the remainder
of the mail body.

We didn't see these problems before the arpanet/milnet split.
Has anyone else seen these problems?  Can anyone think of any
straws we can grasp at?  (We're running bbn's tcp/ip sys8.)

Thanks,

Ron Stanonik
stanonik@nprdc
-----------[000026][next][prev][last][first]----------------------------------------------------
Date:      26 Oct 1983 18:07-EDT
From:      CLYNN@BBNA
To:        BRADEN@USC-ISI
Cc:        tcp-ip@SRI-NIC
Subject:   Re: A Question on the TCP Max Buffer Size Option
Bob,
	I, too, think that it should be an IP option (or there should
be a corresponding IP option).

	The TOPS20 hosts, being "conservative in what you send", will
not send a packet whose IP Packet Length exceeds the MSS size.  The
amount of data in the TCP will depend on the length of the headers
(including any options).

	[Do implementations which limit the TCP data (exclusive of
headers) include the SYN/FIN in the "segment"?]

Charles Lynn
-----------[000027][next][prev][last][first]----------------------------------------------------
Date:      Wed, 26 Oct 83 21:32:02 EDT
From:      Mike Muuss <mike@brl-vgr>
To:        stanonik@nprdc
Cc:        bbn-tcp@bbn-unix, tcp-ip@sri-nic, gateways@bbn-unix
Subject:   Re:  mail from arpanet tops 20 sites
We have similar problems talking between BRL (MILNET) and WASHINGTON (ARPA).
The AI-DIGESTS frequently get truncated, and often have MRC's reminder
in the header.  Something is causing packet loss, somewhere.

Also, since a few weeks BEFORE the MILNET split, we have started noticing
that our IMP is holding our RFNB line low for extended periods of time
(several seconds, sometimes minutes).  This causes our network traffic
to exhibit "bursty-ness" when this is happening.  Makes TELNET echoing
take a *long* time.  Until recently we thought that this was something
that our gateway was doing wrong (BRL-GATEWAY), but we have also begun
noticing it on our MILNET host (BRL), and the two machines use wildly
different software (both of which have been pretty stable for many months,
at least in the 1822 interface code).  This behavior could cause
our ACKs to be withheld for a long time.  In any case, the condition
always clears (a) if you wait a while, or (b) if the Host Master Ready
Relay is flapped and the interfaces resynchronize.

I'm loath to say that this might be somebody else's problem, but we
are beginning to run out of ideas.  My only hunch is that this might
be a symptom of the now infamous TOPS-20 pinging problem.  Both these
IMP ports are listed as GATEWAY entries in the NIC Table, and they
do sustain some quantity of pinging.  I wonder if our IMP is being
asked to setup/tear down too many "connections" (to carry IP datagrams
over) per minute?  Can somebody at BBN monitor our traffic for an
hour sometime during the day?

In passing, I'll note that we are IMP 29. (ABER), with 4 trunks and
4 hosts (BRL, APG-1, BRL-TAC, BRL-GATEWAY).  Our own packet counters
show that we transmit+receive on the order of 1,000,000 packets/day,
so these problems may be problems of scale.  In the remote possibility
that this is an IMP-related problem, this may very well be affecting
the MIL/ARPA gateways as well, as my intuition says that they will
be carrying even more traffic than we do.

It would be interesting to hear a definitive statement on the
frgmentation issue in the MIL/ARPA gateways, too.  If a MILNET
host sends a 1006 byte IP datagram to an ARPANET host
via a MIL/ARPA gateway, which of these events will happen?
	1)  The MIL/ARPA gateway sends a 1006 byte IP datagram on,
	2)  The MIL/ARPA gateway fragments into ~512 byte IP datagrams,
	3)  The MIL/ARPA gateway discards the datagram.
(This is important, because VAXen directly on MILNET and ARPANET
will "negotiate" a 1024 byte packet size (4.2 BSD UNIX), and will
have their IMP interface registered with an MTU of 1006;  hence,
the MIL/ARPA gateways *will* be seeing 1006-ish datagram sizes).

Networking!
 -Mike
-----------[000028][next][prev][last][first]----------------------------------------------------
Date:      27 Oct 1983 0048-PDT
From:      Craig Milo Rogers  <ROGERS@USC-ISIB.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Packet Size Selection
	The problem of Berkeley IP/TCP sending large packets ("jumbograms")
has annoyed us here at ISI, too.  Here are some of our thoughts.

1)	The selection of an appropriate maximum packet size depends upon
	a wide variety of factors.  Limitations due to the IP implentations
	of the source and destination hosts and their connected networks
	are certainly of primary importance.  However, the characteristics
	of any intermediary networks and gateways are also important.
	Type-of-service may be a consideration:  for ARPANET-type nets,
	packets below a certain size may give lower delay.  Since the
	path taken by packets between two hosts may vary over time as
	the internetwork topology varies (perhaps someday, as load varies),
	the appropriate maximum packet size may vary, too.  More
	research is needed here.

2)	The Internet architecture supposes a division of labor between
	IP and TCP.  Part of that division is that IP is to be as
	"stateless" as possible, using only the information at hand
	in a packet and a few other sources (such as a routing table
	of destinations and gateways).  The selection of the optimal
	packet size is dependent upon the expected path taken between
	two hosts over a period of time.  While IP could be more helpful
	(for example, an IP option could collect "maximum" and "optimum"
	packet size information from every host/gateway the packet
	traverses), the ultimate decision is up to the packet generators,
	such as the TCP protocol.  More research is needed here, too.

3)	The IP specifications states that jumbograms may be used only
	among consenting hosts.  One might assume that all hosts on a
	given network would be willing to handle the largest IP packet
	that will fit on the net (this assumption has proven false at
	ISI on a couple of occasions).  It is not in general possible
	to make any assumption (other than 576) when two hosts do not
	share a common network.  Furthermore, it is unsafe to believe
	a received Maximum Segment Size TCP option implies a corresponding
	IP packet size, unless you know the characteristics of the
	path between the hosts.  Here are two simple approaches to
	a solution:

	1)  If you receive a Maximum Segment Size option advertizing
	    a larger than normal value, it should be possible to
	    transmit correspondingly large IP packets, as long as
	    you fragment them down to the 576 limit before transmitting
	    them to a network.

	2)  If you receive a Maximum Segment Size option advertizing
	    a larger than normal value from a host on a directly
	    connected network, and if you agree that you own host
	    supports such packets, it should be possible to send
	    jumbograms up to the implied maximum.  Otherwise, any
	    incoming Maximum Segment Size options should be discarded,
	    and the TCP maximum segment size should be based on the
	    IP limit of 576 less headers (and a reasonable allowance for
	    IP and TCP options).

	The second solution is probably the better one.  Part of the
	problem arises because in most cases the issues and specifications
	for transmitting IP in particular lower-level networks are
	woefully underpublished.  More research (or at least, more
	documentation) is needed here.

4)	Finally, the environment in which the IP and TCP protocols
	are used has evolved since the Internet research was
	started.  It is not uncommon for personal "hosts", such as
	IBM PCs and SUN workstations, to have as much main memory
	each as the main timesharing hosts on the ARPANET did a
	decade ago.  So, engineering compromises such as the 576
	octet limit may need to be revised periodically.  My usual
	comment applies.

					Craig Milo Rogers
-------
-----------[000029][next][prev][last][first]----------------------------------------------------
Date:      26 Oct 1983 23:01:35 EDT (Wednesday)
From:      Mike Brescia <brescia@BBN-UNIX>
To:        Mike Muuss <mike@brl-vgr>
Cc:        tcp-ip@sri-nic, brescia@BBN-UNIX
Subject:   Packet size at MIL/ARPA gateways
To clear up just one point, the MIL/ARPA gateways do not fragment in either
direction.  They are prepared to receive and forward packets up to 1008 bytes
in length without alteration.

Mike

-----------[000030][next][prev][last][first]----------------------------------------------------
Date:      27 Oct 1983 at 0952-PDT
From:      dan@Sri-Tsc
To:        stanonik@Nprdc
Cc:        tcp-ip@Sri-Nic, gateways@Bbn-Unix, bbn-tcp@Bbn-Unix
Subject:   Re: mail from arpanet tops 20 sites
We also seem to be getting the symptoms you report, on our PDP 11/44's
running 2.8BSD with the 4.1aBSD TCP/IP code.  We get those same "50 byte
segment" messages from SRI-KL (TOPS-20), and keep getting truncated
messges ("sender closed connection" -- so I assume it was a reset from
afar) from host WASHINGTON.  SRI-KL, WASHINGTON, and our 11/44
(SRI-TSC) are all on the arpanet.  Haven't had time to look into it.

	-Dan Chernikoff (dan@sri-tsc)
-----------[000031][next][prev][last][first]----------------------------------------------------
Date:      Thursday, 27 October 1983 09:13 edt
From:      JSLove@MIT-MULTICS.ARPA (J. Spencer Love)
To:        Mike Muuss <mike@BRL-VGR.ARPA>
Cc:        stanonik@NPRDC.ARPA, bbn-tcp@BBN-UNIX.ARPA, tcp-ip@SRI-NIC.ARPA, gateways@BBN-UNIX.ARPA
Subject:   Re: mail from arpanet tops 20 sites
At MIT-Multics, we observed that our IMP would block us (i.e., stop
indicating that it was ready for the next bit) when we tried to send
more than 8 packets to a single HOST/IMP destination.  When this
happened, all output to the network completely stopped until our IMP was
able to send us a RFNM (ready for next message) for that HOST/IMP.  That
is, the connection resource being exhausted was not the number of
connections to different destinations, it was the number of packets
outstanding to a particular destination.  The local wisdom around here
is that this is independent of the "port expander" field in the internet
address (the 3rd byte); only the 2nd and 4th bytes are significant in
determining the connection used between the two IMP's.

It seems quite likely that you are exceeding this limit while
communicating with your gateway, since you may have many connections
open simultaneously across the gateway.  The problem may be exacerbated
by gateway congestion of the other type:  trying to keep connections
open to too many IMP's, which would delay acceptance of your 9th packet.
Of course, I haven't ruled out the effects of pinging gateways on the
network.

The Multics TCP implementation now in the field does not address this
problem, and difficulties have been observed at RADC (on the MILNET)
which we suspect are related.  However, the initial HDH implementation
(which may have changed) did not block the host; HDLC flow control
wasn't well debugged.  Instead, it crashed the IMP. Understandably, the
network control center and other users of the IMP at MIT were not
pleased.  So we implemented RFNM processing, and a new release of TCP
which incorporates this improvement will eventually get to the other
(non-HDH) Multics sites.

Since we were able to "get away" with not processing RFNM's under TCP, I
bet your implementor took the same shortcut.  I may lose the bet, but
that's what your problem sounds like.

I wonder what steps the IMP software maintainers have taken or will take
in response to the new traffic patterns created by the MILNET split.
Perhaps a little lobbying is needed to persuade planners that this
problem will not go away; it is plausible to suppose that this problem
is temporary if traffic across the gateways will be sharply diminished
by increased network segregation (e.g., SMTP-only gateways).
-----------[000032][next][prev][last][first]----------------------------------------------------
Date:      28 Oct 1983 0118-PDT
From:      Craig Milo Rogers  <ROGERS@USC-ISIB.ARPA>
To:        TCP-IP@SRI-NIC.ARPA
Subject:   Re: A Question on the TCP Max Buffer Size Option
	It just occured to me that if you want to be ultra-conservative,
then the correct formula is:

	MSS = MTU - sizeof(MAXTCPHDR) - sizeof(MAXIPHDR)

			or

	MSS = MTU - 60 - 60

	After all, there can be up to 40 octets of options in the TCP
header, too.  At the moment there aren't any options particularly
worth putting there (other than Max Segment Size in the first packet),
but perhaps someday...

	This really just points out that we are trying to use MSS
for a purpose for which it isn't well suited.  MSS and MTU are
not exactly related;  for an MTU of 576 octets, MSS can lie between
536 octets and 456 octets.  This represents a 17% difference in
TCP "data" per packet.  MSS was probably intended for "small hosts"
which wanted to limit the size of their TCP buffers below that
implied by the 576 octet convention in IP.  Given current trends
in memory prices and system configurations it seems unlikely
that any IP/TCP host will really need to limit its buffers below
the 576 boundary.

	Perhaps we can change the semantics of MSS without causing
any undue grief in our existing IP/TCP environment.  I would like to
propose that the definition of the Maximum Segment Size Option in TCP
be changed to "the maximum receive segment size, at the TCP which sends
the Maximum Segment Size option, for IP/TCP packets without any IP or
TCP options." (worded a little less clumsily).  Another sentence could
state that a connection's outgoing MSS should be reduced by the number
of IP and TCP option octets in each packet.  This provides a precise
relationship between MSS and the maximum IP packet size on a connection.

	It should not be very difficult to implement this change in
the definition of MSS.  Its probably true that some IP impelmentations
need fixing in this area, anyway.

					Craig Milo Rogers
-------
-----------[000033][next][prev][last][first]----------------------------------------------------
Date:      Fri, 28 Oct 83 1:30:38 EDT
From:      Ron Natalie <ron@brl-vgr>
To:        tcp-ip@sri-nic
Subject:   LBTP RFC
Network Hacking Group                                        R. Natalie
Request for Comments:  999                                          BRL
                                                         November, 1981

                      LETTER BOMB TRANSFER PROTOCOL

1.  INTRODUCTION

   The objective of Letter Bomb Transfer Protocol (LBTP) is to transfer
   simple opinions of disgruntled network users reliably and efficiently.
   LBTP is independant of any particular transmission or mail processing
   subsystem and requires only a moderately reliable mail transfer system.

   An important LBTP is its capability to relay these opinions across
   transport service environments.  LBTP provides a non-verbal, easily
   understood response to any number of situations such as mail systems
   who return non-decipherable error messages.

2. LBTP Model:

                          \                                           
                         *-XXX
                          /   XX
                                X
                                  X
                                   X
                                   X
                                   X                            
                               IIIIIIIII                     
                               IIIIIIIII                 
                               IIIIIIIII                    
                               IIIIIIIII                              
                               XXXXXXXXX                              
                        XXXXXXXXXXXXXXXXXXXXXXX                       
                     XXXXXXXXXXXXXXXXXXXXXXXXXXXXX                    
                  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX                 
               XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX              
              XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX             
            XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX           
           XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX          
          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX         
         XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX        
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX       
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX       
       XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX      
       XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX      
       XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX      
       XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX      
       XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX      
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX       
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX       
         XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX        
          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX         
           XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX          
            XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX           
              XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX             
               XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX              
                  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX                 
                     XXXXXXXXXXXXXXXXXXXXXXXXXXXXX                    
                        XXXXXXXXXXXXXXXXXXXXXXX                       
                               XXXXXXXXX                              
                                                                      
                                                                      
                                                                      
                                                                      
                                                                      
-----------[000034][next][prev][last][first]----------------------------------------------------
Date:      Fri 28 Oct 83 14:08:16-PDT
From:      Joseph I. Pallas <PALLAS@SU-SCORE.ARPA>
To:        louie@UMD3.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, petry@UMD2.ARPA
Subject:   Re: TELNET End of line
The TELNET specification makes the explicit assumption that BOTH ends of a
TELNET connection are NVTs.  This requires that both ends transmit CR LF
for a "newline."

The problem is that the NVT definition specifies the effect of a
newline on the virtual printer, but does not define the mapping from
an ASCII keyboard to the virtual terminal's newline key.  Most user
TELNET programs do not map any key to newline when in "full duplex"
mode (server WILL ECHO), and map the local equivalent of newline (CR
on most systems, LF or both on some) to newline in the default (server
WON'T ECHO) mode.  It seems that most TELNET implementations completely
discard the NVT model in full-duplex mode, since the character-at-a-time
transmission pattern is incompatible with almost every other aspect of
the NVT specification.

This may be the cause of the problem--there is no TELNET negotiation
for transmission pattern, so the de facto standard has arisen of the
user TELNET piggy-backing the transmission pattern onto the
remote-echo option.  Obviously, an explicit LINE-AT-A-TIME transmission
pattern (as the default) would /require/ the line to be terminated with
a TELNET newline sequence.  And, equally obviously, a CHAR-AT-A-TIME
pattern would make the concept of line-terminator meaningful only to the
remote (in which case, a line-oriented host such as umd-univac would
probably choose to treat both CR and CR LF as line terminators, perhaps
including LF if physical terminals use that convention for communication
with the host).

It hardly seems the time to introduce a new TELNET option, especially one
that would probably invalidate every existing implementation.  Perhaps the
best thing to do would be to make the de facto standard de jure, and
require user TELNET programs to send newline at the end of a line when they
are doing local echoing (i.e., invalidate the Berkeley VAX implementation
that sends CR NUL [these implemenations must be difficult or impossible to
use with TELNET-based servers like SMTP and FTP]).  This is assuming, of
course, that the user TELNET always has a concept of a "line" in half-duplex
mode, probably a safe assumption.

joe
-------
-----------[000035][next][prev][last][first]----------------------------------------------------
Date:      28 Oct 1983 15:38:25 PDT
From:      BRADEN@USC-ISI
To:        louie@UMD3, tcp-ip@SRI-NIC
Cc:        louie@UMD2, petry@UMD2, BRADEN@USC-ISI
Subject:   Re: TELNET End of line
In response to the message sent  28-Oct-83 17:11:14-UT from louie@umd3

Louis,

Your initial assumption was correct -- CR LF is he proper delimiter to
TELNET lines.  If a site sends anything else over the Internet, we should
call the Protocol Police.  Could you please name the guilty sites, so we 
can get it mended?

Thanks,

Bob Braden
-------
-----------[000036][next][prev][last][first]----------------------------------------------------
Date:      28 Oct 1983 15:50:25 PDT
From:      BRADEN@USC-ISI
To:        PALLAS@SU-SCORE, louie@UMD3
Cc:        tcp-ip@SRI-NIC, petry@UMD2, BRADEN@USC-ISI
Subject:   Re: TELNET End of line
In response to the message sent  Fri 28 Oct 83 14:08:16-PDT from PALLAS@SU-SCORE.ARPA

Interesting discussion, but we did this 10 years ago.  CR LF I*S* the
only legitimate end-of-line indication over the Internet.

Bob Braden
-------
-----------[000037][next][prev][last][first]----------------------------------------------------
Date:      Fri 28 Oct 83 17:01:52-PDT
From:      Joseph I. Pallas <PALLAS@SU-SCORE.ARPA>
To:        BRADEN@USC-ISI.ARPA
Cc:        louie@UMD3.ARPA, tcp-ip@SRI-NIC.ARPA, petry@UMD2.ARPA
Subject:   Re: TELNET End of line
    Interesting discussion, but we did this 10 years ago.  CR LF I*S* the
    only legitimate end-of-line indication over the Internet.

    Bob Braden

Great!  But what's a line?  In particular, what's a line in
"full-duplex" mode TELNET connections?  I think the key point here is
that louie's question is NOT answered in RFC854, despite the fact that
it's quite a bit less than 10 years old.  I don't mean to be snotty;
some of us weren't around for the discussion 10 years ago, but we
still have to implement the specification.  The specification is
incomplete, as I said before: it doesn't specify the mapping from user
keyboard to NVT newline.  It's clear that newline is NOT the
appropriate thing to send in response to a carriage return in a
full-duplex style connection...it would confuse most operating systems
and character-interactive programs.

The goal of TELNET in its most common usage (as stated in RFC854) is
to provide remote interactive access to another computer /as
transparently as possible/.  When talking to a UNIVAC running Exec-8
through a concentrator, one types a carriage return to end a line.
One should be able to do the same thing through a TELNET connection.
When talking to Emacs on machine X, one types a carriage return to
invoke a particular operation, which is generally distinct from the
operation invoked by a line feed, and is in particular NOT marking the
end of any line (there being no line to end).

joe
-------
-----------[000038][next][prev][last][first]----------------------------------------------------
Date:      28 Oct 1983 19:56:59-PDT
From:      CCVAX.trest@Nosc
To:        TCP-IP@SRI-NIC
Please Add Me to your List.  THANKS!!

	trest@nosc
	trest@nosc-tecr

	Mike Trest
	4065 Hancock Street
	San Diego, Ca 92110
	(619)225-1980
-----------[000039][next][prev][last][first]----------------------------------------------------
Date:      28 October 1983 17:03 EDT
From:      David C. Plummer <DCP @ MIT-MC>
To:        louie @ UMD3
Cc:        tcp-ip @ SRI-NIC, louie @ UMD2, petry @ UMD2
Subject:   TELNET End of line
In TELNET...

CR LF means "new line" CR NULL should never be sent from the user
process unless the transmit binary option is on and the user
typed Return control-@.  CR NULL is only to be sent (unless
transmit binary is on) from the server to the user to indicate
the carriage should go to the left but not down.

You didn't say which direction Berkeley Unix sent its data.

-----------[000040][next][prev][last][first]----------------------------------------------------
Date:      28-Oct-83 17:11:14-UT
From:      louie@umd3
To:        tcp-ip@sri-nic
Cc:        louie@umd-univac, petry@umd-univac
Subject:   TELNET End of line
I've run into a problem in implementing our TELNET server for a UNIVAC 1100
series host.  The interactive interface for the Sperry/UNIVAC operating system
is line oriented, so my TELNET need to accumulate an entire line before passing
it along to the OS.  The question is, what delimits the end of line?  The
initial assumption that I made was the CR LF pair.   I believe that RFC 764,
the TELNET protocol specification says that this pair is to be interpreted
as a "new line" character.  

Using CR LF seems to have worked for most hosts; however it seems that some
Berkeley VAX hosts' TELNET send a CR NUL instead of a CR LF.  Which is correct?
Or, are both correct?  What do other line oriented hosts look for?

             Louis A. Mamakos
             (louie@umd3)

             Computer Science Center - Systems
             University of Maryland

-------
-----------[000041][next][prev][last][first]----------------------------------------------------
Date:      28 October 1983 23:49 EDT
From:      David C. Plummer <DCP @ MIT-MC>
To:        BRADEN @ USC-ISI
Cc:        tcp-ip @ SRI-NIC, PALLAS @ SU-SCORE, petry @ UMD2, louie @ UMD3
Subject:   Re: TELNET End of line
    Date: 28 Oct 1983 15:50:25 PDT
    From: BRADEN@USC-ISI

    In response to the message sent  Fri 28 Oct 83 14:08:16-PDT from PALLAS@SU-SCORE.ARPA

    Interesting discussion, but we did this 10 years ago.  CR LF I*S* the
    only legitimate end-of-line indication over the Internet.

No, it is a silly discussion.  I was in 7th grade ten years ago,
but I've had to deal with this lossage [TELNET] all too much in
recent years.

Basically, Braden is correct, louie is wrong.

(1) TELNET is a byte stream protocol.  Please don't confuse it
with transmission protocols; if you do, I'll sick the modularity
police on you.  This means don't say TELNET and and TCP/IP or
TELNET and (big I) Internet in the same sentence and give the
impression they are related.  TELNET and (little i) internet is
OK.

(2) newline has NOTHING to do with the state of ECHO
negotiation.  NOTHING.  If you want to bypass newline, you have
to use negotiate transmit binary (or so I've been taught).

(3) NVTs are inherently ascii terminals.  I'm not even sure OLD
TELNET could use 8 bits since negotiation characters started at
0200.  NEW TELNET can support 8 bits (e.g., meta key), with
proper negotiation.  With hairier negotiation it might be
possible to support the 12bit space cadet (Lisp Machine).

(4) /As transparently as possible/ is a joke.  With all the
possible negiations going on, you aren't really talking to the
computer at the other end; you are talking through two
interpreters.

(5) If the Berzerkely user telnet program was sending CR NUL to
your Univac, then it is wrong.

(6) I'd love to put the modularity police on this one: As I
understand it, running EMACS when TELNETing to a machine requires
transmit binary (otherwise newline will just screw you
completely).  As I see it, either you negotiate transmit binary
from the start (bypassing newline for the entire session) or
putting the TTY in binary mode tries to negotiate transmit
binary.

In conclusion, TELNET should be shot.  Take a look at the SUPDUP
RFC.  In this day of interactive systems (just about everything
except IBMs and Univacs) and with display terminals the norm, it
is rediculous to hang on to a line-at-a-time, printing terminal
oriented protocol (i.e., TELNET).  I'm not SUPDUP is /the/
answer; it has its own problems.  However, it doesn't make a lot
of restrictive assumptions that TELNET does.

-----------[000042][next][prev][last][first]----------------------------------------------------
Date:      28-Oct-83 20:05:14-UT
From:      louie@umd3
To:        tcp-ip@sri-nic
Cc:        louie@umd-univac, petry@umd-univac
Subject:   TELNET End of line
The Sperry/UNIVAC 1100 systems doesn't have any terminals directly connected to
it of the dumb asynchronous type.  At least it doesn't think that it does.  Most
of the software is set up to work with block mode terminals, so all it sees is
a message.  Our local (dumb async ascii) terminals are connected though a
front end concentrator, which emulates a cluster of teletype like terminals.
Local user's press the carriage-return key to end a line, but the CR never
makes it past the front-end.

I suspect that you may be correct in that I'll have to accept both CR LF and
CR NULL pairs.  It seems that since the Telnet EL (Erase Line) control function
used to be specified to delete characters back until the last CR LF sequence
(in RFC-764) that CR LF would be used.  But, it seems that the definition in
RFC-854 has been changed to erase the current "line", which is much less
specific.  Oh well...

              Louis A. Mamakos
              (louie@umd3)

              Computer Science Center - Systems
              University of Maryland

-------
-----------[000043][next][prev][last][first]----------------------------------------------------
Date:      Sat, 29 Oct 83 10:42 PDT
From:      Taft.PA@PARC-MAXC.ARPA
To:        Joseph I. Pallas <PALLAS@SU-SCORE.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TELNET End of line
I second Plummer's comments, and have this to add:

You said: "The specification is incomplete ... it doesn't specify the
mapping from user keyboard to NVT newline."

Rightly so!  The Telnet specification is a computer-to-computer
protocol, not a user interface standard.  The means by which a user
causes an NVT end-of-line to be generated is a user interface issue.  It
is the responsibility of the User Telnet program to map from the user's
end-of-line indication (Return, NewLine, CR, LF, or whatever) to the NVT
end-of-line.  And it is the responsibility of the Server Telnet to map
from the NVT end-of-line to whatever the server uses to represent
end-of-line.

I'm no great fan of the existing Telnet specification; and in particular
I think it's silly to use two characters to represent end-of-line.  But
a standard is a standard; and until it gets changed let's conform to it.
And for heavens sake don't mix up user interface issues with a
computer-to-computer protocol.

	Ed Taft

-----------[000044][next][prev][last][first]----------------------------------------------------
Date:      Sat 29 Oct 83 13:10:01-PDT
From:      Mark Crispin <MRC@SU-SCORE.ARPA>
To:        Taft.PA@PARC-MAXC.ARPA
Cc:        stanonik@NPRDC.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Re: mail from arpanet tops 20 sites
     That Delivery-Notice message means that a message delivery was
attempted using normal, maximum size segments with Push only at the
required points.  One of the following three things happened:
 . the connection got reset
 . more than 2 minutes were expended trying to get 1000 bytes through
 . more than 5 minutes were expended waiting for the SMTP reply after
   having sent <CR><LF>.<CR><LF>

     The TOPS-20 SMTP sender tried sending the message using small
(40 byte) segments and setting Push on each one, on the assumption that
maybe the receiver's TCP is choking on normal maximum size unpushed
segments.  Since you saw the message, that means it got through this
way.

     If it fails that way, the next time it will retry (30 minutes
later) will using normal size segments.  So you only see the obnoxious
Delivery-Notice if small segments are done and that only happens
immediately after a failure with normal segments.

     It is quite possible that the Delivery-Notice is a false alarm.
But if there is a consistant pattern then it means there really is a
problem.

     Soon I'll remove the Delivery-Notice.  I'll also remove the short
segment sending code.  So mail that gets delivery notices now will just
not get delivered in the future...
-------
-----------[000045][next][prev][last][first]----------------------------------------------------
Date:      29 Oct 1983 1725-PDT
From:      Eric P. Scott <EPS at JPL-VAX>
To:        TCP-IP at SRI-NIC
Subject:   Delivery notices
We occasionally receive mail from SRI-NIC with the "50 byte
individually pushed" notice.  Please note that our connection to
the CIT IMP is via a 19.2Kb microwave channel and it doesn't take
much activity to make the throughput for a given SMTP connection
appear unreasonable.  The solution is "of course" to raise the
link bandwidth; we're working on that.

					-=EPS=-
------
-----------[000046][next][prev][last][first]----------------------------------------------------
Date:      29 Oct 1983 1750-PDT
From:      Eric P. Scott <EPS at JPL-VAX>
To:        TCP-IP at SRI-NIC
Subject:   The Robustness Principle as applied to TELNET
"Be conservative in what you do, be liberal in what you accept
from others." (RFC-793)

You should send CR LF at end of line, but while you're waiting
for the Protocol Police to respond, try taking CR as end-of-line,
and discard the next character if it's NUL or LF.  Don't flame at
me about how nice it would be if this were a perfect world; there
are still misimplementations that need to be defended against.

As a side note to David Plummer, I would like to point out that
Berkeley Unix has come up with its own alternative to TELNET
called RLOGIN using 513 as its well-known-port.  (*I* know what
RFC 870 says; don't flame at me about this either.)

					-=EPS=-
------
-----------[000047][next][prev][last][first]----------------------------------------------------
Date:      30 Oct 83 0239 EST
From:      Rudy.Nedved@CMU-CS-A
To:        Mark Crispin <MRC@SU-SCORE>
Cc:        Taft.PA@PARC-MAXC, stanonik@NPRDC, TCP-IP@SRI-NIC
Subject:   Re: mail from arpanet tops 20 sites
Mark,

Thanks for documenting how that notice is generated. I don't have to
figure out why every now and then we got those notices. You see your
5 minute check for a "250 mail delivered" response tended to nail our
Unix machines that 1) have load averages above 9 (making a load average
of 25 on a 20 look fast) and/or that 2) receive large messages that need to be
copied into a spool file.

-Rudy
-----------[000048][next][prev][last][first]----------------------------------------------------
Date:      Sun 30 Oct 83 16:12:28-PST
From:      Mark Crispin <MRC@SU-SCORE.ARPA>
To:        TCP-IP@SRI-NIC.ARPA, Info-VAX@SRI-CSL.ARPA
Subject:   protocol police
     VAX/VMS has been implicated in another violation of incorrect
implementation of Internet protocols.  In particular, the VAX/VMS
SMTP server evidentally supplied with the Compion ACCESS-T package
rejects a domain address in the RCPT command with
	550 Requested action not taken: mailbox unavailable.
(e.g. "RCPT TO:<POSTMASTER@NBS-SDC.ARPA>").  I have observed that
COMPION-VMS does accept domain addresses so evidentally this bug has
been fixed, but it does not seem to have been distributed widely.

     During DECUS, a representative of the NIC said, rather smugly,
that the NIC does not have this problem because they don't use domain
addresses.  Somehow I don't think this is a very helpful attitude for
the NIC to have.  Many software designers erroneously feel that
achieving communication with the NIC is sufficient validation of the
correctness of one's Internet implementation and a guarantee of being
able to successfully communicate with other TOPS-20 systems.  Perhaps
the NIC shouldn't have the attitude of coddling incorrect Internet
implementations, or if they must they should not allow an incorrect
implementation to show up without continually reminding that site that
their implementation is incorrect.
-------
-----------[000049][next][prev][last][first]----------------------------------------------------
Date:      30 Oct 1983 1550-PDT
From:      Eric P. Scott <EPS at JPL-VAX>
To:        TCP-IP at SRI-NIC
Cc:        Postmaster@Compion-VMS,Info-VAX@SRI-CSL
Subject:   Overtime for the Protocol Police
    Mail-From: ARPAnet host SRI-NIC rcvd at Sun Oct 30 14:28-PDT
    Received: from compion-vms by SRI-NIC with TCP; Sun 30 Oct 83 14:27:32-PST
    Received: from JPL-VAX.ARPA by SRI-NIC with TCP; Sat 29 Oct 83 18:08:54-PDT
    Date: 29 Oct 1983 1750-PDT
    From: Eric P. Scott <EPS at JPL-VAX>
    Subject: The Robustness Principle as applied to TELNET
    To: TCP-IP at SRI-NIC
    Reply-To: EPS at JPL-VAX
    
[ text deleted --EPS ]
     
    This mail could not be delivered!!
    Probable cause was insufficient recipient privileges.

I received one of these for each message I sent to TCP-IP@SRI-NIC.
*I* didn't send this message--the only clue is the "compion-vms"
line.  Insufficient recipient privileges?  Who invented THAT?
Eeeek.  Probable cause for the Protocol Police to work overtime.

					-=EPS=-
------
-----------[000050][next][prev][last][first]----------------------------------------------------
Date:      30 Oct 1983 1626-PDT
From:      Eric P. Scott <EPS at JPL-VAX>
To:        TCP-IP at SRI-NIC
Cc:        Info-VAX@SRI-CSL
Subject:   We have our suspect...
    Mail-From: ARPAnet host COMPION-VMS rcvd at Sun Oct 30 15:07-PDT
    Date: 30 Oct 1983 1550-PDT
    From: Eric P. Scott <EPS at JPL-VAX>
    Subject: Overtime for the Protocol Police
    To: Postmaster at Compion-VMS
...
    Reply-To: EPS at JPL-VAX

...

    This mail could not be delivered!!
    Probable cause was insufficient recipient privileges.

Haul them in.
					-=EPS=-
------
-----------[000051][next][prev][last][first]----------------------------------------------------
Date:      Sunday, 30 October 1983 20:25 est
From:      JSLove@MIT-MULTICS.ARPA (J. Spencer Love)
To:        EPS@JPL-VAX.ARPA
Cc:        TCP-IP@SRI-NIC.ARPA, Postmaster@COMPION-VMS.ARPA, Info-VAX@SRI-CSL.ARPA
Subject:   Re: Overtime for the Protocol Police
I would like to add a couple of additional specifications to the
indictment.  I don't think much of adding to the message at the end, and
the message added is singularly uninformative.  The message returned by
the mailer should look like:

   To:  Sender@Originating-Site
   From:  Network Mailer@Losing-Site
   Subject:  Unable to deliver mail

   It was not possible to deliver mail to Loser@Losing-Site.
   The frobboznik was full.

   *** Original Message Text Follows ***

   To:  tcp-ip@SRI-NIC
   From:  Hacker@MIT-SOMETHING
   Subject:  Disarming the Protocol Police

   They shouldn't be permitted to carry guns.
   Precision bombing using Internet Protocol
   Transition Workbooks should suffice.

It should be possible to determine from the message text what the user
is that couldn't be reached, and what site couldn't reach it, and why.
It should also be possible to tell this from the first screenful.  For a
long message, such as this one, I shouldn't have to wade through the
text 10 times to find out what went wrong for each of 10 users (once is
bad enough)...
-----------[000052][next][prev][last][first]----------------------------------------------------
Date:      Mon 31 Oct 83 00:52:08-PST
From:      David Roode <ROODE@SRI-NIC>
To:        MRC@SU-SCORE, TCP-IP@SRI-NIC, Info-VAX@SRI-CSL
Subject:   Re: protocol police
Since I was the representative of the NIC who made the comment closest
to what seems to be alluded to by MRC@SU-SCORE in his recent message,
I have to say I was not attempting to sound smug.  We were attempting
to follow the doctrine of tolerating the most on the part of other
implementations and requiring the least of them.  In this respect
we can communicate with people who have not implemented domains
in their addresses for SMTP.  If anything, we recognize that
we are one of the more flexible sites to talk to due to our information
dissemination function and so do not hold ourselves up to be tested
in regards to a site's compliance with protocols.  I.e. it is NOT
the case that "If a site can talk to the NIC, then they can talk to
anybody."  I think the absurdity of this statement is the best evidence
of its inapplicability.
-------
-----------[000053][next][prev][last][first]----------------------------------------------------
Date:      30-Oct-83 17:05:16-UT
From:      louie@umd3
To:        tcp-ip@sri-nic
Subject:   Re: TELNET End of line
When I mentions that the problem did not occur with the BBN TCP/IP in 4.1BSD,
i referred to the entire distribution, specifically the user telnet program, not
TCP itself.  Bring on the protocol police!

                  Louis Mamakos
                  (louie@umd3)
                  University of Maryland

-------
-----------[000054][next][prev][last][first]----------------------------------------------------
Date:      31 Oct 1983 0943-PST
From:      Eric P. Scott <EPS at JPL-VAX>
To:        TCP-IP at SRI-NIC
Cc:        don.provan@CMU-CS-A,Info-VAX@SRI-CSL
Subject:   Re: more Overtime for the Protocol Police
Ah.  Our mailer predates RFC 821/RFC 822; " at " was permissible
under RFC 733.  You are the first to catch me--everyone else has
mail software that parses the old-format headers!  (Come on guys,
how many of you still have mail support in your FTP servers too?)

Does anyone have (or is interested in) a (shudder) implementation
validation suite?  The Smallberg surveys were far from
comprehensive.
					-=EPS=-
------
-----------[000055][next][prev][last][first]----------------------------------------------------
Date:      31 Oct 83 09:58:01 EST
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
To:        EPS@JPL-VAX.ARPA
Cc:        TCP-IP@SRI-NIC.ARPA
Subject:   Re: Delivery notices
We use a 9600 baud connection, and I don't think we ever get that
message.  I don't think 19.2Kb alone is enough to explain the problem.
-------
-----------[000056][next][prev][last][first]----------------------------------------------------
Date:      31 Oct 83 1133 EST (Monday)
From:      don.provan@CMU-CS-A
To:        EPS@JPL-VAX
Cc:        TCP-IP@SRI-NIC, Info-VAX@SRI-CSL
Subject:   more Overtime for the Protocol Police
while we're talking about protocol police, i'm getting sick and tired of
trying to explain to my users that the reason they can't reply to
"EPS at JPL-VAX" is that it's an ill-formed mail box.  any chance of
this being  fixed before the arpanet losses what little credibility
it still has here?
-----------[000057][next][prev][last][first]----------------------------------------------------
Date:      31 October 1983 13:28 EST
From:      dcab645 @ DDN1
To:        tcp-ip @ sri-nic
Cc:        dcab645 @ DDN1
Subject:   Change of Addee


Date: October 31, 1983
Text: Please change the TCP-IP distribution to DCAB645 at DDN1
vice DCACODE252 at USC-ISI.
Thanks, Jack Snively

-----------[000058][next][prev][last][first]----------------------------------------------------
Date:      31 Oct 1983 1439-EST (Monday)
From:      Christopher A Kent <cak@PURDUE.ARPA>
To:        EPS@JPL-VAX.ARPA
Cc:        Info-VAX@SRI-CSL.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Re: We have our suspect...

While we're at it, that . in the From: line shouldn't be there, either.

Doesn't this all really belong in Header-People?

Cheers,
chris
Return-Path: <@SRI-NIC:EPS@JPL-VAX>
Received: from purdue.ARPA (purdue-arthur.ARPA) by merlin.ARPA; Mon, 31 Oct 83 14:16:47 EST
Message-Id: <8310310409.AA21716@purdue.ARPA>
Received: from SRI-NIC by purdue.ARPA; Sun, 30 Oct 83 23:09:14 EST
Received: from JPL-VAX.ARPA by SRI-NIC with TCP; Sun 30 Oct 83 15:26:51-PST
Date: 30 Oct 1983 1626-PDT
From: Eric P. Scott <EPS@JPL-VAX>
Subject: We have our suspect...
To: TCP-IP@SRI-NIC
Cc: Info-VAX@SRI-CSL
Reply-To: EPS at JPL-VAX

    Mail-From: ARPAnet host COMPION-VMS rcvd at Sun Oct 30 15:07-PDT
    Date: 30 Oct 1983 1550-PDT
    From: Eric P. Scott <EPS at JPL-VAX>
    Subject: Overtime for the Protocol Police
    To: Postmaster at Compion-VMS
...
    Reply-To: EPS at JPL-VAX

...

    This mail could not be delivered!!
    Probable cause was insufficient recipient privileges.

Haul them in.
					-=EPS=-
------
----------
-----------[000059][next][prev][last][first]----------------------------------------------------
Date:      31 October 1983 17:26 est
From:      JFisher.Help at RESTON
To:        TCP-IP at SRI-NIC
Subject:   TCP/IP & PR1ME
Does anyone out there know if anybody is implementing (or thinking about
implementing) tcp/ip (and presumably telnet/ftp/mail) for PR1ME machines ?

END OF DOCUMENT