The 'Security Digest' Archives (TM)


ARCHIVE: TCP-IP Distribution List - Archives (1985)
DOCUMENT: TCP-IP Distribution List for February 1985 (28 messages, 21624 bytes)
SOURCE: http://securitydigest.org/exec/display?f=tcp-ip/archive/1985/02.txt&t=text/plain
NOTICE: securitydigest.org recognises the rights of all third-party works.

START OF DOCUMENT

-----------[000000][next][prev][last][first]----------------------------------------------------
Date:      Sun 3 Feb 85 07:50:26-PST
From:      Mathis@SRI-KL.ARPA
To:        Murray.pa@XEROX.ARPA
Cc:        "(Van Jacobson) van"@LBL-CSAM.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re: milnet/arpanet connectivity
   I would guess that you are seeing the problems with a host pretending
to be a gateway.  Consider just the gateway: it gets routing (actually
just accessibility) information via EGP from the core; that may say "use X".
When the machine is sourcing packets as a host with a MILNET source address,
the core gateways should try to load-share and send a redirect to "Y";
even without load-sharing, the gateway will send back a redirect reflecting
the current GGP state of the world.  If you insert these redirects into the
routing table, the routing will bounce back and forth.  Unless the packet has
a MILNET source, the gateway won't send back redirects.
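The flapping described above can be shown with a toy model (names and the alternating redirect pattern are invented for illustration, not taken from the message): a host that blindly installs every redirect from load-sharing core gateways sees its next hop bounce.

```python
# Toy illustration of the route-flapping described above: a host that
# installs every redirect it receives sees its next-hop bounce between
# the core gateways' load-sharing answers. Gateway names are invented.
route = {}

def handle_redirect(dest, new_gateway):
    route[dest] = new_gateway     # naive: always believe the redirect

history = []
for redirect in ["X", "Y", "X", "Y"]:   # alternating core answers
    handle_redirect("milnet-host", redirect)
    history.append(route["milnet-host"])
```

Each redirect overwrites the previous one, so `history` records the back-and-forth bouncing the message describes.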
-------
-----------[000001][next][prev][last][first]----------------------------------------------------
Date:      Mon 4 Feb 85 01:32:24-PST
From:      Bob Larson <BLARSON%ECLD@ECLA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Is tcp available for Primes?'

Is Tcp (with telnet, ftp, etc.) available for Prime 50 series computers?
(Running Primos 19.3.6)  How much does it cost (to a non-profit university)
and what hardware does it require?  

Thanks,
Bob Larson <Blarson@Usc-Ecl.Arpa>
-------
-----------[000002][next][prev][last][first]----------------------------------------------------
Date:      4 Feb 85 13:11:11 EST
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
To:        BLARSON%ECLD@USC-ECL.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Is tcp available for Primes?'
Prime is working on a DDN TCP implementation, i.e., TCP over X.25.  I'm
not sure when it will be ready, but it shouldn't be too long now.
Contact Mary Cole, 617-879-2960, x3869
-------
-----------[000003][next][prev][last][first]----------------------------------------------------
Date:      6 Feb 1985 11:43-EST
From:      CERF@USC-ISI.ARPA
To:        ron@BRL-TGR.ARPA
Cc:        Murray.pa@XEROX.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Re:  Cabernet?
Hmmmm.

I remember a Scottish game called tossing the Caber which is a 14
foot tree.  So obviously a Cabernet must be a tree-topology
network...

Vint
-----------[000004][next][prev][last][first]----------------------------------------------------
Date:      Sun, 10 Feb 85 20:08:23 CST
From:      Mike Caplinger <mike@rice.ARPA>
To:        tcp-ip@nic.ARPA
Subject:   Post Office Protocol, version 2 (RFC 937)
Two questions regarding RFC 937:

1)  I would like to see a settable option to control whether message
transmissions are done in netascii or not.  It seems silly, in an
environment with homogeneous character sets (e.g., a network of Unix
VAXen and Suns) to have to netascii and unnetascii each message.  If
the protocol is used directly by a user interface, it would slow a
message retrieve down substantially, for no good reason.  (For those
who will claim the protocol is to have few options; "few" doesn't mean
"so few as to be unreasonably slow", does it?)  Note that I am
certainly NOT suggesting netascii be optional.
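The conversion being objected to is of this general shape (an illustrative sketch only, not RFC 937's wording): local Unix line endings are rewritten to NVT CRLF on the wire and back on receipt, even when both ends use the same conventions.

```python
# Illustrative netascii conversion of the kind the message complains
# about: Unix newlines become NVT CRLF on the wire, bare CR becomes
# CR NUL, and the receiver undoes both. Pure overhead when both hosts
# already share the same character conventions.
def to_netascii(text):
    return text.replace("\r", "\r\0").replace("\n", "\r\n")

def from_netascii(data):
    return data.replace("\r\n", "\n").replace("\r\0", "\r")
```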

2)  I am frightened by another protocol that requires that a password
be transmitted, in plaintext.  One could expect that most
implementations would simply use the user's regular login password.
This requires either a) the user type in his password during every
mailer session, or b) the user have a file on his workstation which
contains the password in plaintext form.  Neither of these options
seems attractive.  Maybe we could use the authentication server (RFC
931) here?

	- Mike
-----------[000005][next][prev][last][first]----------------------------------------------------
Date:      11-Feb-85 06:16:49-UT
From:      mills@dcn6
To:        tcp-spec@nic
Cc:        tcp-ip@nic
Subject:   Improvements in TCP performance over Ethernets and thin wires
Folks,

As the result of ongoing efforts to review and improve the TCP specification,
some of us have been exploring refinements to the send-policy and ack-policy
mechanisms suggested in the current specification MIL-STD-1778. The
refinements are designed to reduce congestion by aggregating data into larger
segments, piggybacking ACKs and reducing the impact of silly-window syndrome
and other oddities on the flow dynamics.

In order to test the proposed mechanisms, we implemented some of them on the
ubiquitous LSI-11 "fuzzballs" and incited other famous bullies to fight back.
This memo chronicles the wars and hopefully buries some of the dead. While a
bit on the long side, this memo raises some very serious issues of performance
on the part of widely distributed systems and proposes some simple solutions
which have been tested and found to work very well.

The Problem

The most worrisome problem faced by some of us in the backwaters of the
Internet is congestion in the gateways on the long, thin pipes splicing our
lusty broadcast cables to each other and the ARPANET. Host implementations
such as 4.2bsd designed primarily for use on these cables blow the poor
gateways out of the water, especially with character-at-a-time TELNET. A
revealing experiment, suggested by John Nagle, is to crank up your favorite
4.2bsd workstation via Ethernet, gateway and 9600-bps line to an IMP and
mumble TELNET with your opposite 4.2bsd or TOPS-20. A few seconds of
finger-twinkling is usually sufficient for the gateway to suffocate for want
of buffers.

The reason for this poor behavior is simply that no mechanism is specified to
discourage a host from generating large numbers of small packets when given
the irresistible enticement of a large window. A suggestion made some time ago
by John Nagle and subsequently implemented and refined by me operates as
follows: The sending TCP accepts client data blocks and combines them in a
queue before transmission. If there are no data in transit (retransmission
queue is empty), the entire contents of the queue are segmentized and
transmitted. If data are in transit, transmission is suppressed until the size
of the queue exceeds a specified value suggested as the MSS of the connection.
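A minimal sketch of this send-policy, under the rule just stated (class and method names are my own, not from any fuzzball source): flush everything when the retransmission queue is empty, otherwise hold sub-MSS data until a full segment accumulates or an ACK drains the pipe.

```python
# Hypothetical sketch of the send-policy described above: if no data
# are in transit, segmentize and send the whole queue; otherwise hold
# transmission until a full MSS-sized segment has accumulated.
class SendPolicy:
    def __init__(self, mss):
        self.mss = mss
        self.queue = b""            # client data awaiting segmentation
        self.in_transit = 0         # octets sent but not yet ACKed

    def write(self, data):
        """Accept client data; return the segments to transmit now."""
        self.queue += data
        segments = []
        flush = (self.in_transit == 0)  # empty retransmission queue
        while self.queue and (flush or len(self.queue) >= self.mss):
            seg, self.queue = self.queue[:self.mss], self.queue[self.mss:]
            self.in_transit += len(seg)
            segments.append(seg)
        return segments

    def acked(self, n):
        """Process an ACK covering n octets; may release queued data."""
        self.in_transit = max(0, self.in_transit - n)
        return self.write(b"")
```

With this rule a single keystroke goes out at once, a second keystroke typed while the first is unacknowledged waits for the ACK, and a bulk sender always fills maximum-size segments.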

The result of this policy is that for high-volume traffic where the sender
client offers data at a rate near the maximum for the connection, there is
little or no sender delay and maximum-size segments are used. On the other
hand, the result with low-delay interactive traffic is that the sender will
delay a period up to one round-trip time, during which data are aggregated in
the queue. Thus, no more than a single segment of less than MSS octets will be
permitted per round-trip time and the protocol degenerates into stop-and-wait
mode. There are a few further details which need not concern us here.

Experimentation reveals that the success of the send-policy mechanism depends
strongly on the selection of ack-policy mechanism. In order to encourage
efficient use of the connection, it is often wise to wait a little while
before sending an ACK, so that the eventual ACK can combine the updates for
both the left-window-edge and receive-window state variables, rather than
sending two ACKs separately. However, the "little while" adds to the apparent
round-trip delay and reduces the buffering efficiency. A worthwhile
improvement in this policy used in the fuzzballs is to force an ACK if the
number of octets delivered to the client since the last ACK exceeds a
specified value suggested as the MSS of the connection.
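The fuzzball refinement just described can be sketched as follows (names and the 200-ms delay value are illustrative assumptions, not taken from the fuzzball code):

```python
# Hypothetical sketch of the ack-policy described above: delay the ACK
# a little while so it can carry both left-window-edge and window
# updates, but force one out once more than an MSS's worth of octets
# has been delivered to the client since the last ACK.
class AckPolicy:
    def __init__(self, mss, delay=0.2):
        self.mss = mss
        self.delay = delay          # the "little while" (assumed value)
        self.delivered = 0          # octets delivered since last ACK

    def on_deliver(self, nbytes):
        """Called as received octets are handed to the client.
        Returns True when an ACK must be sent immediately."""
        self.delivered += nbytes
        if self.delivered > self.mss:
            self.delivered = 0
            return True             # force the ACK: window moved a lot
        return False                # otherwise wait up to self.delay
```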

Thus, the send-policy and ack-policy mechanisms interact in subtle ways hard
to predict in particular configurations. The experiments summarized below
should give some insight into these issues and help in the assessment of
whether the proposed policies are worthwhile and should be considered for
inclusion in the amended TCP specification. Note before reading further,
however, that the use of these policies here in the backwaters has resulted in
a profound improvement in general performance to the point that minor
reductions in performance under some conditions seems wholly justifiable.

The Experiments

The performance data are summarized below in the form of segment traces, where
each line represents a single segment observed at one end of the connection,
"snd" indicates a transmitted segment and "rcv" a received one. Following are
further details:

		 Time-
Time (UT)	 stamp	IP ID	LWE	Length	Window
------------------------------------------------------
03:41:56 TCP rcv 12538	36696	0	-1	449

The timestamp is in milliseconds modulo 2^16, the IP ID is from the IP header,
LWE is the position of the first octet of the segment relative to the current
receive left-window edge and Window is the number of octets remaining in the
reassembly buffer. A minus sign preceding the Length indicates the Push bit
was set.

		 Time-
Time (UT)	 stamp	X  Y	LWE	Length	Window
------------------------------------------------------
03:41:56 TCP snd 13188	1  0	0	0	907

The timestamp is as previously, the X tallies the current number of segments
waiting in the host for transmission, Y tallies the number of data segments 
transmitted pending acknowledgement, LWE is the position of the first octet of
the segment relative to the current transmit left-window edge and Window is
the apparent number of octets remaining in the receiver reassembly buffer. A
minus sign preceding the Length indicates the Push bit is set.

In both the rcv and snd cases a zero length indicates an ACK-only segment.
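As an illustration only (field names are my own invention), a rcv line of the format just described could be split like so:

```python
# Hypothetical parser for the rcv-style trace lines shown above.
# Columns: wall time, "TCP", direction, timestamp, IP ID, LWE,
# Length (a minus sign means the Push bit was set), Window.
def parse_rcv(line):
    fields = line.split()
    length = int(fields[-2])
    return {
        "time": fields[0],
        "dir": fields[2],
        "timestamp": int(fields[3]),  # milliseconds modulo 2^16
        "ip_id": int(fields[4]),      # from the IP header
        "lwe": int(fields[-3]),       # offset from left-window edge
        "push": length < 0,           # minus sign encodes the Push bit
        "length": abs(length),
        "window": int(fields[-1]),    # octets left in reassembly buffer
    }
```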

The configuration of the first experiment included a fuzzball connected by a
4800-bps link to a gateway, then a 56-Kbps link to an ARPANET IMP and TOPS-20
host. The intent was to test the ACK piggybacking efficiency using
character-at-a-time TELNET. One would expect good efficiencies in this case,
since the server can piggyback ACKs on the echoes of operator keystrokes and
the stop-and-wait character of the send-policy mechanism allows these ACKs to
piggyback on the next chunk transmitted by the user.

The trace shown below indicates this is exactly what happened. The scenario
starts out with the operator stroking at moderately twinkling rates, then
increases to the auto-repeat rate. Note the increase in the segment size due
to the sender delay for the round-trip time. The operator "feel" of the
dynamics was excellent compared with the arbitrarily chosen 500-millisecond
packetization delay formerly used by the client TELNET.

TELNET from DCN6 fuzzball to USC-ISID TOPS-20
03:41:56 TCP rcv 12538	36696	0	-1	449
03:41:56 TCP snd 13188	1  0	0	0	907
03:42:01 TCP snd 17454	1  1	0	-1	906
03:42:01 TCP rcv 17987	36712	0	-1	449
03:42:01 TCP snd 17987	1  1	0	-1	905
03:42:02 TCP rcv 18604	36713	0	-1	449
03:42:02 TCP snd 18604	1  1	0	-19	886
03:42:03 TCP rcv 19553	36714	0	-19	431
03:42:03 TCP snd 19553	1  1	0	-28	858
..
03:42:14 TCP rcv 30518	36764	0	-31	419
03:42:14 TCP snd 30518	1  1	0	-17	542
03:42:15 TCP rcv 31501	36768	0	-17	433
03:42:16 TCP snd 32368	1  0	0	0	542
03:42:16 TCP rcv 32701	36768	-17	-17	450
03:42:16 TCP snd 32718	1  0	0	0	542
03:42:28 TCP snd 44466	1  1	0	-1	541

The configuration of the next experiment included a fuzzball connected by a
4800-bps link to another fuzzball. The intent was to test the send-policy and
ack-policy mechanisms using bulk-data transfers typical of FTP. The resulting
trace shows a stable state with three segments in flight and all the link
bandwidth in use. The same experiment was done between two fuzzballs on an
Ethernet with similar results, although the flow was not so smooth. More about
this in the conclusions.

FTP from DCN6 fuzzball to DCN1 fuzzball
06:05:48 TCP rcv 59253	59102	0	0	1986
06:05:49 TCP snd 60169	2  3	1072	536	378
06:05:49 TCP rcv 60386	60232	0	0	1986
06:05:50 TCP snd 61269	2  3	1072	536	378
06:05:50 TCP rcv 61436	61284	0	0	1986
06:05:51 TCP snd 62352	2  3	1072	536	378
06:05:51 TCP rcv 62568	62409	0	0	1986
06:05:52 TCP snd 63435	2  3	1072	536	378
06:05:52 TCP rcv 63618	63456	0	0	1986

Now we try a hard one. The configuration of the next experiment includes a
Sun workstation (4.2bsd) on an Ethernet connected by gateway and 4800-bps link
to a fuzzball and then back over the same link to the gateway and then via
56-Kbps link to an ARPANET IMP and TOPS-20(!). To make it really gruesome, two
tandem TELNET connections were used, one from the Sun to the fuzzball, the
other from the fuzzball to the TOPS-20, with character-at-a-time used on both
connections. The trace was recorded at the fuzzball.

The following trace was produced by an innocent user composing mail at the
TOPS-20. When we first look in on the victim, segments are flying with a
single octet of data, at least on the Sun-fuzzball link (the fuzzball-TOPS-20
link was doing much better than that, as will be seen later). Note that
piggybacked ACKs are working well. Sharp eyes may note the last line, which
apparently indicates a violation of the send-policy, since more than a single
segment of size less than MSS is apparently outstanding. In the fuzzball
implementation if an ACK is forced out and data are waiting on the queue, the
data will be gratuitously included even if another data segment is in transit.
This observation illustrates the complexities possible in tuning the protocol
for maximum performance.

TELNET from DCN9 Sun Unix via DCN6 fuzzball to USC-ISID TOPS-20
16:09:25 TCP rcv 35393	5005	0	-1	961
16:09:26 TCP rcv 35593	5006	0	-1	961
16:09:26 TCP rcv 35826	5007	0	-1	961
16:09:26 TCP rcv 36026	5008	0	-1	961
16:09:26 TCP snd 36243	1  1	0	-1	2047
16:09:26 TCP rcv 36426	5009	0	-1	961
16:09:27 TCP rcv 36660	5010	0	-1	961
16:09:27 TCP rcv 36893	5011	0	-1	961
16:09:27 TCP snd 37476	1  2	1	-3	2044

At about this time the gateway began dropping packets, with the loss quickly
reaching alarm levels. The particular gateway has about 20K bytes available
for packet buffers. Now, apparently an ACK from the Sun was lost and the
fuzzball retransmitted a segment. No other data were on the queue at that
time. This continued from time to time throughout the remainder of the
scenario.

16:09:27 TCP rcv 37526	5012	0	0	962
16:09:28 TCP rcv 37810	5013	0	-1	961
16:09:28 TCP rcv 37993	5014	0	-1	961
16:09:28 TCP rcv 38193	5015	0	-1	961
16:09:28 TCP rcv 38427	5016	0	-1	961
16:09:29 TCP snd 38610	1  1	3	0	2045
16:09:29 TCP rcv 38677	5017	0	-1	961

Now, a segment from the Sun was lost, creating a hole that will take a long
time to fill. Subsequent segments from the Sun, all including only a single
octet, eventually result in complete congestion collapse. The poor user sees
none of this - his terminal enters catatonia mode. Note that the fuzzball
sends ACKs without delay for out-of-order received segments, which may be a
mistake.

16:09:30 TCP rcv 39593	5019	1	-1	962
16:09:30 TCP snd 39593	1  2	3	-4	2041
16:09:30 TCP rcv 39793	5020	2	-1	962
16:09:30 TCP snd 39793	2  2	7	0	2041
16:09:30 TCP rcv 40044	5021	3	-1	962
16:09:30 TCP snd 40044	2  1	4	0	2044
16:09:30 TCP rcv 40277	5022	4	-1	962
16:09:30 TCP snd 40294	2  1	4	0	2044
16:09:30 TCP rcv 40527	5023	5	-1	962
16:09:31 TCP snd 40527	2  1	4	0	2044
16:09:31 TCP rcv 40777	5024	6	-1	962
16:09:31 TCP snd 40777	2  1	4	0	2044
16:09:32 TCP rcv 41694	5026	8	-1	962
..

The user quickly becomes concerned about the lack of echoes and stops
twinkling. His terminal remains in catatonia mode while retransmissions from
both the Sun and fuzzball bombard the gateway. Source-quench messages are
flying, which the Sun ignores and the fuzzball uses to throttle back (details
beyond the scope of present discussion). Eventually, a heroic string of ACKs
are received from the Sun, strangely at about 200-millisecond intervals,
followed by a segment containing the long-delayed octet lost thirty
seconds ago. By this time the user terminal has abandoned catatonia mode and
has come back to life with no data lost. If nothing else, this scenario is
certainly a convincing demonstration of TCP robustness!

16:10:04 TCP rcv 8699	5138	30	-1	962
16:10:04 TCP snd 8715	2  6	84	0	1964
16:10:04 TCP rcv 8932	5139	31	-1	962
16:10:04 TCP snd 8949	2  6	84	0	1964
16:10:05 TCP rcv 9899	5141	34	0	962
16:10:06 TCP rcv 10099	5142	34	0	962
16:10:06 TCP rcv 10299	5143	34	0	962
16:10:06 TCP rcv 10499	5145	34	0	962
16:10:06 TCP rcv 10682	5146	34	0	962
16:10:06 TCP rcv 10882	5147	34	0	962
16:10:08 TCP rcv 12083	5149	0	-34	928
16:10:08 TCP rcv 12333	5150	-34	-34	962
16:10:08 TCP snd 12333	1  0	0	0	2048

The next configuration is identical to the previous, with the exception that a
fuzzball running the proposed send-policy and ack-policy was used. Note the
efficient use of piggyback and aggregation. There was no evidence whatsoever
of congestion in the gateway and no hesitation or glitches in the terminal
copy. The operator "feel" was indistinguishable from the first experiment
using the fuzzball connected via 4800-bps. In all, the performance improvement
visible to the operator was genuinely dramatic.

TELNET from DCN5 fuzzball via DCN6 fuzzball to USC-ISID TOPS-20
16:30:41 TCP snd 556	1  0	0	0	961
16:30:41 TCP snd 622	2  1	0	-1	960
16:30:42 TCP rcv 889	728	0	-5	957
16:30:42 TCP snd 1606	1  2	1	-3	958
16:30:43 TCP rcv 2456	2107	0	0	962
16:30:43 TCP snd 2456	1  1	0	-5	957
16:30:44 TCP rcv 3757	3641	0	0	962
16:30:45 TCP rcv 4440	4318	0	-1	961
16:30:46 TCP snd 5107	1  1	0	-1	961
16:30:46 TCP rcv 5474	5319	0	-5	957
16:30:47 TCP snd 6074	1  1	0	-1	960
16:30:47 TCP rcv 6641	6495	0	-8	954
16:30:48 TCP snd 6858	1  1	0	-1	960
..
16:31:09 TCP rcv 28081	27933	0	-1	961
16:31:09 TCP snd 28081	1  1	0	-1	960
16:31:10 TCP rcv 29215	29098	0	0	962
16:31:10 TCP snd 29215	1  1	0	-11	951
16:31:11 TCP rcv 30232	30115	0	0	962
16:31:11 TCP rcv 30632	30510	0	-1	961
16:31:12 TCP snd 31382	1  0	0	0	962

Conclusions

The send-policy and ack-policy suggested by John Nagle and myself and studied
in this memo do in fact dramatically improve the performance of workstations
operated in configurations including a high-speed local net connected to the
world by relatively thin wires. The operator "feel" is as good as or better
than client packetization timeouts used in some TELNET implementations and in
the CCITT PAD. The policies automatically adapt to nets of different speeds,
since the sender delay is a function only of the round-trip ACK time, and
becomes vanishingly small for Ethernets.

There was one area of concern that was revealed in testing. Sometimes hosts do
not utilize maximum-size segments for bulk-data transfer after establishing an
MSS specifying a maximum size. For instance, TOPS-20 uses segments of 400-odd
octets after specifying an MSS of 536. Unfortunately, anything less than the
MSS degenerates to stop-and-wait, so unnecessary delays occur. As a
refinement, some other choice of queue trigger may be indicated - perhaps some
fraction of the MSS, an arbitrary value or even a function dependent on
round-trip delay.

Dave
-------
-----------[000006][next][prev][last][first]----------------------------------------------------
Date:      13 Feb 1985 1421 PST
From:      Ron Tencati <TENCATI@JPL-VLSI.ARPA>
To:        tcp-ip@nic
Cc:        info-vax@sri-kl,info-nets%mit-oz@mit-mc,tcp-ip-vms@nic,tcp-ip-unix@nic
Subject:   VMS networking software query

I'm just wondering if anyone else has a configuration similar to ours:

We are running the Kashtan 4.1cBSD version of the internet kernel software
on our VMS system.  Our VMS MAIL program has been patched to accept upper
and lower case, and recognize exclamation points in the "To:" field so that
our users can mail to "MAILER!User@Host". This mail is locally delivered
to our pseudo user MAILER and distributed over the net from there.  I assume
that FTP and FINGER servers remain the same.

What I am trying to do is determine if the 4.0 change is going to affect
anyone in the same way it is going to affect us.  I would be interested
in hearing from people who are not running Wollongong-supported software
as to just what it is they have.  We have a problem with buying the
TWG release in that someone has to pay for it, and it isn't really nice
to tell your local hosts that they will have to cough up $$$ if they
want to remain on the net.  Although that may well be the case.

Thanks for your inputs.

Ron
------
-----------[000007][next][prev][last][first]----------------------------------------------------
Date:      Wed, 13 Feb 85 17:25:33 pst
From:      Bill Croft <croft@safe>
To:        tcp-ip@nic
Subject:   Stanford Ethernet AppleTalk Gateway (SEAGATE)
I posted this to info-mac last week and just now thought that
the tcp-ip list might be interested.  If you take a copy of the
code or are considering putting one together, let me know.
----
The beta release of our Stanford Ethernet - AppleTalk Gateway
(SEAGATE) is ready.  On [SUMEX]<info-mac> the files are:

	seagate.ms	documentation in -ms format
	seagate.hard	the wirelist for the applebus interface
	seagate.shar1	the main gateway sources (including the above doc's)
	seagate.shar2	the ddt, dlq, testscc, and tftp subdirectories

All these files are plain ASCII and can be FTPed from SUMEX with
the 'anonymous' login.  The two shar (shell archive) files are
each about 170K bytes, so we would appreciate it if you would
avoid transfers during 9 AM to 5 PM PST.

Below are some sections of the formatted seagate.ms file.

----


        Stanford Ethernet Applebus Gateway (SEAGATE)


                         Bill Croft

                    Stanford University
                       Medical Center
                 SUMEX Project *, rm TB105
                    Stanford, CA  94305
                      croft@sumex.arpa

                     beta release, 1/85


                          ABSTRACT

          This note explains how to make your own gate-
     way between ethernet and applebus.  Such a gateway
     allows UNIX (or other) systems on the ethernet to
     act as servers for the Macintosh.


1.  Introduction

     This note describes SEAGATE, a gateway (Apple term:
bridge) that connects an ethernet using the DARPA internet
protocols (IP), to an applebus using Apple or IP protocols.
The IP protocol family was chosen because many campuses and
engineering groups are using it on their ethernets;  most
such groups have access to Berkeley UNIX.  With such a gate-
way in place, it becomes possible to create UNIX server dae-
mons to provide file, printing, mail, etc.  services for the
Macintoshes.

     In addition, it would be possible for the UNIX systems
to become integrated into a Macintosh Office such that UNIX
users could access Apple provided services such as printing
on a LaserWriter or sending mail to Macintosh users via an
Apple file server.

     This distribution of SEAGATE provides all the informa-
tion and software you should need to set up your own gateway.
Please bear in mind that this distribution is not 'sup-
ported' and that we can't give extensive help about the
mechanics of putting your gateway together.  I would like to
hear about bug reports or enhancements however.

2.  Protocol packages / servers

     UNIX provides a large number of IP based servers.  With
a stripped down C based IP package, many of the UNIX user
level programs (such as TELNET and FTP) could be ported over
to the Mac straightforwardly.  Alas, such a package does not
yet exist.  [I could envision creating such a package by
snipping sections out of the 4.2 BSD UNIX kernel].

     What does exist currently is a port of the MIT IP pack-
age (for the IBM PC) to the Macintosh.  This was done by
Mark Sherman of Dartmouth in the summer of 84.  Since there
were no commercial C compilers available at the time, Mark
transliterated the MIT code from C into Workshop Pascal.  At
this writing, the TFTP (trivial file transfer protocol) and
TIME (fetch time-of-day from server) programs from MIT have
been ported.  These programs work correctly between Macin-
toshes, or through the gateway between a Macintosh and UNIX.
Written by MIT, but as yet unported are the TCP and TELNET
packages.

     While this porting was a large and admirable project, I
am not sure that it is the right base to build Mac IP ser-
vices upon.  For one thing, the MIT TCP implementation (in
the original C) is incomplete and cannot handle data streams
in both directions (it's only good enough for TELNET, where
the sending stream is low volume).  My hope is that someone
will take a relatively full and debugged IP package and
adapt it to the Mac, all in the C language.

     Meanwhile, the gateway provides another alternative.
All Apple services on applebus are based on the applebus
datagram protocol, called DDP (datagram delivery protocol).
In addition to passing IP packets back and forth, the gate-
way will do a small amount of protocol conversion:  if it
receives a DDP from the applebus destined for the ethernet,
it will 'wrap' it with an IP/UDP header, doing appropriate
address and port number conversions.  This allows Apple DDP
services to be written as UDP daemons on UNIX, without
requiring any UNIX kernel changes.

     Conversely, a UDP packet received by the gateway from
the ethernet will be converted to a DDP (by stripping the
IP/UDP headers) if the UDP destination port number matches a
certain 'magic number'.  While these protocol conversion
functions are currently compiled into the gateway, and
easily altered, one could also imagine them being selected
dynamically based on any packet fields (such as host
address).  This would allow for hosts that understand DDP
packets directly at the kernel level.
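The two conversions just described can be sketched as follows. The UDP header layout used here is the real one from RFC 768; the `MAGIC_PORT` value and the treatment of DDP as an opaque payload are simplifying assumptions of mine, not details from the SEAGATE sources.

```python
import struct

# Hypothetical sketch of the gateway's DDP<->UDP conversion described
# above: a DDP datagram headed for the ethernet gets an 8-byte UDP
# header wrapped around it; a UDP packet whose destination port is the
# 'magic number' gets its headers stripped to recover the DDP.
MAGIC_PORT = 768  # assumption: the 'magic' UDP port selecting conversion

def wrap_ddp_in_udp(ddp_payload, src_port, dst_port):
    """Prefix a DDP datagram with a UDP header (checksum 0 = unused)."""
    length = 8 + len(ddp_payload)
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + ddp_payload

def unwrap_udp(packet):
    """Strip the UDP header; return the payload only if the destination
    port matches the magic number, else None (packet passed as plain IP)."""
    src, dst, length, _cksum = struct.unpack("!HHHH", packet[:8])
    if dst != MAGIC_PORT:
        return None
    return packet[8:length]
```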

3.  Addressing and routing

3.1.  IP 'nets' versus 'subnets'

     The gateway can be configured to treat each
(ether/applebus) cable as a separate IP 'net' number, or as
separate IP 'subnets'.  Unless you are at a site which
implements subnets, such as Stanford, MIT, or CMU, you will
probably use plain 'net' numbers.

     As mentioned above, the gateway can translate DDP
addresses (2 bytes of net number, 1 byte of node number) to
IP addresses (4 bytes total for both net number and node
number).  When subnets are NOT used, a mapping table inside
the gateway is used to convert between network/node numbers.
The information to set up this table is in the Configuration
section below.  If your site does not use subnets, you can
probably skip or skim over the next couple sections below.
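In the non-subnet case, the mapping table just mentioned amounts to a small lookup in each direction. The sample addresses below are invented for illustration, not Stanford's configuration.

```python
# Hypothetical sketch of the gateway's address mapping table for the
# non-subnet case: a DDP address (2-byte net, 1-byte node) is looked
# up in a configured table to produce a 4-byte IP address, and back.
ddp_to_ip = {
    (1, 5): "36.83.0.5",     # (ddp_net, ddp_node) -> IP address
    (1, 9): "36.83.0.9",
}
ip_to_ddp = {ip: ddp for ddp, ip in ddp_to_ip.items()}

def translate(ddp_net, ddp_node):
    """Map a DDP source/destination to its configured IP address."""
    return ddp_to_ip.get((ddp_net, ddp_node))
```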

3.2.  Subnets

3.2.1.  Subnet addressing limitations

3.3.  Routing protocols

     The gateway broadcasts an applebus RTMP (routing table
maintenance protocol) packet every 30 seconds.  This informs
the Macintoshes of the DDP network number of their applebus
cable.

     When routing a packet, if the IP (major) network number
of the destination does not match that of any interface, the
packet is forwarded to a 'default' gateway specified at con-
figuration time.  In the subnet case, the gateway assumes
that there are other 'smarter' gateways or hosts that will
answer ARPs for subnets not matching its own.

...

3.4.  Protocol conversion

3.5.  DDP routing

     At present the gateway only really knows about routing
IPs.  In the future it would be desirable to participate
more in applebus RTMP protocol, and to allow the ethernet
(or even the whole DARPA internet) to be used as a long-haul
backbone between applebus segments.

4.  Prerequisites

     To assemble your own gateway, you will need at least
the items below:

     The hardware is a 3 card multibus system:  A 'SUN'
     68000 CPU board, an Interlan NI3210 ethernet card, and
     a homebrew applebus card (about 8 chips) which takes an
     afternoon to wirewrap.  More details in the hardware
     section below.

     A UNIX (usually VAX) running 4.2 BSD, 4.1 BSD or Eun-
     ice.  This is because the source distributed is written
     in the PCC/MIT 68000 C compiler.  [This is the same
     compiler included with the SUMACC Mac C cross develop-
     ment kit.]  You can probably make do with any 68K C
     compiler and assembler, but it will be harder.

     Inside Mac, update service, and the Mac software sup-
     plement.

     Applebus Developer's Kit, includes:  protocol manual,
     applebus taps and interconnecting cable, Mac applebus
     drivers on SONY disks.

     Dartmouth's IP package from Mark Sherman
     (mss%dartmouth@csnet-relay).  The gateway distribution
     includes the binary for TFTP, but if you want the whole
     package (and source), you should get it from Mark.

     A Lisa Workshop system is handy to have around; you
     would need it to compile Mark's sources.  Even if you
     are doing development in C, Apple releases Applebus
     updates as a combination of Mac and Lisa disks.  The
     Mac disks contain the 'driver' binary resources.  The
     Lisa disks contain source for header files.

5.  Hardware used

5.1.  CPU board

5.2.  Ethernet board

5.3.  Applebus board

5.4.  Other hardware.

6.  Software organization

7.  Configuration

7.1.  Software

7.2.  CPU board

7.3.  NI3210 ethernet board

7.4.  Applebus board

8.  Operation

8.1.  Downloading

8.2.  Console 'commands'

8.3.  Debug printouts

8.4.  TFTP usage

9.  Throughput

     Using Mark's TFTP and the Berkeley 4.2 BSD TFTP daemon,
we made some simple timings.  On the Mac side, TFTP used a
ramdisk to avoid any delays induced by the slow SONY drive.
For a UNIX to Mac transfer, we found that the Mac took 43 ms
between data received and ack sent, while UNIX spent 25 ms
between ack received and next data sent

     Since these times were from the applebus peek program,
the Mac time is artificially high since it includes the 20
ms or so of packet transmission time on applebus (35 usec /
byte).  So then, each side has about a 20 ms delay before
responding.

     Most of the transfer occurred at 512 data bytes every
70 ms = 7314 bytes / sec = 58K baud.
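That arithmetic checks out directly:

```python
# Verifying the throughput figure quoted above: 512 data bytes every
# 70 ms is roughly 7314 bytes/sec, i.e. about 58K baud.
bytes_per_transfer = 512
period_s = 0.070
rate_bytes = bytes_per_transfer / period_s      # ~7314 bytes/sec
rate_baud = rate_bytes * 8                      # ~58500 bits/sec
```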

     Note however that the IP TFTP protocol is just that, a
'trivial' FTP.  It is purely half-duplex in nature.  When we
start using Apple's ATP, which can stream several packets
per acknowledgement, it should boost throughput signifi-
cantly.  Gursharan Sidhu tells me that their process-to-
process (no disks) ATP throughput is 180K baud (out of the
230K available on the cable).  This is very good, consider-
ing many TCP's running on 10 megabit ethernet are lucky to
get a few hundred kilobits of throughput.

10.  Future plans

     Here are some obvious things that could be done next.

     Here is the most interesting thing I would try:  
     get the 'per gateway' cost way down, by building a sin-
     gle board version of it.  I picked the Intel 82586 eth-
     ernet controller for just this reason:  all you should
     need is a board with the 68000, memory, the 82586 and
     the Z8530.  Hopefully you could get the cost down below
     $1000 per gateway.  Then just sprinkle them around
     campus, using ethernet as your 'long-haul' and applebus
     within a floor, or group of offices.

     I would like to quickly finish an ATP subroutine pack-
     age that runs on the UNIX side.  This will allow rapid
     construction of applebus servers on UNIX.  A program
     equivalent in functionality to FTP or TFTP should be
     less than 5 pages of Mac C code.  [Since the Mac MPP
     applebus driver package is doing the 'dirty work' of
     ATP for you].


11.  Acknowledgements

     Nick Veizades built and helped debug our applebus
hardware interface.  Mark Sherman's Mac IP package allowed
easy access to the UNIX TFTP daemon for general debugging.
Gursharan Sidhu, the 'applebus architect', deserves much
credit for making this protocol family as simple and elegant
as it is.  Arnie Lapinig of Apple was always helpful when we
needed another tap box or question answered.

     In the Stanford network community, Bill Yundt supplied
us with free hardware and Ed McGuigan kept the applebus
updates flowing in our direction.  Ed Pattermann (formerly
SUMEX director, now at Intellicorp) made the mistake of
turning us onto Macintoshes, when we 'should have been'
hacking on LISP machines.

-----------[000008][next][prev][last][first]----------------------------------------------------
Date:      Fri, 15-Feb-85 18:35:26 EST
From:      tcp-ip-unix@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   VMS networking software query

From: Ron Tencati <TENCATI@JPL-VLSI.ARPA>


I'm just wondering if anyone else has a configuration similar to ours:

We are running the Kashtan 4.1cBSD version of the internet kernel software
on our VMS system.  Our VMS MAIL program has been patched to accept upper
and lower case, and recognize exclamation points in the "To:" field so that
our users can mail to "MAILER!User@Host". This mail is locally delivered
to our pseudo user MAILER and distributed over the net from there.  I assume
that FTP and FINGER servers remain the same.

What I am trying to do is determine if the 4.0 change is going to affect
anyone in the same way it is going to affect us.  I would be interested
in hearing from people who are not running Wollongong-supported software
as to just what it is they have.  We have a problem with buying the
TWG release in that someone has to pay for it, and it isn't really nice
to tell your local hosts that they will have to cough up $$$ if they
want to remain on the net.  Although that may well be the case.

Thanks for your inputs.

Ron
------

-----------[000009][next][prev][last][first]----------------------------------------------------
Date:      18 Feb 85 13:39:11 EST
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   clarification of FTP protocol
When you start a RETR or SEND command, the server is supposed to
give an "intermediate reply" of 1xx when he is opening the data
connection.  Currently the UCLA FTP server gives two 1xx messages.
This causes problems for single-thread implementations.  After
reading the first intermediate reply, we go off to code to do the
transfer.  The second 1xx message doesn't even get ACK'ed until the
end of the transfer.  For long transfers this could cause trouble.
At best, it will cause lots of retransmissions.  At worst, the
server could time out.  I claim that there should be exactly one
1xx intermediate reply message.  However the RFC gives a state
diagram of the form

     -----------Wait-----------
              /      \
              |      |
              \      /
                1xx

Normally this allows any number of 1xx's (including 0).  I suspect
that this is just sloppy diagramming, and that the intent is clear
from other parts of the RFC.  Can someone give me an authoritative
ruling?

		
-------
-----------[000010][next][prev][last][first]----------------------------------------------------
Date:      Tue, 19 Feb 85 17:28:43 EST
From:      Ron Natalie <ron@BRL-TGR.ARPA>
To:        Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  clarification of FTP protocol
I don't understand how your TCP works if this causes problems.  Your
machine should ACK the second message, even when your FTP hasn't read
it.  While I fear the implementation I use also has heartburn, it would
be easy to make it agree with the documentation.  You just have to keep
eating 1XX replies until you get a 2XX, 3XX, 4XX, or 5XX, before you
do another command.
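
In Python, that rule might be sketched as follows (read_reply is a
hypothetical helper that returns the next reply line from the
control connection; this is an illustration, not any particular FTP):

```python
# Eat any number of 1xx intermediate replies until a 2xx/3xx/4xx/5xx
# completion reply arrives; only then issue the next command.
def final_reply(read_reply):
    reply = read_reply()
    while reply.startswith('1'):      # a second 1xx is harmless here
        reply = read_reply()
    return reply

replies = iter(['150 Opening data connection',
                '125 Transfer starting',
                '226 Transfer complete'])
assert final_reply(lambda: next(replies)) == '226 Transfer complete'
```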

-Ron
-----------[000011][next][prev][last][first]----------------------------------------------------
Date:      20 Feb 1985 09:30-EST
From:      CLYNN@BBNA.ARPA
To:        ron@BRL-TGR.ARPA
Cc:        HEDRICK@RUTGERS.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  clarification of FTP protocol
In general, a TCP cannot ACK data until it has taken responsibility
for delivering it to a user.  If the FTP is using a single buffer, for example,
on the control connection, and the first 1xx message was sent in a packet
which had the PUSH bit set, the message would be delivered to the user.
If the FTP were to then process the data connection without first establishing
another receive buffer on the control connection, the packet containing
the second 1xx message could not be processed (no buffer) and thus could
not be ACKed by a TCP.
-----------[000012][next][prev][last][first]----------------------------------------------------
Date:      Wed, 20 Feb 85 16:31:01 EST
From:      Mike Muuss <mike@BRL-TGR.ARPA>
To:        CLYNN@BBNA.ARPA
Cc:        TCP-IP@sri-nic.ARPA
Subject:   Re:  clarification of FTP protocol
Ahh.  4.2 BSD UNIX TCP ack's the data as soon as it has been QUEUED
for the user, not delivered to the user.  Allows additional asynchrony
between the network and user processing.
	-Mike
-----------[000013][next][prev][last][first]----------------------------------------------------
Date:      Fri, 22 Feb 85 14:02 EST
From:      David C. Plummer in disguise <DCP@SCRC-QUABBIN.ARPA>
To:        Mike Muuss <mike@BRL-TGR.ARPA>, CLYNN@BBNA.ARPA
Cc:        TCP-IP@SRI-NIC.ARPA
Subject:   Re:  clarification of FTP protocol
    Date:     Wed, 20 Feb 85 16:31:01 EST
    From:     Mike Muuss <mike@BRL-TGR.ARPA>

    Ahh.  4.2 BSD UNIX TCP ack's the data as soon as it has been QUEUED
    for the user, not delivered to the user.  Allows additional asynchrony
    between the network and user processing.
Sorry for not keeping track of this conversation (shoot me if I'm being
redundant).  TCP's /should/ ack data when it is queued, as this reduces
retransmission.  The /window/ should not be opened until the user
gobbles the data.
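
A toy receive-side sketch of that distinction (an illustration only,
not modeled on any particular TCP implementation):

```python
# ACK what has been queued; advertise window space only for what the
# user has actually read.  Illustration only, not a real TCP.
class Receiver:
    def __init__(self, bufsize):
        self.bufsize = bufsize
        self.queued = 0       # bytes queued, not yet read by the user
        self.acked = 0        # bytes ACKed back to the sender

    def segment_arrives(self, nbytes):
        self.queued += nbytes
        self.acked += nbytes  # ACK at queue time: stops retransmission

    def window(self):
        return self.bufsize - self.queued  # stays closed until read

    def user_reads(self, nbytes):
        self.queued -= nbytes              # only now does window reopen

r = Receiver(4096)
r.segment_arrives(1024)
assert r.acked == 1024 and r.window() == 3072
r.user_reads(1024)
assert r.window() == 4096
```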

-----------[000014][next][prev][last][first]----------------------------------------------------
Date:      Fri, 22 Feb 85 18:10:44 est
From:      Lee Moore  <lee@rochester.arpa>
To:        tcp-ip@sri-nic.arpa
Subject:   Hostnames server for 4.2BSD Unix
Has anybody written a Hostnames Server for 4.2BSD Unix?

Now you may ask, why would I want one since we have SRI-NIC?  I want
to use it in our campus network (which doesn't have access to the
Arpanet).  I know it's not too hard to write one but I thought I'd
ask anyway.

thanks,
  lee




-----------[000015][next][prev][last][first]----------------------------------------------------
Date:      Sat, 23 Feb 85 02:59 EST
From:      Mike StJohns <StJohns@MIT-MULTICS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Clarification on routing: gateways.

Situation:  One system hooked to the core network (Milnet for example),
it acts as a gateway for another network directly attached to it.

1) How does the core network know that this specific host is a gateway
for the directly attached network?  Does this have to be specified in
the core network routing tables?  Or is there some protocol (EGP, GGP?)
it uses to tell the core network (not the core GATEWAYS..)  it is the
gateway for the directly attached network?

2) If there are no other gateways on the core network, what protocols
does the host acting as a gateway have to run?  (ICMP of course, but GGP
or EGP?)

3) True or false:  All hosts on a network (any type of network) either
need to know all the gateways on the local net or a specific host that
can handle gateway routing for them.

(1 and 2 refer specifically to an IMP based network)

My confusion stems from reading the various gateway RFCs.  Unless I
totally misread them, a host acting as a gateway for a backend network,
which only wants to access (or provide access to) hosts on the core
network need not implement ANY gateway protocol.

Mike .
-----------[000016][next][prev][last][first]----------------------------------------------------
Date:      Mon, 25 Feb 85 11:36:58 EST
From:      Ron Natalie <ron@BRL-TGR.ARPA>
To:        Mike StJohns <StJohns@MIT-MULTICS.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  Clarification on routing: gateways.
Of course, you are asking your questions generically, but EGP and its
use has evolved to a pretty specific situation. First, it generally does
not make a difference if you are talking IMP based or any other type of
network.  If you are talking about a system other than "THE CORE," it's
not clear that you want to do EGP or GGP.  What you need is some
conspiracy between all the gateways as to how to route the off-net
packets.  Now in the current system, it's a bristly tree (more like a
hedge).  There is THE CORE.  These gateways talk to each other using
their own private protocol (GGP).  Some of these gateways speak EGP, and
collect information from gateways representing other autonomous
systems.  Research and discussion (arguments) are taking place at this very
instant to handle the replacement for all of this.

1) How does the core network know that this specific host is a gateway
for the directly attached network?

	It is a protocol.  A gateway that is directly connected to a
	net which has EGP speaking core gateways on it (i.e., directly
	connected to MILNET or ARPANET), uses EGP to specify what networks
	are behind it.  It may also speak for other gateways in its
	autonomous system.

2) If there are no other gateways on the core network, what protocols
does the host acting as a gateway have to run.

	This doesn't happen in THE CORE.  Gateways don't need to support
	any protocol other than IP, especially if there aren't any other
	gateways.  ICMP is useful however.  As soon as you have two
	gateways, you need some cooperation between them however, be it
	EGP, GGP, or some other protocol.

3)  True or false:  All hosts on a network (any type of network) either
need to know all the gateways on the local net or a specific host that
can handle gateway routing for them.

	False, typically hosts only know of one or two default (either
	closest or very knowledgeable) gateways that they send all
	unknown packets to.  These gateways will advise of better
	routes and hosts generally cache the advisories for some
	short period of time.
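
That host behaviour can be sketched roughly as follows (the
addresses and the cache lifetime are made up for illustration):

```python
# Rough sketch of the host behaviour described above: send unknown
# destinations to a default gateway, and cache ICMP redirect advice
# for a short time.  Addresses and the 60 s lifetime are assumptions.
DEFAULT_GW = '10.0.0.1'
REDIRECT_TTL = 60.0                 # seconds (assumed value)

redirects = {}                      # dest -> (better gateway, expiry)

def first_hop(dest, now):
    entry = redirects.get(dest)
    if entry and entry[1] > now:
        return entry[0]             # cached redirect still fresh
    return DEFAULT_GW

def icmp_redirect(dest, better_gw, now):
    redirects[dest] = (better_gw, now + REDIRECT_TTL)

assert first_hop('10.0.5.9', now=0.0) == DEFAULT_GW
icmp_redirect('10.0.5.9', '10.0.0.2', now=0.0)
assert first_hop('10.0.5.9', now=1.0) == '10.0.0.2'
assert first_hop('10.0.5.9', now=120.0) == DEFAULT_GW  # advice expired
```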

-Ron
-----------[000017][next][prev][last][first]----------------------------------------------------
Date:      25 Feb 1985 11:45:03 EST
From:      MILLS@USC-ISID.ARPA
To:        StJohns@MIT-MULTICS.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        MILLS@USC-ISID.ARPA
Subject:   Re: Clarification on routing: gateways.
In response to the message sent   Sat, 23 Feb 85 02:59 EST from StJohns@MIT-MULTICS.ARPA

Mike,

A stab at answering some of your questions:

1) How does the core network know that this specific host is a gateway
for the directly attached network?  Does this have to be specified in
the core network routing tables?  Or is there some protocol (EGP, GGP?)
it uses to tell the core network (not the core GATEWAYS..)  it is the
gateway for the directly attached network?

A: I'm not sure what you mean by "core network." The network itself may have
no knowledge of the IP routing, which is maintained by gateways. In order to
announce the presence of a network, your host would have to operate as a
gateway and, as such, run EGP with another gateway, usually one belonging
to the "core system."

2) If there are no other gateways on the core network, what protocols
does the host acting as a gateway have to run?  (ICMP of course, but GGP
or EGP?)

A: I think your scenario reduces to a world with two networks connected by
your host acting as a gateway. It has no peers and thus runs no algorithm at
all. Switching datagrams between the two networks is strictly a private
matter, although it would have to support the usual host-gateway ICMP
interactions in any case.

3) True or false:  All hosts on a network (any type of network) either
need to know all the gateways on the local net or a specific host that
can handle gateway routing for them.

(1 and 2 refer specifically to an IMP based network)

A: No cigar. The hosts need know only a "small" set of gateways, one of which
can be used to get things going. ICMP redirects then will correct the initial
assumption. The above points have nothing to do with IMPs.

My confusion stems from reading the various gateway RFCs.  Unless I
totally misread them, a host acting as a gateway for a backend network,
which only wants to access (or provide access to) hosts on the core
network need not implement ANY gateway protocol.

A: False. The host must support gateway services and thus must run EGP. The
various RFCs are in fact confusing, since they represent evolutionary steps in
what has turned out to be a rocky road to consensus. Read RFC-904 as the
definitive statement on protocol specification. The earlier concepts of
"appropriate first hop" and "stub EGP" may be misleading and hard to reconcile
with that document. In case of conflict, the simpler configuration rules
expressed in RFC-904 take precedence over the earlier "tree-structured"
restrictions.

Dave
-------
-----------[000018][next][prev][last][first]----------------------------------------------------
Date:      25 Feb 85 14:01:08 EST (Mon)
From:      Christopher A Kent <cak@Purdue.ARPA>
To:        Ron Natalie <ron@BRL-TGR.ARPA>
Cc:        Mike StJohns <StJohns@MIT-MULTICS.ARPA>, tcp-ip@SRI-NIC.ARPA
Subject:   Re: Clarification on routing: gateways.
	From: Ron Natalie <ron@BRL-TGR>

	This doesn't happen in THE CORE.  Gateways don't need to support
	any protocol other than IP, especially if there aren't any other
	gateways.  ICMP is useful however.  As soon as you have two
	gateways, you need some cooperation between them however, be it
	EGP, GGP, or some other protocol.

False. An implementation of IP must support ICMP -- ICMP is a mandatory
portion of any IP implementation, according to RFC 791, 792.

Cheers,
chris
----------
-----------[000019][next][prev][last][first]----------------------------------------------------
Date:      Mon, 25 Feb 85 14:13:37 EST
From:      Ron Natalie <ron@BRL-TGR.ARPA>
To:        Christopher A Kent <cak@PURDUE.ARPA>
Cc:        Ron Natalie <ron@BRL-TGR.ARPA>, Mike StJohns <StJohns@MIT-MULTICS.ARPA>, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  Clarification on routing: gateways.
FALSE!  Neither RFC 791 (IP) nor 792 (ICMP) indicates that ICMP is
required.  It isn't, things work just fine without it, although
everyone is encouraged to support it, and the general tone is that
any reasonable IP implementation will do ICMP.  The only law that
I ever heard is that hosts directly connected to MILNET or ARPANET
must honor ICMP redirects to comply with the load sharing of the
MILNET-ARPANET bridges.

Turns out that while gateways frequently have need to send ICMP
messages, the only ones they ever look at are pretty mundane
things like the timestamp.  We got by for the longest time without
doing ICMP at all, and then by only checking for ICMP echos.
The question was directed specifically at gateways.

-Ron
-----------[000020][next][prev][last][first]----------------------------------------------------
Date:      Mon 25 Feb 85 14:41:31-EST
From:      J. Noel Chiappa <JNC@MIT-XX.ARPA>
To:        StJohns@MIT-MULTICS.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        JNC@MIT-XX.ARPA
Subject:   Re: Clarification on routing: gateways.
	The answers given so far are correct, but to help those who
were confused by the exchange, let me briefly sketch the 'big picture'
in slightly broader terms:

	The Internet system is currently divided into contiguous,
independent groups of networks; these groups are called 'autonomous
systems'. The 'leader among equals' of these groups is one called 'the
core', comprised of the ARPANet and MILNet and some others. The
system has gotten too big and too dynamic to exist on static routing
tables; instead, gateways exchange information on who is where,
and how to get there, using 'routing protocols'.
	The protocol that the autonomous systems use to talk to one
another is called EGP (for Exterior Gateway Protocol). It is used by
the gateways which are (logically) on the edge of autonomous systems
to talk to gateways outside their autonomous system (hence the
'Exterior'). Inside an autonomous system, the gateways of that system
may use whatever means they desire to communicate routing information.
GGP is one such protocol, and is currently used by the gateways in the
core.

	Your last point ("a host acting as a gateway for a backend
network, which only wants to access, or provide access to, hosts on
the core network need not implement ANY gateway protocol") betrays a
fundamental misunderstanding, and a common cause of problems: you may
know where the rest of the system is, but they may not know where you
are. It is the job of routing protocols (such as EGP) to provide that
information to everyone.

	Finally, a point to note about gateways is that the Internet
system is going through a (probably long) period of change. Building
gateways is an art, not a science. Documentation will be sketchy and
often outdated, etc. The required protocols and functionality will
change, so it will not be possible to relax with a 'completed'
gateway. All in all, for these reasons, building gateways is not an
encouraged occupation.

	This all applies to the existing large Internet system; if you
have a smaller, detached system, then the gateways there can do
whatever they like as far as exchanging routing information goes.

		Noel
-------
-----------[000021][next][prev][last][first]----------------------------------------------------
Date:      25 Feb 85 16:59:56 EST (Mon)
From:      Christopher A Kent <cak@Purdue.ARPA>
To:        Ron Natalie <ron@BRL-TGR.ARPA>
Cc:        Christopher A Kent <cak@Purdue.ARPA>, Mike StJohns <StJohns@MIT-MULTICS.ARPA>, tcp-ip@SRI-NIC.ARPA
Subject:   Re: Clarification on routing: gateways.
The first paragraph of RFC 792 reads:

   The Internet Protocol (IP) [1] is used for host-to-host datagram
   service in a system of interconnected networks called the
   Catenet [2].  The network connecting devices are called Gateways.
   These gateways communicate between themselves for control purposes
   via a Gateway to Gateway Protocol (GGP) [3,4].  Occasionally a
   gateway or destination host will communicate with a source host, for
   example, to report an error in datagram processing.  For such
   purposes this protocol, the Internet Control Message Protocol (ICMP),
   is used.  ICMP, uses the basic support of IP as if it were a higher
   level protocol, however, ICMP is actually an integral part of IP, and
   must be implemented by every IP module.

Need it be stated any more strongly?

chris
----------
-----------[000022][next][prev][last][first]----------------------------------------------------
Date:      Mon, 25 Feb 85 21:00 EST
From:      Mike StJohns <StJohns@MIT-MULTICS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Clarification on routing: gateways.
From the messages I've received, it's obvious that everyone is as
confused as I am.  OK, let's throw out the generic approach and go for
specifics.

Assume a "core" network consisting of c-30 imps, about 6-10 of them.
Assume a host connected to this core network via HDH.  Assume this host
is also attached to a backend network which has three other hosts.  The
backend network runs TCP/IP.  Assume that while there are other hosts on
the "core" network, there are no other networks connected and therefore,
no other gateways.  Assume that most if not all of the c-30 imps have
c-30 tacs attached to them.

1) Neither EGP nor GGP is applicable since there are NO OTHER gateways.

2) The only thing the backend hosts have to know is to forward packets
with unknown destinations to the host that is acting as a gateway.  It
either forwards the packet to the IMP it is attached to, or it drops it.

Q?)  (Maybe this is the one I should have asked before) Do the TACs
maintain information about gateways on the core?

Q?)  Again, are my assumptions correct that neither EGP nor GGP need be
run?

The above describes the initial configuration of DISNET, the Secret
level MILNET clone.  The host with the backend network is a Multics
system at the pentagon.  The rest of the hosts on the backend network
are Multics systems.  The TACs and IMPS are located at various places
around the country.  Unfortunately, there is no current implementation
of EGP available that will run on these Multics systems.

Thanks to all of you for your messages so far.  Mike
-----------[000023][next][prev][last][first]----------------------------------------------------
Date:      26 Feb 1985 11:34:57 EST
From:      MILLS@USC-ISID.ARPA
To:        ron@BRL-TGR.ARPA, cak@PURDUE.ARPA
Cc:        StJohns@MIT-MULTICS.ARPA, tcp-ip@SRI-NIC.ARPA, MILLS@USC-ISID.ARPA
Subject:   Re:  Clarification on routing: gateways.
In response to the message sent      Mon, 25 Feb 85 14:13:37 EST from ron@BRL-TGR.ARPA

Ron,

Byte your tongue, friend. Regardless of the impression you might get while
sifting through the RFC ooze, ICMP is emphatically required in all hosts
and gateways. This is the position taken officially by the IAB,
Task Forces, contract implementors and DCA. Gateways are expected to
faithfully report unreachable paths and reply to echos, although I suspect
nobody (except us?) does the timestamp thing right. However, you do have
a point in that if a gateway never originates a message itself, but does
forward third-party traffic, it will never see an ICMP message directed to
itself.

Dave
-------
-----------[000024][next][prev][last][first]----------------------------------------------------
Date:      26 Feb 1985 11:46:46 EST
From:      MILLS@USC-ISID.ARPA
To:        StJohns@MIT-MULTICS.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        MILLS@USC-ISID.ARPA
Subject:   Re: Clarification on routing: gateways.
In response to the message sent   Mon, 25 Feb 85 21:00 EST from StJohns@MIT-MULTICS.ARPA

Mike,

From your description, the Multics host is acting like a gateway. Set the
default gateway address in the IMPs to its address on each net. Teach
the Multics host how to forward from one interface to the other. This
can be done with fixed tables, no protocols and no particular implementation
ingenuity. You get to build your very own Internet and even assign your own
address space. Enjoy.
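
The "fixed tables" forwarder is only a few lines; a sketch (network
numbers and interface names here are invented for illustration):

```python
# Minimal sketch of the fixed-table forwarding described above.
# Network numbers and interface names are made up.
ROUTES = {
    '10.': 'imp0',        # the IMP-side (core) network
    '192.5.48.': 'lan0',  # the backend network
}

def route(dest_ip):
    # Match the destination against each configured network prefix;
    # check longer prefixes first so '192.5.48.' wins over '10.'.
    for prefix, iface in sorted(ROUTES.items(),
                                key=lambda kv: len(kv[0]), reverse=True):
        if dest_ip.startswith(prefix):
            return iface
    return None           # no route: drop the datagram

assert route('10.2.0.7') == 'imp0'
assert route('192.5.48.12') == 'lan0'
assert route('128.9.0.32') is None
```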

Dave
-------
-----------[000025][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26 Feb 85 14:04:09 EST
From:      Ron Natalie <ron@BRL-TGR.ARPA>
To:        Mike StJohns <StJohns@MIT-MULTICS.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  Clarification on routing: gateways.
BRL runs a 6 node C30 Net inside BRL.  There are currently three or
four gateways on it, counting two going to the MILNET.  One gateway
(going on two) is speaking EGP to the CORE to make the rest of the
world happy.  The rest of the routing tables are static (i.e. initially
compiled into the gateways, but changeable by manual intervention).  We
are working towards some kind of interior gateway protocol since many
of our paths are redundant.  The C30's currently form the core of our
network.  Various locations around our facility have gateways that hook
up a variety of LAN's including Ethernet, Proteon Ringnet, Hyperchannel,
and DEC PCL-11B.  We also have 4 C30 TACs on this net.

The only thing the hosts know is what their address is, and where to send
unknown packets to.  Some hosts are given a little extra hint when they
just happened to be multihomed.  We got an Ethernet-based laser printer
from IMAGEN; we just keyed in its address and the address of the
gateway and it flies.

TACs don't seem to keep any information on gateways CORE or otherwise.
The TAC software that we are using has the IMP address of the gateway
off the IMP network compiled in it.  I don't know if they do anything
with the ICMP redirects or not.

It would seem to me that all you have to do is tell the hosts on the LAN
and on the IMPs that the default gateway is the MULTICS machine.  It
only needs to be prepared to forward across appropriately to make things
work minimally.  Make things easy on yourself and decree the maximum IP
packet size to be the same on both nets.  I've come across very few hosts
that fragment properly in the gateway case.

-Ron
-----------[000026][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26 Feb 85 15:03:05 EST
From:      Ron Natalie <ron@BRL-TGR.ARPA>
To:        MILLS@USC-ISID.ARPA
Cc:        ron@BRL-TGR.ARPA, cak@PURDUE.ARPA, StJohns@MIT-MULTICS.ARPA, tcp-ip@SRI-NIC.ARPA, MILLS@USC-ISID.ARPA
Subject:   Re:  Clarification on routing: gateways.
I stand corrected, that ICMP is required.  It turns out that for the
most part, you can get along just fine without it.  The only time I
ever heard any prior furor over ICMP was when MILNET was split off.

While Destination Unreachables and Redirects are handy, and pings
satisfy people whose minds work that way, other features of ICMP
are ignored, like source quench (which I send when I can't send
any more packets to that host, lest my IMP block) and the timestamp
as you indicate.

-Ron
-----------[000027][next][prev][last][first]----------------------------------------------------
Date:      28 Feb 1985 1027-PST
From:      Contr23 <CONTR23 at NOSC-TECR>
To:        TCP-IP at SRI-NIC
Subject:   Info on DoD protocol reference model?

Could anyone give me some information on the "DoD Protocol Reference
Model"? I have seen some documents on this, the most complete being
produced by Lillienkamp, Mandell and Smith at SDC dated 2 Dec 83.
I found it to be more comprehensive and more applicable to some of
our network design problems than the ubiquitous ISO model. I have 
also seen the "DoD model" mentioned in other papers (and also sometimes
referred to as the "DARPA model").

Specifically, my questions include (but are not limited to):
Is there an official DoD document or draft which describes the
concept; Are there other documents/models sponsored by the DoD;
Have later revisions of the SDC work been released; What is the
status of the MIL-STD FTP and SMTP specs?

Any information is appreciated,
Thanks

Paul Higgins
E-Systems ECI Division
net: contr@nosc-tecr.

------

END OF DOCUMENT