The 'Security Digest' Archives (TM)


ARCHIVE: TCP-IP Distribution List - Archives (1986)
DOCUMENT: TCP-IP Distribution List for March 1986 (109 messages, 58577 bytes)
SOURCE: http://securitydigest.org/exec/display?f=tcp-ip/archive/1986/03.txt&t=text/plain
NOTICE: securitydigest.org recognises the rights of all third-party works.

START OF DOCUMENT

-----------[000000][next][prev][last][first]----------------------------------------------------
Date:      Sat,  1 Mar 86 13:41:24 EST
From:      jas@proteon.arpa
To:        tcp-ip@sri-nic.arpa, v2lni-people@mc.lcs.mit.edu
Subject:   4.3BSD proNET driver
Would everyone who's been working with the 4.3BSD beta test proNET
drivers please contact me *directly* and let me know how it's been working.
It seems to have some problems, and I'd like to help iron them out.
I'm primarily responsible for all the changes made to the 4.2BSD driver
to generate the 4.3BSD one, and would like to see it really work right.
I do know of some bugs in it, but they don't seem to be related to
the problems in the field.

					John Shriver
					jas@proteon.arpa
-------

-----------[000001][next][prev][last][first]----------------------------------------------------
Date:      1 Mar 1986 22:03:05 EST
From:      DASG@USC-ISID.ARPA
To:        brescia@BBNCCV.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        DASG@USC-ISID.ARPA
Subject:   Message transmission times
Just an echo of my sentiments as well concerning the geological time.  I am 
only a user of the net and my host is SRI-NIC.  I am a speed typist and 
routinely find that I fill up my keyboard buffer faster than DDN can accept.
Matters are really ridiculous when I transmit a disk-based ASCII file.  For
example, a routine transmission of 2-3 pages will take about 15 minutes to
go through the system, with 20-30 minutes not being unheard of.  I called the
SRI sysop one time and although sympathetic, his quote was along the lines of
"no one ever said DDN was fast".  Given that DDN is supposed to be the
mandatory networking medium for the military in the future, we had better do
something to improve it or overnight letters will be quicker.

gary swallow
DASG-AMZ
AV 225-1633, comm (202) 695-1633
-------
-----------[000002][next][prev][last][first]----------------------------------------------------
Date:      3 Mar 1986 1008-PST (Monday)
From:      Barry Leiner <leiner@RIACS.ARPA>
To:        Glen Foster <GFoster@USC-ISI.ARPA>
Cc:        Rick Adams <rick@SEISMO.CSS.GOV>, tcp-ip@SRI-NIC.ARPA, Mike Brescia <brescia@BBNCCV.ARPA>
Subject:   Re: Poor mil/arpa performance
Mike Brescia,

The symptoms described are not unusual for ARPA TACs.  I used to get
similar symptoms many times before switching to a SUN (i.e., using a
terminal through a TAC connected to ISIA/TOPS20).

The working hypothesis was that the problem was caused by an
interaction between flow control on the Arpanet/Milnet, the TAC TCP and
the TOPS20 TCP.  However, insufficient effort was put on the problem to
solve it (by the appropriate parties, namely BBN working with ISI).

Regards,

Barry



----------
-----------[000003][next][prev][last][first]----------------------------------------------------
Date:      3 Mar 1986 07:47:45 EST
From:      Glen Foster <GFoster@USC-ISI.ARPA>
To:        Mike Brescia <brescia@BBNCCV.ARPA>
Cc:        Rick Adams <rick@SEISMO.CSS.GOV>, tcp-ip@SRI-NIC.ARPA
Subject:   Re: Poor mil/arpa performance

To answer your questions about how the DARPA program mgr. is accessing
Seismo...

She's using an IBM PC running Dick Gilmann's VDTE HP terminal emulator.
The pathway you described is correct.  There is a 50K trunk between
IMP 28 and IMP 25 and, I suspect, most of the packets get sent thataway.
We are on a Bridge Ethernet <-> RS 232C to the pc in the building and I
am certain that there are no performance problems with the internal local
network.  No screen editor, the slow turnaround is just too painful!  She
is accessing Seismo.  I don't know about her typing speed; I'm about 25 wpm
(not including errors!) and I get the same delays when I'm on Seismo.

The TAC buffers actually get full at times and the darn thing starts
beeping at every keystroke.  On one occasion, this has lasted for over
five minutes.  

I'm not certain exactly where the bottleneck occurs.  I have been
experiencing similar delays (although not as severe) on USC-ISI and 
IPTO-VAX lately (esp. in the afternoon 1600 - 1900).  All hosts report
reasonable load averages and it "feels" like network problems (congestion?).

I hope the switch to the Arpanet TAC will help a little.  At least it will
lower resource usage slightly!

Glen
-------
-----------[000004][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3 Mar 86 08:26:47 est
From:      gross@mitre.ARPA (Phill Gross)
To:        tcp-ip@sri-nic
Cc:        gross@mitre.ARPA
Subject:   Mail Bridge Performance

I have also been a victim of the impressively poor Arpanet/Milnet mail
bridge performance.  When using a Milnet TAC from home, I get your
fairly standard 1200 baud type response for Milnet hosts.  When I try
to use Arpanet hosts, however, I often get delays so long that the TAC
tells me 'host not responding'.  This clearly makes it impossible to
work through one of the mail bridges, even if all you are doing is
actually reading mail.

A few simple tests seem to show that the throughput isn't all that
bad for ftp.  Why are things so bad for Telnet and pings?  (The
numbers I got were around 1 kbyte/s for the mail bridges, around 2
kbytes/s for Mitre's minimal gateway and perhaps a little greater than
2 kbytes/s on average for a fuzzway.)  The recent comment about
'8 pkts per keystroke' for certain pathological Telnet situations
could use some amplification.

I decided to look at the gateway throughput reports over the last year to see
if they had anything to tell us.  The most obvious thing has been a definite
increase in traffic over the past 5-6 weeks.  Various snippets of data and 
my speculative ramblings are included below for your reading pleasure.  
Comments are welcome, particularly from those closer to the data.

I found that over the last year the mail bridges (7 out of ~40 LSI gateways)
generally account for about 1/3 of the total traffic and between 
40-50% of the total dropped packets.  That sounds worse than it 
is, however, since the dropped packets account for only about 3% of 
the total sent.  So, although the mail bridges seem to drop more than 
their share, it doesn't seem that dropped packets account for their 
lousy performance.  I've included output below analyzing a couple 
of recent weeks.  It's interesting that the mail bridges tend to have 
longer packets than the rest of the gateways.   Could that mean that
people really use the mail bridges for mail and we Telnet users are
in the minority?  Anyone got any suggestions?  

The percentages at the bottom are straight from the throughput reports 
and, when boiled down, lead me to believe that no more than half
of the total traffic is real user data.  The rest is system overhead
(gateway protocols and icmp).  But that's a different argument.



Mail Bridge Data from Gateway Throughput Report for week of Jan 20
                      (38 total gateways)

                            datagrams        bytes
Mail Bridge Rcvd Totals :    28529446   2824653776   (avg pkt len= 99.0 bytes)
LSI Gateway Rcvd Totals :    93635883   7532803988   (avg pkt len= 80.4 bytes)
MB percent of Rcvd Total:      30.468       37.498

Mail Bridge Sent Totals :    28529773   2808049620   (avg pkt len= 98.4 bytes)
LSI Gateway Sent Totals :    94735845   7345672471   (avg pkt len= 77.5 bytes)
MB percent of Sent Total:      30.115       38.227

Mail Bridge Dropped:           726513  (2.55% of MB total sent)
LSI Gateway Dropped:          1858797  (1.96% of LSI total sent)
MB percent of Drpd :           39.085

percent pkts addressed to gateways  = 35.28
percent pkts originating at gateways= 38.43
percent pkts forwarded to gateways  = 41.94



Mail Bridge Data from Gateway Throughput Report for week of Jan 27
                      (39 total gateways)

                            datagrams        bytes
Mail Bridge Rcvd Totals :    33340101   3096532134   (avg pkt len= 92.9 bytes)
LSI Gateway Rcvd Totals :   100841370   8846421595   (avg pkt len= 87.7 bytes)
MB percent of Rcvd Total:      33.062       35.003

Mail Bridge Sent Totals :    33046453   3209057088   (avg pkt len= 97.1 bytes)
LSI Gateway Sent Totals :   103236490   8169295835   (avg pkt len= 79.1 bytes)
MB percent of Sent Total:      32.010       39.282

Mail Bridge Dropped:          1333934  (4.04% of MB total sent)
LSI Gateway Dropped:          3098725  (3.00% of LSI total sent)
MB percent of Drpd :           43.048

percent pkts addressed to gateways  = 42.79
percent pkts originating at gateways= 47.10
percent pkts forwarded to gateways  = 48.41
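
The derived columns above reduce directly from the raw totals: average
packet length is bytes over datagrams, and the MB percentages are the
mail bridge share of the LSI totals.  A minimal C sketch of the
reduction, with the week-of-Jan-20 received figures copied in (the
program is illustrative only, not part of the throughput reports):

    /* Reduce the raw weekly totals to the derived columns above. */
    #include <stdio.h>

    int main(void)
    {
        /* Week of Jan 20 received totals, copied from the report. */
        double mb_dgrams  = 28529446.0,  mb_bytes  = 2824653776.0;
        double lsi_dgrams = 93635883.0,  lsi_bytes = 7532803988.0;

        printf("MB avg pkt len : %.1f bytes\n", mb_bytes / mb_dgrams);
        printf("LSI avg pkt len: %.1f bytes\n", lsi_bytes / lsi_dgrams);
        printf("MB %% of dgrams : %.3f\n", 100.0 * mb_dgrams / lsi_dgrams);
        printf("MB %% of bytes  : %.3f\n", 100.0 * mb_bytes / lsi_bytes);
        return 0;
    }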


I also decided to plot the traffic sent by the gateways and convinced
myself that I saw some interesting trends.  In addition to a general
sensitivity in the data to holidays and school schedules, there has been
a definite upswing in traffic over the last 5-6 weeks.  I've included the
plots below.  It's great fun to try to figure out what it means.

Last year seems to start off with a plateau around 75 million packets
per week for the whole gateway system.  After climbing and leveling to
85-90M, it drops off around Memorial Day.  It picks up at 90-95M for
most of the summer before dropping in August (hard-working summer hires
quitting to head for the beaches?)  The rest of the year looks like target
practice except for lows at Labor Day, Thanksgiving, Christmas and
New Year's.

At first I thought this was a standard holiday trend but
then noticed that several of these turned out to be problems in data
collection (presumably either gateways or the monitoring hosts were
down).  This may have even more sinister implications - when BBN goes
home for the holidays, the network falls apart.  (And here I thought
robustness was one of the fundamental requirements.)

The highest traffic was the second week of the new year (schools back
in session, time to catch up on sf-lovers?)  The most recent 5-6 weeks show
an upswing, which, if continued, means a traffic increase of 33% over the
last year.

The data for the mail bridges seems flatter than for the system as a
whole.  Except for significant bumps in Mar-Apr and mid-summer, the traffic
hovered between 25-27 Million packets per week through September.
There does seem to be the same pseudo-holiday effect toward the end
of the year and the data also reflects the upswing over the last 5-6
weeks (but without the bump in the second week of Jan).


          Traffic Sent by LSI Gateways  (1/28/85 - 2/17/86)
120  |              (in Million pkts)
     |                                                 .       
110  |                                                      .  
     |                                                     .   
100  |            .                   .        .         .     
     |                      .  .           .            . .    
 90  |        .       ..  .. .. .             .   ..           
     |         ... ...                  ..  .             
 80  |      .           .        . . .                        
     |.   .. .           .        .             .   *         
 70  |   .                          *  .              .          
     |                                           *   *           
<60  |                                    *  *                  
     +---------------------------------------------------------
      JF   M   A    M   J   J   A   S    O   N   D    J   F
                  (* denotes incomplete data)


          Traffic Sent by Mail Bridges  (1/28/85 - 2/17/86)
 40  |              (in Million pkts)
     |                                                           
     |                                                           
     |            .            .                         . ..    
 30  |        ...  .          . .              .          .        
     |      .    .  .. .  .             .. .      ..   ..        
     |.   .. .        . .. ...   ... ..     . .                  
     |   .                             .        .     .          
 20  |                              *            *  **           
     |                                    *                      
<15  |                                       *                  
     +---------------------------------------------------------
      JF   M   A    M   J   J   A   S    O   N   D    J   F
                  (* denotes incomplete data)

So if things seem to have gotten worse lately, it's because there has
been a real increase in traffic.  Perhaps the recent traffic numbers 
(30-33M pkts for mail bridges and 100-105M for the whole system) represent 
a new Peter Principle corollary - the system has been utilized past its
level of competence.  Anyone know the installation schedule for the
Butterflys?

Has anyone out there wasted the disk space to save the gateway throughput 
reports over the last few years?  If so, get in touch.  I'd be interested 
to get a longer baseline.  Since my programs can't read paper, I'd 
prefer online copies (but I'll take what I can get).

Phill Gross
-----------[000005][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3 Mar 86 9:08:26 EST
From:      Terry Slattery <tcs@usna.arpa>
To:        mills@dcn6.arpa, mike@BRL.ARPA
Subject:   MILNET / ARPANET gateways
The script below also points out dropped packets as well as the
horrible round trip times over the MIL/ARPA gateways.  I was trying
to get the NTP data Dave collected.  
Script started on Mon Mar  3 08:58:33 1986
1 usna> ping dcn1.arpa
PING dcn1.arpa (128.4.0.1): 56 data bytes
64 bytes from 128.4.0.1: icmp_seq=27. time=4140. ms
64 bytes from 128.4.0.1: icmp_seq=28. time=3190. ms
64 bytes from 128.4.0.1: icmp_seq=29. time=3300. ms
64 bytes from 128.4.0.1: icmp_seq=13. time=22150. ms
64 bytes from 128.4.0.1: icmp_seq=14. time=21590. ms
64 bytes from 128.4.0.1: icmp_seq=15. time=20630. ms
64 bytes from 128.4.0.1: icmp_seq=16. time=22510. ms
64 bytes from 128.4.0.1: icmp_seq=30. time=8550. ms
64 bytes from 128.4.0.1: icmp_seq=31. time=7720. ms
64 bytes from 128.4.0.1: icmp_seq=17. time=22410. ms
64 bytes from 128.4.0.1: icmp_seq=32. time=8360. ms
64 bytes from 128.4.0.1: icmp_seq=33. time=11060. ms
64 bytes from 128.4.0.1: icmp_seq=34. time=10920. ms
64 bytes from 128.4.0.1: icmp_seq=36. time=8960. ms
64 bytes from 128.4.0.1: icmp_seq=37. time=8020. ms
64 bytes from 128.4.0.1: icmp_seq=38. time=7940. ms
64 bytes from 128.4.0.1: icmp_seq=39. time=7120. ms
64 bytes from 128.4.0.1: icmp_seq=40. time=6170. ms
64 bytes from 128.4.0.1: icmp_seq=41. time=5830. ms
64 bytes from 128.4.0.1: icmp_seq=42. time=4990. ms
64 bytes from 128.4.0.1: icmp_seq=43. time=4050. ms
64 bytes from 128.4.0.1: icmp_seq=44. time=3200. ms
64 bytes from 128.4.0.1: icmp_seq=45. time=3110. ms
64 bytes from 128.4.0.1: icmp_seq=46. time=3040. ms
64 bytes from 128.4.0.1: icmp_seq=47. time=2560. ms
64 bytes from 128.4.0.1: icmp_seq=48. time=1680. ms
64 bytes from 128.4.0.1: icmp_seq=49. time=1760. ms
64 bytes from 128.4.0.1: icmp_seq=18. time=33110. ms
64 bytes from 128.4.0.1: icmp_seq=50. time=1710. ms
64 bytes from 128.4.0.1: icmp_seq=51. time=6330. ms
64 bytes from 128.4.0.1: icmp_seq=52. time=5430. ms
64 bytes from 128.4.0.1: icmp_seq=53. time=16360. ms
64 bytes from 128.4.0.1: icmp_seq=54. time=15930. ms
64 bytes from 128.4.0.1: icmp_seq=55. time=15030. ms
64 bytes from 128.4.0.1: icmp_seq=56. time=21620. ms
64 bytes from 128.4.0.1: icmp_seq=57. time=21590. ms
64 bytes from 128.4.0.1: icmp_seq=58. time=22250. ms
64 bytes from 128.4.0.1: icmp_seq=59. time=21330. ms
64 bytes from 128.4.0.1: icmp_seq=60. time=20430. ms
64 bytes from 128.4.0.1: icmp_seq=70. time=36320. ms
64 bytes from 128.4.0.1: icmp_seq=71. time=35410. ms
64 bytes from 128.4.0.1: icmp_seq=75. time=34630. ms
64 bytes from 128.4.0.1: icmp_seq=76. time=33690. ms
^C
----dcn1.arpa PING Statistics----
113 packets transmitted, 43 packets received, 61% packet loss
round-trip (ms)  min/avg/max = 1680/13398/36320
2 usna> 
2 usna> date
Mon Mar  3 09:00:45 EST 1986
3 usna> ^D
script done on Mon Mar  3 09:00:47 1986

	-tcs
-----------[000006][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3 Mar 86 10:27:39 est
From:      Jerry Feldman  <feldman@rochester.arpa>
To:        tcp-ip@sri-nic.arpa
Subject:   REMOVE ME

Sorry to bother everyone, but other attempts failed. Remove me from this list.
-----------[000007][next][prev][last][first]----------------------------------------------------
Date:      3 Mar 1986 12:42:34 CST
From:      DDN-IMP@GUNTER-ADAM.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   [oli2146 @ KOREA-EMH: PDP 11/70 for E-Mail host.]
Can anyone say what an old '11 needs to connect to the net?
                ---------------

Return-Path: <oli2146@korea-emh>
Received: FROM KOREA-EMH.ARPA BY GUNTER-ADAM.ARPA WITH TCP ; 3 Mar 86 05:07:00 CST
Date: 28 Feb 86 18:12 GMT
From: oli2146 @ KOREA-EMH
Subject: PDP 11/70 for E-Mail host.
To: ddn-usaf @ ddn1, afd @ gunter-adam, ddn-imp @ gunter-adam, programs @ hawaii-emh, pcdionc @ hawaii-emh
CC: ct @afcc-1, isg2146 @ KOREA-EMH, com1982 @ KOREA-EMH, c4s-opns @ KOREA-EMH, cc-1855 @ KOREA-EMH

Ref our pre-survey of Osan and Kunsan AB for milnet nodes.
We have been informed that the milnet installations for the two sites will
consist of a TAC and IMP.  This would not include an Electronic
Mail Host.  We are exploring the possibility of picking up a
potentially excess PDP 11/70 computer (configuration unknown).

What would have to be done to have this type of system configured to act
as a mail host?  What is the requirement for disk space?  How many comm
ports are required?  What is the RAM requirement?  We would be interested in
using either the same or a similar software package presently used on the
Korea Electronic Mail Host.

MARK H. MEADERS, Captain, USAF
Information Systems Liaison

-------
-----------[000008][next][prev][last][first]----------------------------------------------------
Date:      3 Mar 1986 1514-PST (Monday)
From:      Barry Leiner <leiner@RIACS.ARPA>
To:        Glen Foster <GFoster@USC-ISI.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA, Mike Brescia <brescia@BBNCCV.ARPA>
Subject:   Re: Poor mil/arpa performance
If I recall correctly, the delays were worst around 4pm EST. Not
surprising, maximum load both on east coast and west coast and in
between.

Barry

----------
-----------[000009][next][prev][last][first]----------------------------------------------------
Date:      3 Mar 1986 17:03:11 EST
From:      Edward A. Cain <cain@EDN-UNIX.ARPA>
To:        gross@mitre.ARPA (Phill Gross)
Cc:        testing-interest@usc-isif.arpa, tcp-ip@sri-nic, gross@mitre.ARPA
Subject:   Re: Mail Bridge Performance
Phill,

Thanks for the summary of mailbridge traffic. I think it does partially
explain why performance is so awful at times thru the mailbridges. The
correlation with school schedules is interesting, too, and probably a better
guess than any I've heard recently.

There is one other important consideration. Performance on the ARPANET alone
has been terrible at times. For example, ICMP ECHO and ECHO REPLY round-trip
measurements between east and west coast hosts were averaging 18 seconds on 
Feb 3-4, with tails of the delay distribution out to 37 seconds, as measured
from DCEC (via arpanet) and at BRL (via milnet). Delays were very high
again during the Feb 12-14 time period. Even worse, on Feb 20th, one hour
in the afternoon the roundtrip delay from DCEC to the arpanet interface of
the ISI mailbridge was 30-40 seconds, and from DCEC to the arpanet
interface of the SRI mailbridge the delays were 45-47 seconds during the
same hour, with 90% packet loss!!! 

Usually, this kind of behavior on the arpanet is coincident with the outage
of key lines or nodes in the arpanet. On Feb 20th for example, line 76
(utah to lbl2) and line 76 (sri2 to collins) were both down most of the
day because of flooded cableheads!!!  The loss of a key component in the
arpanet seems to create serious congestion when the traffic goes up. And
congestion is noticed quickly by the mailbridges, which are among the
busiest arpanet hosts in terms of both packets sent and connection blocks
used (in the IMP).

Some of the overhead traffic you mentioned, although still alarmingly
high, has decreased noticeably since a year ago. The decrease in Packets
Originating at a Gateway could be due mainly to hosts learning how to
handle ICMP Redirects. 

I don't suspect replacing the mailbridges with Butterflies is going to
make any noticeable difference. The new congestion control scheme for the IMP
might help, if anyone does anything with Source Quench, because it paves
the way for gateways to learn about congestion in the networks (currently,
RFNM blocking is the only trick). Unless some action is taken to provide
a congestion control strategy at both the network and internet levels, or
alternatively, enough spare capacity is provided in the arpanet to avoid
most of the congested situations, I don't think there will be any improvement
in mailbridge performance.

Ed Cain


-----------[000010][next][prev][last][first]----------------------------------------------------
Date:      3 Mar 1986 20:52-PST
From:      the tty of Geoffrey S. Goodfellow <Geoff@SRI-CSL.ARPA>
To:        cain@EDN-UNIX.ARPA
Cc:        testing-interest@USC-ISIF.ARPA, tcp-ip@SRI-NIC.ARPA, gross@MITRE.ARPA
Subject:   Re: Mail Bridge Performance
With respect to a decrease in Packets Originating at a Gateway (mainly from
hosts learning how to handle ICMP Redirects), not from the hosts behind the
SRI-CSL-GW, which connects two Ethernets (128.18 & 192.12.33) to each other
and to the ARPANET (10.2.0.2).

Remembering that only hosts and not gateways can "act on" ICMP Redirects,
our gateway ricochets a large portion of its GGP-network-destined traffic thru
MILLBL's ARPANET interface.  This is because GGP networks on the ARPANET,
such as SATNET, VAN and ISI to name but a few, are not in our
neighbor table (and we are not in theirs).

Much of the rest of our traffic seems to ricochet thru the WISC and PURDUE
EGP/GGP universe gateways.  Thus, we end up communicating next door to
Stanford or the AI Center's LISP machine lair downstairs by traipsing cross
country rather than directly with each other on the same or neighboring IMP.

Using gateways to pass data between networks by the injection of a packet in
one network interface and having it disgorge out the other is a great example
of efficacy in action, but this business of sending stuff "thru" gateways in
and out the same network interface doesn't seem to be a parsimonious use of
network resources.  Will the highly touted Butterfly gateways be solving
these types of problems any time soon?  If not, how about some work on getting
packets to the destination in the most efficient and direct manner?

g
-----------[000011][next][prev][last][first]----------------------------------------------------
Date:      3 Mar 1986 17:57:11 EST
From:      Glen Foster <GFoster@USC-ISI.ARPA>
To:        Barry Leiner <leiner@RIACS.ARPA>
Cc:        Glen Foster <GFoster@USC-ISI.ARPA>, Rick Adams <rick@SEISMO.CSS.GOV>, tcp-ip@SRI-NIC.ARPA, Mike Brescia <brescia@BBNCCV.ARPA>
Subject:   Re: Poor mil/arpa performance

The problem seems to have worsened in the last few months.  I experience
delays even when on a Sun telnetted to ISIA, although they are not as
bad as the TAC to ISIA or especially the TAC to Seismo.  The delays
are very variable during the day and seem to peak at "prime time."

I'll haul out the old stopwatch and get some representative times for
character echo.

Glen
-------
-----------[000012][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3 Mar 86 22:16:12 EST
From:      Mike Muuss <mike@BRL.ARPA>
To:        Phill Gross <gross@mitre.arpa>
Cc:        tcp-ip@sri-nic.arpa, gross@mitre.arpa
Subject:   Re:  Mail Bridge Performance
A much more interesting statistic to plot would be the number of
packets the gateways DROPPED, both as an absolute number versus time
and as a ratio with respect to the number not dropped.

A display of round trip times would also be interesting.

We have software at BRL that collects this kind of data, but to date
we only use it within our Campus-net, so as to keep unimportant
traffic off the MILNET trunks.  Except for the IMP and Gateway
logs at BBN, I don't think there is much data around...
	-M
-----------[000013][next][prev][last][first]----------------------------------------------------
Date:      4 Mar 1986 07:28:12 EST
From:      Glen Foster <GFoster@USC-ISI.ARPA>
To:        Barry Leiner <leiner@RIACS.ARPA>
Cc:        Glen Foster <GFoster@USC-ISI.ARPA>, tcp-ip@SRI-NIC.ARPA, Mike Brescia <brescia@BBNCCV.ARPA>
Subject:   Re: Poor mil/arpa performance

The worst delays occur 0800-1000 EST and 1600-1900 EST (except Fridays).

I usually get in around 0730 and immediately check my mail because I know 
that performance will quickly degrade to the point where the frustration
factor decreases the value of net access.

There is a noticeable improvement in performance from 1200-1300 EST.  

Glen
-------
-----------[000014][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4 Mar 86 09:00:46 est
From:      tinker@dtix (Tinker)
To:        tcp-ip@sri-nic.ARPA
Subject:   gateway sources sought

Hello,

On behalf of the David Taylor Naval Ship R&D Center, I am in the market
(and have $$$$) for a DDN (MILNET)-to-local-ethernet gateway.  We would
like a turnkey vendor supported "box" to attach directly to our local
IMP and support at least one "backend" local ethernet consisting of 
mostly VAXes running 4.xbsd UNIX, VMS (Wollongong), and a few workstations.

In view of some recent readings I've gotten from the DDN PMO, this
device would have to attach to the IMP via an X.25 (standard mode)
interface.  In addition we would like to have some kind of fault
tolerance built into the system, even if that means having two gateways
with manual switchover.

Are there any vendors out there who do this kind of thing?

Alternatively, if I don't get a vendor response, could I build my own
gateway out of a micro-vax (or two) running Wollongong or UNIX?

Any replies appreciated.  Vendors should send to me direct so as not
to clutter the list.

Bob Tinker
David Taylor Naval Ship R&D Center
-----------[000015][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4 Mar 86 10:06:24 EST
From:      Barry Shein <bzs%bostonu.csnet@CSNET-RELAY.ARPA>
To:        tcp-ip@sri-nic.ARPA
Subject:   EXOS TCP/IP Board

I received this request in the mail but unfortunately I am not familiar
with the hardware in question. I suggested he post to this list but
apparently it is not convenient for him to get the responses. If you
can help, mail him or me or this list, and I'll
make sure he gets copies. I think part of the problem is that his
communication with the vendor is thru telex and isn't working very
well to get his questions answered. Thanks in advance.

	-Barry Shein, Boston University


-----------Forwarded Message-------------

From harvard!seismo!mcvax!hslrswi!robert Mon Mar  3 17:23:05 1986

We would like to replace our Interlan NI-1010A with an EXOS 204 to
offload some of the network processing from our Vax-11/750.  But there
may be a problem in how this affects other network interfaces.  If the
TCP/IP is moved into hardware, how can it still be possible to run
networking through another interface - serial line IP or a DMR-11 for
instance?  It seems to me that if an Excelan board is installed, then
that can be the only networking interface, at least for TCP/IP.

Any advice you could give us in this matter would be very gratefully
received.

Many thanks in advance,
Cheers,
	Robert.

******************************************************************************
    Robert Ward,						   ___________
    Hasler AG, Belpstrasse 23, CH-3000 Berne 14, Switzerland	   |    _    |
								   |  _| |_  |
Tel.:	    +41 31 652319					   | |_   _| |
Bitnet:	    hslrswi!robert@cernvax.bitnet			   |   |_|   |
Arpa:	    hslrswi!robert%cernvax.bitnet@WISCVM.ARPA		   |_________|
Edunet:	    hslrswi!robert%cernvax.bitnet@UCBJADE.Berkeley.EDU
Uucp:	    ... {seismo,decvax,ukc, ... }!mcvax!cernvax!hslrswi!robert
******************************************************************************


-----------[000016][next][prev][last][first]----------------------------------------------------
Date:      4 Mar 1986 1048-EST
From:      Kevin Paetzold <PAETZOLD@MARLBORO.DEC.COM>
To:        tcp-ip@sri-nic
Subject:   looking for a recommendation
I need to get an IBM PC hooked up to an ethernet using the NRC Fusion
software.  I am looking for a recommendation for the hardware configuration
of the PC itself (e.g. how much memory) and for a recommendation on which
ethernet board to stick in the PC.  Any suggestions?

   --------
-----------[000017][next][prev][last][first]----------------------------------------------------
Date:      4 Mar 1986 14:17-PST
From:      Mike StJohns <StJohns@SRI-NIC.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Need Unix Raw IP datagram generator.

Does anyone on the net have or know of a Unix program that can
generate random-format IP datagrams?  I realize said program will
have to run as super-user.  Mike
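
A hedged sketch of such a generator: it needs a raw socket (hence
super-user), fills a buffer with random bytes, then forces just enough
of the IP header to be sane for the kernel to emit it.  This assumes a
raw-socket interface with IP_HDRINCL, which postdates the 4.2BSD systems
discussed here, and byte-order handling of ip_len varies by system.

    /* rawgen.c - random-format IP datagram generator (sketch).
     * Must run as super-user.  Usage: rawgen dest-addr            */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <arpa/inet.h>

    int main(int argc, char **argv)
    {
        int s, on = 1, i;
        unsigned char pkt[64];
        struct ip *ip = (struct ip *)pkt;
        struct sockaddr_in dst;

        if (argc != 2) {
            fprintf(stderr, "usage: %s dest-addr\n", argv[0]);
            exit(1);
        }
        if ((s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW)) < 0) {
            perror("socket");           /* needs super-user */
            exit(1);
        }
        setsockopt(s, IPPROTO_IP, IP_HDRINCL, &on, sizeof(on));

        srandom(getpid());
        for (i = 0; i < (int)sizeof(pkt); i++)
            pkt[i] = random() & 0xff;   /* random header, options, data */

        /* Repair just enough of the header to get the packet out. */
        ip->ip_v   = 4;
        ip->ip_hl  = 5;
        ip->ip_len = sizeof(pkt);       /* host vs. net order varies */
        ip->ip_off = 0;
        ip->ip_sum = 0;                 /* many kernels fill this in */
        ip->ip_ttl = 30;
        ip->ip_dst.s_addr = inet_addr(argv[1]);

        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_addr   = ip->ip_dst;

        if (sendto(s, pkt, sizeof(pkt), 0,
                   (struct sockaddr *)&dst, sizeof(dst)) < 0)
            perror("sendto");
        return 0;
    }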
-----------[000018][next][prev][last][first]----------------------------------------------------
Date:      4 Mar 1986 15:49:58 CST
From:      DDN-REQT@GUNTER-ADAM.ARPA
To:        tinker@DTIX.ARPA (Tinker)
Cc:        tcp-ip@SRI-NIC.ARPA, DDN-REQT@GUNTER-ADAM.ARPA, DDN-REQT@GUNTER-ADAM.ARPA
Subject:   Re: gateway sources sought
I'm not exactly responding to your request.  I'm looking for similar info.
Would you forward any good responses you get to "AFDDN.BEACH@GUNTER-ADAM"?
I'm with the Air Force DDN PMO, by the way.  You may want to talk to Ford
Aerospace.  They're working on a whizbang gateway that
is supposed to get A1 accredited someday.
Thanks,
Lt Darrel Beach
-------
-----------[000019][next][prev][last][first]----------------------------------------------------
Date:      Tue, 04 Mar 86 14:25:40 EDT
From:      TS0400%OHSTVMA.BITNET@WISCVM.WISC.EDU  (Bob Dixon)
To:        TINKER@DTIX
Cc:        TCP-IP@SRI-NIC.ARPA
Subject:   gateway sources sought
Here are 2 Arpa-to-Ether gateways to look into:
Proteon P4200
Communications Machinery Corp (CMC) DRN-3200
These are basically "turnkey" boxes.
-----------[000020][next][prev][last][first]----------------------------------------------------
Date:      4 Mar 1986 15:27:22 EST (Tuesday)
From:      Bill Morgart <bmorgart@mitre-gateway.arpa>
To:        tcp-ip@sri-nic
Cc:        bmorgart@mitre-gateway.arpa, daryl@mitre-gateway.arpa
Subject:   IP and TCP options
The MITRE Corporation has implemented the Department of Defense protocol
suite TCP, IP, and ICMP.  The implementation includes all the options
defined in MIL-STD 1777 (RFC-791), MIL-STD 1778 (RFC-793), and RFC-792.
We need to find other nodes in the internet that have implemented some or
all of the following features/options in order to evaluate the compatibility
of our implementation with the real world.

We intend to address self-returning messages to co-operating hosts and examine
timestamps, routes etc.
Please return this message with the features/options that your node supports
indicated in some manner.  Please include the internet address of your node
and type of node (host, gateway).  A real human point of contact would be
appreciated.

The following is an outline of the features/options we wish to test (a
sketch of the Timestamp option wire format follows the outline).

IP
	Type_of_Service
		Precedence
		Delay
		Throughput
		Reliability
	Options
		Routing
			Loose Source & Record Route
			Strict Source & Record Route
			Record Route
		Timestamps
			type 0 -- timestamps only
			type 1 -- address & timestamp
			type 3 -- selective address & timestamp
		Stream ID
		Security
			basic
			extended


TCP
	connection establishment
		precedence
		options
			max. seg. size
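
For concreteness, a hedged sketch of the Timestamp option wire format
from RFC 791, covering the three flag values named in the outline; the
struct and field names are illustrative, not taken from any particular
implementation:

    #include <stdint.h>

    /* IP Timestamp option (type 68), per RFC 791. */
    struct ip_timestamp_opt {
        uint8_t  type;     /* 68 */
        uint8_t  len;      /* total option length in bytes (max 40) */
        uint8_t  ptr;      /* offset of next free slot, 1-based, >= 5 */
        uint8_t  oflw_flg; /* high 4 bits: overflow count; low 4 bits:
                            * 0 = timestamps only
                            * 1 = address & timestamp pairs
                            * 3 = timestamps at prespecified addresses */
        uint32_t slots[9]; /* 32-bit timestamps (ms since midnight UT),
                            * or alternating addresses and timestamps  */
    };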



Thanks for any and all help,

Daryl Crandall 		daryl@mitre-gateway.arpa	(703) 883-7278
Bill Morgart	 	bmorgart@mitre-gateway.arpa	(703) 883-6554

-----------[000021][next][prev][last][first]----------------------------------------------------
Date:      Tue, 04 Mar 86 15:36:02 EDT
From:      TS0400%OHSTVMA.BITNET@WISCVM.WISC.EDU  (Bob Dixon)
To:        IBM-NETS%BITNIC.BITNET@WISCVM.WISC.EDU , TCP-IP@SRI-NIC.ARPA
Subject:   Inverse Terminal Server
We have need for a device which I will tentatively call an inverse terminal
server. This device attaches to an ethernet on one end, and to some number
N of rs232 ports on the other side, just like a normal ethernet terminal
server. The inverse part comes about because the rs232 ports are to be
connected to a port-selection box. In operation, a user on some distant
host would telnet to the inverse terminal server and be connected to
any available port on the port-selection box. He would then converse with the
port-selection box as to which ascii host he wished to be connected to,
just as if his terminal were directly connected to the port-selection box
rather than telnetting across the ethernet. The purpose of this is to allow
ethernet users to log into hosts that are not on the ethernet, but which
are accessible via the port-selection box.
Does such a device exist, or could something be adapted?
Any suggestions would be appreciated.
                                                   Bob Dixon
                                                   Ohio State University
-----------[000022][next][prev][last][first]----------------------------------------------------
Date:      Wed, 5 Mar 86 08:32:31 EST
From:      John Nolan <nolan@mimsy.umd.edu>
To:        PAETZOLD@MARLBORO.DEC.COM, tcp-ip@sri-nic.ARPA
Subject:   Re:  looking for a recommendation
Kevin,
I am using an IBM PC hooked to an ethernet with (currently) a 3COM ethernet
board. This is the 3C500 board which has been around forever. As we are
thinking of upgrading software to the FTP Software TCP/IP system, we are
starting to explore new boards. Candidates are the Interlan NI5010 board
and the 3COM 3C505 Hi Performance board. I have been pleased with the
3COM board except that when using it, I cannot use the RAM disk which I
can install on my Quadram memory/clock/port multifunction board. I haven't
tried the configuration with NRC's Fusion software. Mostly, the PC is used
with the XNS Protocols to print stuff on the Xerox laser printer. I have
heard bad stuff about NRC's Fusion; when I read their brochure, I thought
it was too good to be true.....I think it is too good to be true.

With regard to the actual PC configuration, I have currently a dual floppy
system with 512K of memory. I have never had problems with running out of
memory but have sorely lacked a hard disk (which I have ordered).

Sorry about all the rambling discussion above but I do run off at the 
mouth with no good reason.

FTP Software Inc is at (617)497-5066. Interlan is at (617)263-9929. 3COM
is at (415)961-9602.

If I can help more.....

john nolan@maryland

American Air Force Express.....Don't be deposed without it..F.E.Marcos
-----------[000023][next][prev][last][first]----------------------------------------------------
Date:      06 Mar 86 06:42:25 PST (Thu)
From:      Van Jacobson <van@lbl-csam.ARPA>
To:        tcp-ip@sri-nic.arpa
Subject:   Re: Mail Bridge Performance

At least some of the pathetic Milnet - Arpanet performance should be
blamed on EGP.  For example, EGP routes advertised on Milnet are all
via the east coast, usually bbn-milnet-gw.  Routing all the west coast
Mil-Arpa traffic through Boston increases our transit delay, wastes
bandwidth on the transcontinental trunks and probably helps to saturate
the overloaded bbn bridge.

I just took some ping data between lbl-csam and ucb-arpa:  Lbl-csam has
addresses 26.1.0.34 (milnet) and 128.3.0.24 (lbl-ether).  Ucb-arpa has
addresses 10.0.0.78 (arpanet) and 128.32.0.4 (ucb-ether).  This morning
around 3am, I pinged arpa from csam using both csam source addresses
and both arpa destination addresses.  100 packets were sent for each of
the four src/dest combinations, then another 100 packets to each.  The
two runs for each combination were separated by about 15 minutes but
had essentially identical statistics.  Neither machine had active users
but there was sporadic inbound mail traffic.  The results were (all
times in ms.):

                   Median  Avg   S.D.  Min.   Max   %lost
  milnet - arpanet   195   361   360   100   2190     0
  milnet - ucbether  750   900   320   620    899     0
  lblether-arpanet  1060  1508  1043   500   4780     6
  lblether-ucbether 4430  4858  2757  1340  12160     7
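
The summary statistics can be reduced from the per-packet times in the
usual way; a minimal C sketch (illustrative, not the program actually
used for the table above):

    /* Median, mean and standard deviation of n round-trip times (ms). */
    #include <math.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b)
    {
        double d = *(const double *)a - *(const double *)b;
        return (d > 0) - (d < 0);
    }

    void rtt_stats(double *rtt, int n, double *median, double *avg, double *sd)
    {
        double sum = 0, sumsq = 0;
        int i;

        qsort(rtt, n, sizeof(double), cmp);
        *median = (n % 2) ? rtt[n / 2] : (rtt[n / 2 - 1] + rtt[n / 2]) / 2;
        for (i = 0; i < n; i++) {
            sum   += rtt[i];
            sumsq += rtt[i] * rtt[i];
        }
        *avg = sum / n;
        *sd  = sqrt(sumsq / n - (*avg) * (*avg));
    }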

The milnet-arpanet traffic used the correct route, millbl.  The
milnet-ucb traffic used the lbl-csam EGP route, milbbn, outbound and
millbl inbound.  The lbl-arpanet traffic used millbl outbound and the
ucb-arpa EGP route, milisi, inbound.  The lbl-ucb traffic used milbbn
outbound and milisi inbound.

The "min" numbers scaled linearly and show a factor of 10 increase in
delay due to EGP.  I don't understand why the avg./median numbers don't
scale linearly, why they show a factor of 20 increase in delay or why
packets were lost when routed through milisi but not when routed
through milbbn.  Given the TCP retransmit time algorithms in 4bsd, I do
understand why all our telnet/ftp users are complaining: they could
walk to campus faster than their packets get there.
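
For context, the retransmit timer in 4bsd-era TCPs follows the RFC 793
smoothed-RTT estimate; a minimal sketch (ALPHA and BETA are the
conventional values, not pulled from any particular kernel, and srtt is
seeded at zero for brevity):

    /* RFC 793-style retransmit timer, as used by 4bsd-era TCPs. */
    #define ALPHA 0.875
    #define BETA  2.0

    static double srtt;              /* smoothed round-trip time */

    double update_rto(double measured_rtt)
    {
        srtt = ALPHA * srtt + (1.0 - ALPHA) * measured_rtt;
        return BETA * srtt;          /* retransmission timeout */
    }

With round-trip times swinging over a 20-to-1 range, as in the table
above, a timer like this fires long before the original packet could
have arrived, piling retransmissions onto an already congested path.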

We've been suffering with EGP for more than a year and there still
isn't a west coast EGP server for Milnet.  Perhaps adding one would
help users at both ends of the country.  (This wouldn't be a fix but it
might be a cheap patch while we wait for butterfly bridges, smarter or
local EGP servers, or a replacement for EGP.)

 - Van Jacobson (van@lbl-csam.arpa)
-----------[000024][next][prev][last][first]----------------------------------------------------
Date:      Thu, 6 Mar 86 06:51:02 est
From:      rhott@NSWC-OAS.ARPA
To:        IBM-NETS%BITNIC.BITNET@WISCVM.WISC.EDU, TCP-IP@SRI-NIC.ARPA, TS0400%OHSTVMA.BITNET@WISCVM.WISC.EDU
Subject:   Re:  Inverse Terminal Server
We have just such an animal (Inverse Terminal Server) here at NSWC.  It is
a CS/1 from Bridge Communications, Inc.  Bridge has several boxes that might
satisfy your needs, from the CS/1 (32 ports) to the CS/100 (4-14 ports) and
they have a new box, CS/200 (8 ports ??) that is even cheaper yet.

The cost is somewhere around 4k-5k range for the CS/100!

We have a TCP/IP version of the CS/1.  It is also available for XNS!

An address for Bridge is:
     Bridge Communications, Inc.
     2081 Stierlin Road
     Mountain View, CA 94043
     Telephone: 415/969-4400

Hope this helped!

Bob Hott     Naval Surface Weapons Center, Dahlgren, VA 22448
             Systems Integration and Networking Branch (Code K33)
-----------[000025][next][prev][last][first]----------------------------------------------------
Date:      Thu, 6 Mar 86 11:12:19 EST
From:      Ron Natalie <ron@BRL.ARPA>
To:        Tinker <tinker@dtix.arpa>
Cc:        tcp-ip@sri-nic.arpa
Subject:   Re: gateway sources sought

The only commercial companies that so far seem competent to do that
sort of thing are PROTEON and BBN.  In addition, for no support
whatsoever, the BRL GATEWAY will do what you ask.  We currently
run two GATEWAYS to the MILNET at BRL (none of our hosts are currently
connected directly).  The switchover capability is pretty much automatic
once you put in EGP.  We only really do our own because we don't have
just ethernets on the backend and our needs change a little quicker
than we could contract anyone outside to handle.

I've looked at the WOLLONGONG stuff, and while it is an entirely
adequate TCP implementation, it really is lacking as a gateway system.
The only other real alternative is to find someone who will commercially
support the EGP in the 4 BSD network code and just use some random VAX.
However, the BSD EGP implementation still seems to need some attention
from time to time.

-Ron
-----------[000026][next][prev][last][first]----------------------------------------------------
Date:      6 Mar 1986 13:50:41 EST
From:      MILLS@USC-ISID.ARPA
To:        van@LBL-CSAM.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        MILLS@USC-ISID.ARPA
Subject:   Re: Mail Bridge Performance
In response to the message sent  06 Mar 86 06:42:25 PST (Thu) from van@lbl-csam.ARPA

Van,

Without detracting from your unimpeachable data and sound conclusions, I should
point out that EGP itself has nothing to do with the routing. The problem,
as we all understand, is intrinsic to the GGP routing algorithm used by the
LSI-11 gateways these many years and hopefully not long for this world. This
all would seem to suggest we support the replacement of these old gateways
with the more capable Buttergates as rapidly as possible.

Having said that, I continue to observe and report what I think is grossly
suboptimal behavior on the part of many network hosts, such as excessive
retransmissions, spasmodic packetization and spurious ACKs. The fact that some
of the delays measured by you and others are up in the tens of seconds (!)
suggests that something more clever than FIFO buffering, as John Nagle suggests
in a recent RFC, may be required no matter how lush the storage pool. In
addition, the explosive growth in new networks and hosts is already engulfing
the gateways and EGP updates. Personally, I am more worried about that last,
as it may require a major change in how our non-core gateways work. That's
where you should point your EGP spear.

Dave
-------
-----------[000027][next][prev][last][first]----------------------------------------------------
Date:      Thu 6 Mar 86 18:16:01-EST
From:      "J. Noel Chiappa" <JNC@XX.LCS.MIT.EDU>
To:        van@LBL-CSAM.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        JNC@XX.LCS.MIT.EDU
Subject:   Re: Mail Bridge Performance
	Nope. The problem is GGP, not EGP. (Not that EGP doesn't have
its little brain bubbles, which certain people will remember me
attacking brutally, but this time it's innocent.) I explain this every
six months, but new people keep making the same mistake.

	There are plenty of links between the MILNet and the ARPANet
in California, but the primary reason you aren't routing through them
is that GGP, which is the (ancient) routing protocol used for passing
info among core gateways, is being used in a way it was *not
designed* to work in. The GGP protocol is throwing away a lot of
information, but that *was legitimate* if you used GGP the way it was
supposed to be used.
	Specifically, when a GGP routing update from gateway Y says
that it can get to net X, it doesn't say what the 'next hop' is, even
if that 'next hop' is on the *same net* as the two gateways which are
communicating. Why? Well, the way GGP was supposed to work, all the
gateways on a net were supposed to communicate with all the other
agetways, i.e. N^2 communication. In such a scenario, you'd be hearing
from the direct gateway to net N (gateway X) , as well as gateway Y,
and you'd find the direct connected one was closer.
	This model is no longer	applicable; all gateways do not
communicate directly; many talk only to their EGP peers, and the only
path that EGP peer (gateway Y) has to give routing info to its core
neighbours is GGP. GGP drops the information about the next hop
gateway (gateway X) being on the same net, with the result that all
the other gateways on that net take an *extra hop* through the EGP
peer (gateway Y) to get to network N.
	Even if they put in EGP speaking gateways on the West coast,
that *still* won't fix the problem unless you are an EGP peer with the
core gateway which is the EGP peer of the local net you are trying to
get to. The traffic will still take an extra hop from your core EGP
peer to the other core EGP peer.  If the gateway to that net is still
only peering with an EGP gateway on the East cost, *all traffic* to
that net from *everywhere* has to go across the country and through
that gateway. There's nothing anyone can do to fix that extra hop
except replace GGP.

	The information needed to fix all this is there in EGP; it's
the protocols *between) the core gateways that's broken. Once the
Butterflys go in, this problem will clear up *without any* changes
to EGP. (Not to say that there aren't lots of other problems with
EGP that won't be cleared up so easily!) EGP is not the problem.
	To the extent that people switch to a West Coast peer once one
goes in, the problem will diminish, it's true. So your suggested fix
would be a help, although you're pointing your finger at the wrong
culprit.

	Noel
-------
-----------[000028][next][prev][last][first]----------------------------------------------------
Date:      06 Mar 86 21:27:57 PST (Thu)
From:      Milo S. Medin (NASA ARC Code ED) <medin@orion.arpa>
To:        tcp-ip@sri-nic.arpa
Subject:   Network change at Ames

We here at Ames have cut over to a Class B network from multiple
Class C nets.  The changes are reflected in the latest host table.
Please update your host tables (if you still use them) if you haven't
done so already.

					Thanks,
					  Milo
-----------[000029][next][prev][last][first]----------------------------------------------------
Date:      06-Mar-86 18:28:47-UT
From:      mills@dcn6.arpa
To:        tcp-ip@sri-nic.arpa
Subject:   More accurate clocks
Folks,

Our clockwatching team, with kind help from the folk at U Maryland, Ford
Research and U Michigan, continues to refine the accuracies of timestamps
produced by the radio-clock equipped fuzzball hosts, including DCN1.ARPA
(128.4.0.1), UMD1.ARPA (128.8.0.1) and FORD1.ARPA (128.5.0.1). A painstaking
calibration using local-net paths between these hosts, which are in Vienna,
VA, College Park, MD, and Dearborn, MI, respectively, reveals UMD1.ARPA (WWVB
clock) to lead DCN1.ARPA (WWVB clock) by 4 +-2 milliseconds and FORD1.ARPA
(GOES clock) to lead DCN1.ARPA by 6 +-2 milliseconds.

Although we have at the moment no independent means (e.g. portable atomic
clock) to precisely calibrate these clocks with respect to NBS Standard Time,
the fact that two of them use low-frequency radio transmissions (WWVB) and the
third uses satellite transmissions (GOES), as well as the fact that separate
less-accurate WWV high-frequency radio clocks in University Park, MD, and Ann
Arbor, MI, indicate agreement within expected nominals, suggests they can be
trusted to within a few milliseconds.

In the latest implementation the first derivative of offset (drift) is
separately estimated and the timestamps compensated accordingly, so improved
accuracy is available with all protocols: ICMP Timestamp, NTP/TIME and
UDP/TIME, without special addressing. The highest accuracy is available using
ICMP Timestamp, then the other two protocols in that order. We estimate the
accuracy using the ICMP Timestamp protocol to be better than +-10 milliseconds.
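
A minimal sketch of that compensation, assuming the obvious linear
model (the names are illustrative):

    /* Correct a raw timestamp using an offset estimated at time 'base'
     * and its separately estimated first derivative (drift).          */
    double corrected_time(double raw, double base,
                          double offset, double drift)
    {
        return raw + offset + drift * (raw - base);
    }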

Note that UMD1.ARPA is normally reachable via MILNET paths (MARYLAND gateway),
while DCN1.ARPA and FORD1.ARPA are normally reachable via ARPANET paths
(DCN-GATEWAY). In any conceivable experiment involving nontrivial network
paths, the measurement errors due to these hosts or clocks should be
negligible. Under normal conditions, the clocks operate independently;
however, in case of failure, each clock is backed up by one of the others
using local-net paths. In other words, the service should be very reliable,
but with no protection at the moment against clocks that are operating but
indicate the wrong time.

The present configuration invites some interesting experiments which might
shed light on present ARPANET/MILNET network performance. You are welcome to
scheme such things, especially if you report your findings; however, we would
very much like you to avoid TCP/TIME and also limit the barrage to the above
hosts while avoiding our other timeteller fuzzthings, which use one of the
above hosts for timetelling anyway. However, the incurably curious and
persistent can still find the WWV clocks at their previously announced
addresses.
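
For anyone scheming such an experiment, the usual reduction of a
timestamp exchange (the same one NTP uses) is a reasonable starting
point; t1..t4 below are the originate, receive, transmit and arrival
times, and a symmetric path is assumed:

    /* Clock offset and round-trip delay from one ICMP Timestamp
     * exchange: t1 = originate, t2 = remote receive, t3 = remote
     * transmit, t4 = arrival, all in ms since midnight UT.        */
    void reduce(double t1, double t2, double t3, double t4,
                double *offset, double *delay)
    {
        *offset = ((t2 - t1) + (t3 - t4)) / 2.0;
        *delay  = (t4 - t1) - (t3 - t2);
    }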

Dave
-------
-----------[000030][next][prev][last][first]----------------------------------------------------
Date:      07 Mar 86 06:10:21 PST (Fri)
From:      Van Jacobson <van@lbl-csam.ARPA>
To:        MILLS@USC-ISID.ARPA, "J. Noel Chiappa" <JNC@XX.LCS.MIT.EDU>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Mail Bridge Performance
With all due respect gentlemen, the problem I was trying to describe is
with the current implementation of EGP, not GGP.  I'm aware that GGP is
brain-damaged.  Replacing GGP involves replacing at least the 7 LSI-11
mail bridges and probably all 38 core gateways.  While this is clearly
necessary and should be done as quickly as possible, it's going to take
a while.  I proposed a relatively simple, quick "patch" that might
improve things in the interim.  (i.e., I wasn't going to complain about
GGP until EGP got fixed).  I can prove this patch would improve our
transit delay a factor of ten.  West coast sites similar to us (e.g.,
UCB) should see similar improvement.  Dave posted some traffic data
last May that showed a 25%+ East/West imbalance through the mail
bridges.  Things might improve nationally by whatever portion of this
was due to the current, lousy, EGP routes on the West coast.

Dave, I don't understand the statement that "EGP has nothing to do
with the routing".  Say I'm trying to get from a Vax on the lbl-ether
to a Vax on the ucb-ether, e.g.,
   rtsg --> lbl --> mil??? --?> ucbvax --> monet
The first milnet hop (lbl to MIL-whatever) is determined by EGP,
subsequent hops up to ucbvax are determined by GGP.  Lbl is a pure
gateway and doesn't get icmp redirects so the route advertised by our
EGP peer is all that determines the first hop.  If our EGP peer says
"use MILBBN", even the most wonderous GGP-replacement won't prevent
packets making two completely unnecessary trips across the country.

I must admit I've never been fond of EGP (the current implementation,
that is; I've got nothing against the protocol).  About 60% of all our
Internet traffic and 90% of our "interactive" traffic is to "local"
UCB, Stanford or LLNL hosts.  Because the traffic is well localized,
I've been making sporadic delay and throughput measurements to those
hosts since the '83 NCP/TCP switchover.  Generally, the measurements
show a slow, roughly linear degradation up to Oct, '84 (with a factor
of two step due to the Arpa/Milnet split in late '83).  With the EGP
switchover in late '84, things suddenly degraded by a factor of ten.
Since then, the data has been so "noisy" that it's difficult to
analyze.  [There was a clear milestone in early '86 though when delays
went to infinity (the EGP space wars).]

I'll finish this epic with one measurement I didn't put in the last
message.  You can estimate the damage that GGP is doing by using the
best first hop gateway and comparing the transit times to multi-homed
hosts.  E.g.,  I measure
    lbl-csam --> MILLBL --?> ucb-arpa 
using the local net addresses for csam & arpa and the milnet/arpanet
addresses.  Any difference in the two measurements should be due to
GGP.  The median time for the local net case is 500ms and for the
milnet/arpanet case it's 200ms so GGP hurts by a factor of ~2.5.  The
ratio stays about 2.5 if I try su-score or sri-iu and/or MILSRI instead
of ucb-arpa/MILLBL.  Compared to the 5 second times and factor of
20 that result from a bad EGP route, this is down in the mud.

 - Van
-----------[000031][next][prev][last][first]----------------------------------------------------
Date:      7 Mar 1986 11:24:40 EST
From:      MILLS@USC-ISID.ARPA
To:        van@LBL-CSAM.ARPA, JNC@XX.LCS.MIT.EDU
Cc:        tcp-ip@SRI-NIC.ARPA, MILLS@USC-ISID.ARPA
Subject:   Re: Mail Bridge Performance
In response to the message sent  07 Mar 86 06:10:21 PST (Fri) from van@lbl-csam.ARPA


Van,

The violence of our agreement may be leaving us all exhausted. By way of
explanation of my comment that EGP is not the problem, note that the
LSI-11s compute EGP routes just like GGP routes, in other words, as if
the EGP peer were a GGP peer with a funny way to propagate the updates. 
Some nonsense is necessary, of course, in biasing the hop counts, etc., but this
is invisible to the EGP peer. Tracy Mallory or Mike Brescia can beat me up
if this bog has been OBE.

This would be a real fun issue to drop on Mike Corrigan's Internet Engineering
Task Force. Boy, was that fun to say.

Dave
-------
-----------[000032][next][prev][last][first]----------------------------------------------------
Date:      Sun, 9 Mar 86 17:10:21 est
From:      romkey@BORAX.LCS.MIT.EDU (John Romkey)
To:        nolan@mimsy.umd.edu
Cc:        PAETZOLD@MARLBORO.DEC.COM, tcp-ip@sri-nic.ARPA
Subject:   re: looking for a recommendation
Regarding the 3COM ethernet cards for the PC and the Interlan ethernet
card:

I've worked quite a bit with the 3COM 3C500 card and the Interlan
NI5010 card and written drivers for each. Here are some observations
about the cards that might help people who have to decide between them.

Programming: I prefer the NI5010. Early 3C500 cards had
some pretty weird race conditions in the hardware; I don't know
whether these are gone now or not. They also had some programming
pitfalls that the Interlan card shares, but Interlan warns about them
in the documentation; I had to discover them the hard way with the
3COM card. I don't know if the programming documentation has been
updated. 3COM now has out the 3C501, which I've been told is
compatible with the 3C500, but I really know nothing about it.

Throughput: the 3C500 and the NI5010 are pretty much the same.
Although the Interlan card has two packet buffers (one for send, one
for receive) to the 3C500's one, I was told that if you dma into
the transmit buffer while received the card will sometimes screw up,
so my driver doesn't do that, and the hoped-for advantage of the card
is gone.

Reliability: both boards seem to work pretty well. I haven't used the
NI5010 board as much as the 3C500 board, but I haven't had many
problems with it either. One nasty with old NI5010 boards was the
thin ethernet connector on the back. It's an RCA jack and on old cards
it wasn't attached very firmly to the rest of the card. Plugging and
unplugging it from the ethernet several times broke some wires going
from the jack to the card. I believe Interlan has fixed this problem.

Prices: offhand, I don't know what the relative prices are for the boards.

Support: I have found Interlan more approachable than 3COM, much
easier to talk to. Interlan has always been fast and courteous when I
called them; I've had problems with 3COM.

I still only have specs on the 3C505 card (3COM's High Performance
interface), but it looks like it will run a fair amount faster on an
AT even if you only use it as a dumb ethernet interface, since it will
be able to take advantage of the AT's 16 bit bus.

My final note is that I would prefer going with a Proteon ring and
Proteon's p1300 ProNET interface (a joy to program - well, almost)
instead of ethernet, anyway.
					- John Romkey
					  FTP Software
					  (late of MIT)
-----------[000033][next][prev][last][first]----------------------------------------------------
Date:      10 Mar 1986 0946-EST
From:      Kevin Paetzold <PAETZOLD@MARLBORO.DEC.COM>
To:        romkey@BORAX.LCS.MIT.EDU, nolan@mimsy.umd.edu
Cc:        PAETZOLD@MARLBORO.DEC.COM, tcp-ip@sri-nic.ARPA
Subject:   re: looking for a recommendation
Thanks for the response.  We are buying a 3COM 3C505.  It turns out that
it is easier to buy it from NRC than from 3COM.  The price was just
under $1100.

BTW, I got a lot of recommendations of your code from a lot of people.
You must be doing something right.  The only reason we went with NRC is
that we are using them for something else, and we are just going to use
the PC as something to talk to while debugging our hardware.

   --------
-----------[000034][next][prev][last][first]----------------------------------------------------
Date:      Mon, 10 Mar 86 13:44:59 EDT
From:      TS0400%OHSTVMA.BITNET@WISCVM.WISC.EDU  (Bob Dixon)
To:        IBM-NETS%BITNIC.BITNET@WISCVM.WISC.EDU , TCP-IP@SRI-NIC.ARPA
Subject:   Inverse Terminal Server
Thanks to everyone who responded to my inquiry on this subject.
I have received over 20 replies, all with good suggestions.
These networks provide an extremely useful resource, and access to information
that could not be obtained in other ways.

The majority opinion of those who responded is that the Bridge CS/1 and CS/100
provide both normal and inverse terminal server functions.

                                                         Bob Dixon
                                                         Ohio State University
-----------[000035][next][prev][last][first]----------------------------------------------------
Date:      12 Mar 1986 19:47:36 PST
From:      Dan Lynch <LYNCH@USC-ISIB.ARPA>
To:        mills@DCN6.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, LYNCH@USC-ISIB.ARPA
Subject:   Re: More accurate clocks

I sure hope some folks out there are taking advantage of the
current mess with ARPANET/MILNET gateway misbehavior to 
learn what is wrong with the current scheme so that when it 
gets fixed we will also have some design rules added to the
body of knowledge we have built up over the past 17 years of
running and tuning this marvel.  The folks in ISO land are
busy writing specs for the brave new world while we are
(hopefully) providing a real example of how hard it is
to "do it right".  But in order to be of service to
mankind we gotta document our advances.

(And you wonder why most of the commercial world lives
with fixed routing?)

Dan
-------
-----------[000036][next][prev][last][first]----------------------------------------------------
Date:      13 Mar 1986 07:31:00 PST
From:      Dan Lynch <LYNCH@USC-ISIB.ARPA>
To:        "J. Noel Chiappa" <JNC@XX.LCS.MIT.EDU>
Cc:        van@LBL-CSAM.ARPA, tcp-ip@SRI-NIC.ARPA, LYNCH@USC-ISIB.ARPA
Subject:   Re: Mail Bridge Performance
Noel,  Your last paragraph leads me to believe that the Butterfly Gateways
will be running a new GGP algorithm.  Really?  If so, how is it
different from the current one?
Dan
-------
-----------[000037][next][prev][last][first]----------------------------------------------------
Date:      Sat 15 Mar 86 00:15:17-EST
From:      "J. Noel Chiappa" <JNC@XX.LCS.MIT.EDU>
To:        LYNCH@USC-ISIB.ARPA
Cc:        van@LBL-CSAM.ARPA, tcp-ip@SRI-NIC.ARPA, JNC@XX.LCS.MIT.EDU
Subject:   Re: Mail Bridge Performance
	It's not a new GGP algorithm, it's a whole new protocol and
algorithm. GGP is going to be replaced lock, stock, and barrel. That's
the whole idea of EGP; inside autonomous areas you can run any
protocol that you want for routing. There's no visibility across the
EGP boundary as to what routing protocol you are using inside your
autonomous area. The new protocol is pretty complex; it's called
SPF. It's a lot like the algorithm the IMP's use, with changes to work
well with EGP. BBN can tell you more about it.

	Noel

PS: For what it's worth, on Thursday evening I tried to use an ARPAnet
host from a MILNET site (NOSC) for several hours (trying to
compose this message, as a matter of fact), and the response (at 9PM
West Coast time) was so terrible that I do not have words to describe
it. I tried several different sources and destinations, in a variety
of combinations, and was continually getting connections timing out,
etc. I tried to compose this message for about 2 hours, then gave up
in disgust and quit. I have no idea why things were so bad so late,
but the performance of the system was utterly execrable.
-------
-----------[000038][next][prev][last][first]----------------------------------------------------
Date:      Sat, 15 Mar 86 22:01:11 est
From:      wjc@ll-vlsi (Bill Chiarchiaro)
To:        tcp-ip@sri-nic
Subject:   ARPAnet Usage

Does anyone have any figures for typical and peak number of packets per
minute (or second, or hour...) handled by ARPAnet IMPs?  What I am trying
to figure out is the following:
	I am thinking of making a radio gateway to a local-area network.
Assuming that the radio gateway can handle packets at about 60K bits per
second, how many nodes can be using the gateway before quality of service
falls below that offered by the ARPAnet?  I realize that the radio net-
work's link-level control will have a major influence on throughput, and
I expect to use something other than an ALOHA/Ethernet approach.
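
The arithmetic I'll eventually do looks something like this (all the
inputs here are invented placeholders, not measurements; the real
ARPANET figures are exactly what I'm asking for):

	#include <stdio.h>

	main()
	{
		double link_bps = 60000.0;	/* radio gateway capacity */
		double pkt_bits = 576 * 8.0;	/* assume 576-byte datagrams */
		double node_pps = 1.0;		/* assume 1 packet/sec per node */

		printf("gateway moves ~%.0f pkts/sec -> ~%.0f such nodes\n",
		    link_bps / pkt_bits, link_bps / (pkt_bits * node_pps));
		return 0;
	}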

Thanks for any responses.

Bill
-----------[000039][next][prev][last][first]----------------------------------------------------
Date:      Sun, 16 Mar 86 02:47:57 EST
From:      Chris Torek <chris@gyre.umd.edu>
To:        tcp-ip@sri-nic.ARPA
Subject:   More network statistics
Some more statistics for you, made pretty much at random:

	% date
	Sun Mar 16 02:21:40 EST 1986
	% ping sac-milnet-gw
	...
	----sac-milnet-gw PING Statistics----
	80 packets transmitted, 67 packets received, 16% packet loss
	round-trip (ms)  min/avg/max = 410/11971/37350

Pretty wild variance, no?

Other gateways show similar statistics:

	----bbn-milnet-gw PING Statistics----
	89 packets transmitted, 67 packets received, 24% packet loss
	round-trip (ms)  min/avg/max = 230/3614/28830

	----arpa-milnet-gw PING Statistics----
	54 packets transmitted, 44 packets received, 18% packet loss
	round-trip (ms)  min/avg/max = 110/6850/24400

	----dcec-milnet-gw PING Statistics----
	44 packets transmitted, 32 packets received, 27% packet loss
	round-trip (ms)  min/avg/max = 200/10639/28140

	----isi-milnet-gw PING Statistics----
	32 packets transmitted, 26 packets received, 18% packet loss
	round-trip (ms)  min/avg/max = 250/9301/21880

[You can see my patience with statistics-gathering going steadily
down here :-).]

In all cases I did see some RFNM blocking; at least something out
there is doing flow control.  What this all means (besides `I cannot
get anything done at Berkeley with this going on') is beyond me,
but I am surprised to see this kind of variance at 2 AM Sunday
morning.

But it is not just the gateways!  Pinging a MILNET site shows the
same thing:

	----sri-nic.arpa PING Statistics----
	62 packets transmitted, 49 packets received, 20% packet loss
	round-trip (ms)  min/avg/max = 370/7923/29060

I just had another thought: ping ourselves:

	----26.2.0.57 PING Statistics----
	76 packets transmitted, 73 packets received, 3% packet loss
	round-trip (ms)  min/avg/max = 90/1879/13920

Now that seems odd.

(Boy will I be glad when we get an ARPA line; at least we will no
longer be tormenting the bridges so much.)

Musingly,
Chris
-----------[000040][next][prev][last][first]----------------------------------------------------
Date:      Mon, 17 Mar 86 21:31:44 EST
From:      Martin Schoffstall <schoff%rpics.csnet@CSNET-RELAY.ARPA>
To:        tcp-ip@sri-nic.ARPA
Subject:   TWG/VMS <-IP/RS232-> SL/4.2bsd
Has anyone successfully installed this and have it running?  We have
had a leased line running to a site doing mmdf style mail to them for
5 months while waiting for TWG to deliver the software (they lost the
order for at least two).  Now we're trying to install the link and we
are getting weird results like:

*TWG/VMS is not seeing any bsd packets.
*netstat -i on the TWG/VMS side shows N outgoing packets, netstat  -i
	on bsd shows 2xN incoming packets.
*telneting from twg/vms to bsd tells me that the network is unreachable
	(they are on two different networks but the routing table looks
		fine)

any help would be appreciated!!

marty schoffstall
schoff%rpics.csnet@csnet-relay	ARPA
schoff@rpics			CSNET
seismo!rpics!schoff		UUCP
martin_schoffstall@TROY.NY.USA.NA.EARTH.SOL	UNIVERSENET

RPI
Computer Science Department
Troy, NY  12180
(518) 271-2654

-----------[000041][next][prev][last][first]----------------------------------------------------
Date:      Tue, 18 Mar 86 12:23:52 est
From:      jbvb@BORAX.LCS.MIT.EDU (James B. VanBokkelen)
To:        tcp-ip@sri-nic.arpa
Subject:   HP 9000 Implementations
Are there any TCP/IP implementations for the HP9000?  With or without
HP/UX?  For Ethernet, or other media?  Public domain, or for sale?

jbvb@borax.lcs.mit.edu
James B. VanBokkelen


-----------[000042][next][prev][last][first]----------------------------------------------------
Date:      Tue 18 Mar 86 19:09:33-PST
From:      Mark Crispin <MRC%PANDA@SUMEX-AIM.ARPA>
To:        TOPS-20@SU-SCORE.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   major change to 1822 (IMP) software
     All sites should be aware of the changes to the ARPANET, Milnet, DRENET,
and other BBN IMP-based networks using the 1822 protocol that are announced
in RFC 979.  This report, "PSN END-TO-END FUNCTIONAL SPECIFICATION", is a
MUST-read for the site personnel responsible for the network software for
every host on these networks.

     This report succumbs to the popular tradition of changing terminology
just as we've gotten accustomed to the old terminology.  Basically, an IMP
is now called a PSN (Packet Switch Node) and the protocol/software that runs
on the IMP is now called EE (End-to-End protocol and module).

     The most important changes are as follows:
(1) VDH hosts are no longer supported.  If you're still a VDH, you had better
    replace your VDH with a pair of ECU boxes, etc.  I think VDH's are extinct
    animals these days though.
(2) Uncontrolled (subtype 3) regular messages will no longer be supported.
    These are being replaced by a new Datagram service.

     All the other changes are upwards compatible and in general are wins.

     I am concerned about change (2) above.  TOPS-20 uses uncontrolled
regular messages and other operating systems may do so as well.  For what
TOPS-20 does with uncontrolled regular messages, the datagram service is an
acceptable substitute, but since there will be no period of overlap between
the two I'm concerned about flag days.  I
would like to lobby for the new datagram service to be accessed by using a
subtype 3 regular message.

     Incidentally, I am wondering whether or not it would be a good idea to
always use datagrams for TCP/IP packets.  Perhaps this would increase
performance on the network if 1822 were spared the overhead of reliable
delivery?
-------
-----------[000043][next][prev][last][first]----------------------------------------------------
Date:      18 Mar 1986 16:45
From:      600213%ofvax@LANL.ARPA
To:        tcp-ip@sri-nic@lanl.ARPA
Subject:   HYPERLINK FOR PERKIN ELMER
Does anyone out there have any experience with the HYPERLINK
DDN host software from Internet Systems Corp.?  I have a PERKIN-ELMER
3230 using OS32. Any information on the use of this product either
as a host connected to an IMP via X.25, or just stuck on an ethernet
would be appreciated.

Thanks

Steve Finn
600213%ofvax@lanl

-----------[000044][next][prev][last][first]----------------------------------------------------
Date:      19 Mar 1986 04:28-PST
From:      CERF@USC-ISI.ARPA
To:        MRC%PANDA@SUMEX-AIM.ARPA
Cc:        TOPS-20@SU-SCORE.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Re: major change to 1822 (IMP) software
Mark,

I would be very hesitant to put a lot of traffic on "uncontrolled" datagrams.
The term "uncontrolled" meant just that. No flow/congestion control; except
to discard a type 3 datagram if you had nothing else to do with it. When we
ran packetized voice tests on the ARPANET using the current type 3
datagrams, we interfered pretty severely with network performance.

I don't know how far the new end/end would go towards making this any better.
Andy Malis at BBN would be a good person to ask. My assumption for the moment
is that they have reduced the need for RFNM-like behavior (not to zero,
you still need acks on an end/end basis, but not one per packet) but this does
not put control onto the type 3 packets, as far as I know.

Vint
-----------[000045][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19 Mar 86  6:50:40 EST
From:      Andrew Malis <malis@bbnccs.ARPA>
To:        Mark Crispin <MRC%PANDA@sumex-aim.arpa>
Cc:        TOPS-20@su-score.arpa, TCP-IP@sri-nic.arpa, malis@bbnccs.arpa
Subject:   Re: major change to 1822 (IMP) software
Mark,

Boy, I wrote the thing, and I didn't even receive the
distribution notice!  I wonder how many others I've missed ...

I initially (some time ago) resisted some of the terminology changes
that you mentioned, but was overruled by many others who prefer
the new terminology (including DCA, by the way).  Some of these
changes go back several years now in BBN and DCA literature, but
are just making their way out to the rest of the world.  By the
way, the software that runs on the PSN is referred to as PSN
Release x, with the End-to-End being one module of the PSN.
Other modules include, for example, Store-and-Forward, Routing,
and X.25 L3.

VDHs, as you mention, are mostly extinct animals these days.

The story on uncontrolled messages is a bit more complicated.  Up
to now, the EE's flow control has been (in the absence of subnet
congestion control) the only governor of the amount of traffic a
host can submit to the network.  When you take that away by using
uncontrolled messages, you are really introducing the possibility
of debilitating congestion on the network.

As a result, the use of uncontrolled messages has been, shall we
say, controlled (administratively).  There are, I believe, no
hosts on the MILNET that have permission to send them, and only a
small number on the ARPANET (mostly associated with packet
speech). I know of no TOPS-20s that are currently allowed to
submit uncontrolled messages.  As an example, neither of the
hosts at SUMEX are enabled, and at the ISI complex, the only
enabled host is ISI-SPEECH11 (I just checked these).

After we decided to upgrade the uncontrolled messages into the
new datagram service, we also found that because of scheduling
constraints, we wouldn't be able to include it in PSN 7.0 (the
first new EE release).  Even if we had, its use would (due to the
absence of congestion control) have to be limited to the same
small set of hosts.

The good news is that subnet congestion control is actively under
development, and both it and datagrams are scheduled for PSN
Release 8.0.  At that time, we can experiment (on the ARPANET
first, of course) with always using datagrams for TCP/IP traffic.
That was one of the reasons why we decided to upgrade to the new
datagrams - the old uncontrolled messages just weren't useful
enough to support this.

By the way - the datagrams, when included, will be accessed by
good old subtype 3.

Regards,
Andy
-----------[000046][next][prev][last][first]----------------------------------------------------
Date:      19 Mar 86 11:46 EST
From:      JHodges @ DDN2.ARPA
To:        tcp-ip @ sri-nic.arpa, info-ibmpc @ usc-isib.arpa
Cc:        JHodges @ DDN2.ARPA
Subject:   TEMPEST LAN problem
I have a problem that I'm hoping someone out there can 
help me with.  I have a user who has a large number of TEMPESTed
Zenith PCs (IBM-compatible types) and wishes to connect these PCs to
a secure LAN (Fiber Optic based).  He also wishes to implement
the usual file-sharing capabilities and remote logins associated with
LANs.  The problem is that the user has been told that, if he opens
the PCs up to install a LAN card (which in itself would be TEMPESTed),
then Zenith will no longer guarantee the PCs (understandable) nor 
work on them.  In other words, any and all maintenance agreements
go out the window and (supposedly) Zenith is not willing to 
renegotiate a new maintenance agreement.  Further complicating matters,
the customer is not willing to use a third-party maintenance group.

Now, having said all of that, does anybody out there know of any
software which might allow the connection of the PCs to the LAN via
the PC's RS232 port, and also implement/allow file sharing?
Are there any other solutions which might be feasible (such as an
intelligent "front-end" which implements file sharing and remote
logon protocols for a group of PCs)?  I might mention that the 
PCs are placed in such a way that clustering of PCs to a local
LAN connection is possible.

Thanks in advance for your help!

Jim Hodges

-----------[000047][next][prev][last][first]----------------------------------------------------
Date:      19 MAR 86 15:10-MST
From:      STGEORGE%UNMB.BITNET@WISCVM.WISC.EDU
To:        TCP-IP@SRI-NIC.ARPA
Subject:   BITNET mail follows
SEND TCP-IP.*
-----------[000048][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19 Mar 86 20:51:37 EST
From:      Andrew Malis <malis@bbnccs.ARPA>
To:        Mark Crispin <MRC%PANDA@sumex-aim.arpa>
Cc:        TOPS-20@su-score.arpa, TCP-IP@sri-nic.arpa, malis@bbnccs.arpa
Subject:   Re: major change to 1822 (IMP) software
My apologies if any of you receive this twice - as far as I can
tell, my original message went into a black hole.

Andy
-------
Date: Wed, 19 Mar 86  6:50:40 EST
From: Andrew Malis <malis@bbnccs.ARPA>
Subject: Re: major change to 1822 (IMP) software
In-Reply-To: Your message of Tue 18 Mar 86 19:09:33-PST
To: Mark Crispin <MRC%PANDA@sumex-aim.arpa>
Cc: TOPS-20@su-score.arpa, TCP-IP@sri-nic.arpa, malis@bbnccs.arpa

Mark,

Boy, I wrote the thing, and I didn't even receive the
distribution notice!  I wonder how many others I've missed ...

I initially (some time ago) resisted some of the terminology changes
that you mentioned, but was overruled by many others who prefer
the new terminology (including DCA, by the way).  Some of these
changes go back several years now in BBN and DCA literature, but
are just making their way out to the rest of the world.  By the
way, the software that runs on the PSN is referred to as PSN
Release x, with the End-to-End being one module of the PSN.
Other modules include, for example, Store-and-Forward, Routing,
and X.25 L3.

VDHs, as you mention, are mostly extinct animals these days.

The story on uncontrolled messages is a bit more complicated.  Up
to now, the EE's flow control has been (in the absence of subnet
congestion control) the only governor of the amount of traffic a
host can submit to the network.  When you take that away by using
uncontrolled messages, you are really introducing the possibility
of debilitating congestion on the network.

As a result, the use of uncontrolled messages has been, shall we
say, controlled (administratively).  There are, I believe, no
hosts on the MILNET that have permission to send them, and only a
small number on the ARPANET (mostly associated with packet
speech). I know of no TOPS-20s that are currently allowed to
submit uncontrolled messages.  As an example, neither of the
hosts at SUMEX are enabled, and at the ISI complex, the only
enabled host is ISI-SPEECH11 (I just checked these).

After we decided to upgrade the uncontrolled messages into the
new datagram service, we also found that because of scheduling
constraints, we wouldn't be able to include it in PSN 7.0 (the
first new EE release).  Even if we had, its use would (due to the
absence of congestion control) have to be limited to the same
small set of hosts.

The good news is that subnet congestion control is actively under
development, and both it and datagrams are scheduled for PSN
Release 8.0.  At that time, we can experiment (on the ARPANET
first, of course) with always using datagrams for TCP/IP traffic.
That was one of the reasons why we decided to upgrade to the new
datagrams - the old uncontrolled messages just weren't useful
enough to support this.

By the way - the datagrams, when included, will be accessed by
good old subtype 3.

Regards,
Andy
-----------[000049][next][prev][last][first]----------------------------------------------------
Date:      Thu, 20 Mar 86 2:24:42 EST
From:      Ron Natalie <ron@BRL.ARPA>
To:        tcp-ip@sri-nic.arpa
Subject:   Poor X-core performance
I was doing some tests regarding the execrable (what a word!) cross-core
performance and have made the following observations:

	1.  Going between two MILNET hosts or between two ARPANET
		hosts seems fairly quick.
	2.  Pinging the gateways seems to be fairly quick.
	3.  Going from a net gatewayed to MILNET to the ARPANET side
		is slow.
	4.  Going from a host on the MILNET to the ARPANET side is slow,
		but a little faster than 3.

Now, I know a limitation of GGP forces packets from my host to go:

    BRL-HOST -> BRL-GATEWAY -> MILNET-GW -> ARPA-EGP-SPEAKER ->
	 ARPA-USER-GW -> ARPA-USER -> MILNET-GW -> MILNET-EGP-SPEAKER ->
		BRL-GATEWAY -> BRL-HOST

Now the IMPs control traffic between host pairs by the RFNM/blocking
procedure.  The BRL-GATEWAY->MILNET-GW doesn't seem to be much of a
bottleneck (though BRL-GATEWAY did seem to rank 11th in MILNET host packet
counts).  However, it seems that a lot of net traffic in addition to BRL's
gets piled
into the MILNET-GW -> ARPA-EGP-SPEAKER and MILNET-GW -> MILNET-EGP-SPEAKER
paths which probably is presenting the bottleneck.  I'll be able to test
this further later this week when I fix our system to allow me to set up
arbitrary IP options (anyone want to set up an ECHO host on the ARPANET?
ISI-ECHO doesn't, IPTO-ECHO seems to be there though).

-Ron
-----------[000050][next][prev][last][first]----------------------------------------------------
Date:      Thu, 20 Mar 86 17:19:57 PST
From:      brian@sdcsvax.ucsd.edu (Brian Kantor)
To:        tcp-ip@sri-nic.arpa
Subject:   ping/record route for 4.3BSD Unix wanted
Does anyone have a program that will allow me to 'ping' a host
and record the route the packets took?  This is for 4.3BSD Unix
if possible.

It would be very helpful for troubleshooting our multiplicity of
small subnets here on campus.
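
What I have in mind is roughly the following (just a sketch, assuming
4.3's IP_OPTIONS setsockopt and the IPOPT_RR definition from
<netinet/ip.h>; reading the recorded addresses back out of the replies
isn't shown):

	#include <string.h>
	#include <sys/types.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <netinet/ip.h>

	/* Attach a record-route option (RFC 791 option type 7) with room
	 * for nine addresses to every packet sent on socket s. */
	int
	set_record_route(s)
	int s;
	{
		u_char opts[40];

		memset((char *)opts, 0, sizeof(opts));	/* zero slots; byte 39 = EOL */
		opts[0] = IPOPT_RR;	/* option type: record route   */
		opts[1] = 39;		/* length: 3 + 9 addresses * 4 */
		opts[2] = 4;		/* pointer to first empty slot */
		return (setsockopt(s, IPPROTO_IP, IP_OPTIONS,
		    (char *)opts, sizeof(opts)));
	}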

	Brian Kantor	UCSD Office of Academic Computing
			Academic Network Operations Group  
			UCSD B-028, La Jolla, CA 92093 (619) 452-6865

	decvax\ 	brian@sdcsvax.ucsd.edu
	ihnp4  >---  sdcsvax  --- brian
	ucbvax/		Kantor@Nosc 
-----------[000051][next][prev][last][first]----------------------------------------------------
Date:      Thu, 20 Mar 1986 13:09 O
From:      Henry Nussbacher, <Vshank%Weizmann.BITNET@WISCVM.WISC.EDU>
To:        <tcp-ip@sri-nic.ARPA>
Cc:        Mail message transfer agent working group, <mta-l%bitnic.BITNET@WISCVM.WISC.EDU> , <future-l%bitnic.BITNET@WISCVM.WISC.EDU> , Network Forum <info-nets@mit-mc.ARPA>
Subject:   Science magazine article
Required reading:

Science - Feb 28, 1986, pages 943-950:

Computer Networking for Scientists
by
Dennis M. Jennings,
Lawrence H. Landweber,
Ira H. Fuchs,
David J. Farber,
W. Richards Adrion

Find out about NSFnet, future plans for merging Arpanet, Csnet and Bitnet
into one Tcp/Ip based network and much more.

Hank
-----------[000052][next][prev][last][first]----------------------------------------------------
Date:      21 Mar 86 00:40 PST
From:      Jeff Makey <Makey@LOGICON.ARPA>
To:        TCP-IP@SRI-NIC.ARPA
Subject:   What's a reasonable time-to-live?
For a MILNET host, what is a reasonable range of values for the
time-to-live field of an IP header?  I recently increased mine from 10
to 20 seconds, but I get the impression from reading RFC 793 (the TCP
specification) that a value as large as 2 minutes is reasonable.  Maybe
it should depend on the timeouts my application software uses?

                       :: Jeff Makey
                          Makey@LOGICON.ARPA

-----------[000053][next][prev][last][first]----------------------------------------------------
Date:      20 Mar 1986 22:24-EST
From:      CERF@USC-ISI.ARPA
To:        JHodges@DDN2.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, info-ibmpc@USC-ISIB.ARPA
Subject:   Re: TEMPEST LAN problem
Jim,

shot in the dark: SYTEK makes a lot of RS-232 S-XX (product number which
I forget) interfaces for its broad-band LAN. It is conceivable that they
can help - but if the LAN is not SYTEK's, I dunno...

Putting a Tempest CARD into a Tempest cage does NOT mean the result is
TEMPEST.

Why not have Zenith evaluate/inspect/test the card? Someone will have to
run TEMPEST certification all over with the card installed, in any case,
before you could reasonably expect approval to work in that new mode.

Vint Cerf
-----------[000054][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21 Mar 86 01:54:42 est
From:      romkey@BORAX.LCS.MIT.EDU (John Romkey)
To:        JHodges@DDN2.ARPA
Cc:        tcp-ip@sri-nic.arpa, info-ibmpc@usc-isib.arpa
Subject:   re: TEMPEST LAN problem
FTP Software will be doing a SLIP (Serial Line IP) driver for its
TCP/IP product, which includes the standard Darpa protocols and also
the Berkeley Unix protocols. There is already a SLIP driver for 4.2
and Suns, from rick@seismo. SLIP is currently used for point-to-point
links between Unix systems. Once the PC SLIP is done, you'll be able
to also use it to connect a number of PC's to a VAX or Sun via serial
lines and have the VAX or Sun gateway packets between the PC's and any
other networks it was on.
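
SLIP has no formal spec; the framing in common use is just byte-stuffing
around a frame-delimiter octet. A minimal sketch of the sending side
(the constants are as commonly implemented, and serial_putc stands in
for whatever routine writes one byte to the line):

	#define SLIP_END	0300	/* frame delimiter */
	#define SLIP_ESC	0333	/* escape prefix */
	#define SLIP_ESC_END	0334	/* ESC + this = literal END in data */
	#define SLIP_ESC_ESC	0335	/* ESC + this = literal ESC in data */

	extern void serial_putc();	/* write one byte to the serial line */

	void
	slip_send(pkt, len)
	unsigned char *pkt;
	int len;
	{
		int i;

		serial_putc(SLIP_END);		/* flush any line noise */
		for (i = 0; i < len; i++)
			if (pkt[i] == SLIP_END) {
				serial_putc(SLIP_ESC);
				serial_putc(SLIP_ESC_END);
			} else if (pkt[i] == SLIP_ESC) {
				serial_putc(SLIP_ESC);
				serial_putc(SLIP_ESC_ESC);
			} else
				serial_putc(pkt[i]);
		serial_putc(SLIP_END);		/* mark end of datagram */
	}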

FTP Software's address is:
	FTP Software, Inc.
	PO Box 150
	Kendall Square Branch
	Boston, MA  02142

	phone (617) 868-4878.
				- john romkey
				  late of MIT
				  now of ftp software

Biased? Of course I'm biased...
-----------[000055][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21 Mar 86 12:08:28 pst
From:      minshall%ucbopal@BERKELEY.EDU (Greg Minshall)
To:        info-ibmpc@usc-isib.arpa, tcp-ip@sri-nic.arpa
Subject:   tcp-ip for PC's
The University of California at Berkeley issued an RFQ towards the
end of last year.  The RFQ asked for a combination of hardware and
software which would allow:

	1.  PC-net programs to run on ethernet, using TCP-IP protocols.
	2.  FTP and TELNET.
	3.  Programmatic interface to the TCP-IP-linklevel (and UDP),
		for writing custom applications.
	4.  Assurances that the product bid would, in some unspecified
		time, become a commercial product.

The RFQ was sent to a number of companies.  The responses were evaluated,
and the contract was given to Ungermann-Bass.

The Ungermann-Bass product (which is NOT a commercial product at this
time) puts TCP-IP on board, is NETBIOS compatible (so, the IBM PC networking
software runs on top of it), comes with user FTP, and
allows us to port our own 4.2 applications over (it is interesting,
though not surprising given our location, that we have worked hard to
try to get an interface that allows for the 4.2 networking calls to work
as in the 4.2 manual.  I'm not a bigot about how great they are; I just
think they are a [somewhat malleable] standard).

Of course, this is a new TCP implementation.  That means that certain
algorithms which impact the efficiency of the protocol are unlikely
to be optimal this early in its life.  On the other hand, the University's
RFQ requested a 20KBytes/second FTP file transfer rate, and the product
we are currently using outperforms that (to put the requested number
in perspective, unloaded Vax 750's seem capable of doing about 60 KBytes/
second, while an IBM 3081 using a DACU does barely 20 KBytes/second [though
there is more to the 3081/DACU performance than just this miserable number]).

The product we currently run does hostname to hostnumber translation
via static tables.  There has been considerable discussion within
Ungermann-Bass and within the University about the "right" way to do the
name lookups.  Basically, the question here is whether to use an
IEN116 name server or the new Domain Name Server.  The final product
delivered to the University will support one of these protocols.  This
final product should be delivered within the next few months.

My hope, certainly, is that this will become a commercial product very soon.
I believe this to be Ungermann-Bass's intention, too, but you'd have
to talk with Ungermann-Bass marketing people about this.  The University's
interest in this becoming a commercial product has to do with our desire
to have a good vendor support for the product.  One-of-a-kinds don't
have that kind of support; real live products may.

My one comment on other TCP-IP packages I've noticed so far is that
NETBIOS compatibility is a large, missing feature.  I worry a bit that
many of us say "foo" to IBM PC networking, but that many of our end
users (say small, non-computer oriented departments) are going to see
many of the PC networking features as being very useful.  It is also
true that allowing NETBIOS compatibility allows us to NOT develop
the function that PC networking already provides (remote disk access,
etc.).  Of course, one problem in NETBIOS support is that it is hard
to imagine two vendors' mappings of PC Networking -> TCP/UDP/IP
being compatible.  We would hope, vainly I'm sure, that there would
be some meeting of the minds between the various developers on this.

Greg Minshall
minshall@berkeley.edu
minshall@ucbcmsa.bitnet
(415)642-0530
-----------[000056][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21 Mar 86 10:53:05 est
From:      tipler@dmc-crc.ARPA (Brad Tipler)
To:        600213%ofvax@LANL.ARPA, tcp-ip@sri-nic@lanl.ARPA
Cc:        tipler@dmc-crc.ARPA
Subject:   Re:  HYPERLINK FOR PERKIN ELMER
We have HYPERLINK running under VAX VMS. Although we are having trouble making
it work right now, it once worked fine. It is obviously very similar to its
ancestor ACCESS-T because it exhibited the same asymmetrical performance;
FTPs in one direction were much faster than FTPs in the other direction.
We had it connected via an Interlan Ethernet to a VAX running 4.2 BSD.
We are hoping that the problem is with the Interlan boards.

It seems to me that HYPERLINK has no concept of multiple interfaces, or routing
for that matter.

To aid in our current debugging attempts, it would be nice if  it supplied more
information on why communication is not working, rather than just "it timed out".
To be fair I don't know of any suppliers which give you this sort of tool.

We are a beta test site and have found that the HYPERLINK people are quite
willing to help in the debugging to the best of their ability.

Brad.
-----------[000057][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21 Mar 86 11:35 EST
From:      "John G. Ata" <Ata@RADC-MULTICS.ARPA>
To:        TCP-IP@SRI-NIC.ARPA
Subject:   DCEC gateway overloading

          Noticed this week that the response time between MILNET and
ARPANET has slowed down dramatically between 9 and 6 to the point that
for every packet we send, it is on the average retransmitted 3-4 times.
Now a 25% - 50% retransmission rate is bad enough but tolerable; however
a 300-400% retransmission rate is just unworkable.  Our host
(RADC-MULTICS) is using DCEC-GATEWAY as its gateway into the ARPANET,
and I believe that the gateway is somehow overloaded.  We don't seem to
have as many problems receiving data (probably routed through other
gateways), but it appears that DCEC-GATEWAY is dropping a large number
of packets from our host.
          Is there a reason for this happening, like temporary rerouting
of packets through DCEC for various reasons such as a downed gateway? Or
are we to expect this permanently?  This situation is clearly
unacceptable when trying to do real work.

                    John G. Ata

-----------[000058][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21 Mar 86 17:30:36 est
From:      lixia@COMET.LCS.MIT.EDU (Lixia Zhang)
To:        Makey@LOGICON.ARPA
Cc:        tcp-ip@sri-nic
Subject:   Re:  What's a reasonable time-to-live?
Jeff,

You asked a good question, but I may not have a good answer.  As far as I know,
currently the TTL (time-to-live) field in the IP header is used, at best, as a
gateway hop count, i.e. each gateway is required to decrease TTL by at least
1.  (I'm not aware of any gateway implementation that measures the staying
time of the packet and decreases TTL accordingly.  Some people may tell us if
they know.)
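
In other words, what a gateway actually does is closer to a hop counter
than a clock. A sketch of that behavior (all names invented, no
particular gateway's code):

	struct ip_hdr {
		unsigned char ip_ttl;	/* ... other IP header fields ... */
	};

	extern void icmp_time_exceeded();	/* tell the source why */
	extern void forward();			/* send toward the next hop */

	void
	gateway_hop(ip)
	struct ip_hdr *ip;
	{
		if (ip->ip_ttl <= 1) {		/* would expire at this hop */
			icmp_time_exceeded(ip);
			return;			/* drop the packet */
		}
		ip->ip_ttl--;			/* charge at least one hop */
		/* (the IP header checksum must be recomputed here) */
		forward(ip);
	}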

But even if gateways did decrease TTL by real elapsed time, it would not mean
much anyway, because TTL does not count the network packet transfer delay
between gateways.  (To see how long the net delay might be in the worst case,
consider the fact that the Arpanet source-IMP retransmission timer is 30
seconds!)

To sum up the above:
- The TTL value is not the expiration lifetime of IP packets, since it does
  not take network transmission delays into account.
- TTL is now used as a gateway hop count; it prevents packets from looping
  indefinitely inside the internet.  So setting TTL to a large value is not
  a good idea.

Lixia
-----------[000059][next][prev][last][first]----------------------------------------------------
Date:      21 Mar 1986 21:35:40 PST
From:      POSTEL@USC-ISIB.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   major changes to 1822

I agree that RFC 979 describes some very significant changes to the
Interface Between a Host and an IMP.  I agree that it is important for
anyone that is responsible for an ARPANET or MILNET network interface
driver to look at this very carefully.  It may also have impact in
other parts of the network code in ARPANET and MILNET hosts.

I am also concerned about the removal of the "uncontrolled" message
capability.  Back in the olden days this was called a "type 3"
message.

This feature was introduced into the ARPANET to support packet
speech.  A key aspect of real time packet speech is that low delay is
important, and even more important is low variation in delay.  Speech
can tolerate the loss of a few packets now and then.  The reliability
mechanisms (timeout and retransmission if no ack) introduce far too
much variation in delay.  It is true that very little use has been
made of type 3 messages in recent years.  However, with the recent
work in multimedia protocols turning to focus on a multimedia real
time conferencing system, there may well be the need for type 3 again.

The notion that type 3 is only going away temporarily is a bit
misleading.  Type 3 goes away in the move from release 6 to release 7,
and something comes in in the move from release 7 to release 8.  The
time between these moves could well be over a year (based on the
recent history of such moves).  And the something that comes back is
supposed to have a multipacket mode, which implies a reassembly
timeout.  (The messages that time out are ones that wouldn't be
delivered anyway, so as long as there are enough buffers that
subsequent messages get through while the timeout is ticking
away, it may be ok.)

I got a bit confused about all these connections.  In 3.1.3 about
AHIP, it says that the host can specify the connection in the handling
type field.  In 3.1.4 about Standard X.25, it says something about
link numbers mapping to different connections.  Maybe these are
different types of connections?  If not, what gives, if so, how does
Standard X.25 control the type of connections discussed in 3.1.3?

It seems to me that the new end to end puts a lot of faith in the new
congestion control.  Based only on the new end to end it seems that
each host could have in play in the network up to 127 messages (of 8
packets each) on each of 256 connections to each other host.  With
only 200 host on the net, each host could give its IMP over 50 million
packets to deliver before being blocked.  Clearly this is wrong. I
must have missed something.  It is going to be a lot harder for a host
to avoid being blocked than it is now (since the blocking condition
will be harder to calculate or predict).  OOPS, did I see a note that
the new congestion control does not go in till release 8?  If so, what
stops this potential overloading?
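
For the record, here is the arithmetic behind that 50-million figure,
with the multipliers as I read them from the spec:

	#include <stdio.h>

	main()
	{
		long per_pair = 127L * 8 * 256;	/* msgs * pkts * connections */

		printf("%ld packets per host pair, %ld over ~200 hosts\n",
		    per_pair, per_pair * 200L);	/* 260,096 and ~52 million */
		return 0;
	}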

There sure is a lot of good stuff in this new end to end, especially
getting the X.25 support up to snuff, and the interoperability between
AHIP and Standard X.25 hosts.

Maybe it would help to have more publication of plans and ideas about
changes to the Host-IMP interface in the early stages.  Then there
could be more feedback about priorities and how plans for one part of
the system might interact with plans for another part (e.g., type 3
vis a vis multimedia conferencing).

--jon.
-------
-----------[000060][next][prev][last][first]----------------------------------------------------
Date:      Sat, 22 Mar 86 23:22:42 est
From:      romkey@BORAX.LCS.MIT.EDU (John Romkey)
To:        tcp-ip@sri-nic.arpa
Subject:   4.2 Unix talk protocol
Has anyone out there figured out the 4.2 talk protocol? I'd rather not
have to figure it out from the source code if I can avoid it...
Probably best just to reply to me if you have; I don't imagine
everyone else on this list is really interested. Thanks!
				- john romkey
-----------[000061][next][prev][last][first]----------------------------------------------------
Date:      23 Mar 1986 15:53:16 EST
From:      MILLS@USC-ISID.ARPA
To:        ron@BRL.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        MILLS@USC-ISID.ARPA
Subject:   Re: Poor X-core performance
In response to the message sent      Thu, 20 Mar 86 2:24:42 EST from ron@BRL.ARPA

Ron,

IPTO-ECHO is hiding behind a Buttergate on an Ethernet at the present time. You
might try DCN-WWVB (128.4.0.15), which is physically the same path as
DCN-GATEWAY (10.0.0.111) and in the same machine. If useful, I can easily
bring up an alias for 128.4.0.15 on a variant of the 10.0.x.111 address.
Remember the old Port Expanders?

Dave
-------
-----------[000062][next][prev][last][first]----------------------------------------------------
Date:      Mon, 24 Mar 86 10:20:56 EST
From:      Andrew Malis <malis@bbnccs.ARPA>
To:        POSTEL@usc-isib.arpa
Cc:        tcp-ip@sri-nic.arpa, malis@bbnccs.arpa
Subject:   Re: major changes to 1822
Jon,

I have already discussed the uncontrolled messages, in my reply
to Mark's message.  I would like to add that when the datagrams
are implemented in Release 8, any datagrams containing 128 or
fewer octets will be sent as a single packet, and these should
present the same delay characteristics as the current
uncontrolled messages.  I would also like to add that you only
get the low variation in delay when the network is relatively
uncongested, which was always the case in the old ARPANET when
the packet speech work was actively underway.  As you probably
know, the old ARPANET was only ever using a small percentage of
its total capacity.  The current ARPANET is being run at a higher
utilization than was the combined ARPANET/MILNET just before
the split, and that was more highly utilized than the ARPANET
of the 70s and even early 80s.  At the current utilization
levels, I would want to run a series of experiments before I
could predict the delay characteristics of even the old
uncontrolled messages, and their current suitability for speech.

On congestion control:

Store-and-forward congestion control is what is scheduled for
Release 8.  It is designed to allow the store-and-forward
subnetwork to feed back to source PSNs congestion information
concerning the route a packet will take, and to control the rate
at which packets are submitted into the subnet.  The source PSNs'
end-to-end, in turn, will use this feedback to control the rate
at which hosts are allowed to submit traffic to the net.  This
congestion control is necessary before we begin any widespread
use of datagrams.

Release 7 includes end-to-end congestion control.  This monitors
the availability of resources in the source and destination PSNs
of a traffic flow, and, if necessary, slows down the rate of
submission from a host if it is causing congestion (lack of
resources) in either the source or destination PSNs.  In each
PSN, there are configuration parameters limiting the amount of resources
that each host can use (so that one host cannot saturate a PSN
and lock out other hosts).  So, while the new EE contains the
capability for hosts to create a large number of connections,
they will only be able to submit unacknowledged traffic until
they begin to congest either the source or destination PSN.  At that
point, they will be slowed down by using whatever means are
available in the host-IMP protocol.  For X.25 hosts, this
includes withholding acks and issuing RNRs; for AHIP hosts, this
may mean issuing incompletes or blocking.

It is true that since the PSN is now willing to buffer the ninth
(or tenth, or ...) outstanding message on an AHIP connection, it
will be much harder to predict when a host will be blocked.
However, hosts should be blocked LESS often than before, since
the old EE would also block if faced with a resource shortage.

On connections: 

The new EE has one type of connection, and is now allowing more
than one connection to exist simultaneously between AHIP hosts
(or between an AHIP host and a Standard X.25 host) where the old
EE only allowed one.  This gives these hosts the same
capabilities that Basic X.25 hosts enjoy, and is meant to be for
the hosts' benefit (especially for a Standard X.25 host, which
can now use a separate LCN for each logical flow to an AHIP host,
rather than being forced to multiplex all of its traffic over the
same LCN).

Andy
-----------[000063][next][prev][last][first]----------------------------------------------------
Date:      24 Mar 1986 1502-PST (Monday)
From:      Barry Leiner <leiner@RIACS.ARPA>
To:        Mark Crispin <MRC%PANDA@SUMEX-AIM.ARPA>
Cc:        TOPS-20@SU-SCORE.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Re: major change to 1822 (IMP) software
Mark,

You might be interested to know that I have been lobbying with the DDN
and BBN for several years now to run an experiment using IP over
ARPANET uncontrolled packets (subtype 3).  My reasoning was that, since
some statistics I saw showed that something like half the packets were
arpanet acknowledgments (RFNMs), it would seem that a significant
amount of traffic loading on the net might be eliminated and we might
actually get better performance.  However, thus far that experiment has
not been performed.

Barry

----------
-----------[000064][next][prev][last][first]----------------------------------------------------
Date:      Tue, 25 Mar 86 0:28:49 EST
From:      Mike Muuss <mike@BRL.ARPA>
To:        Brian Kantor <brian@sdcsvax.ucsd.edu>
Cc:        tcp-ip@sri-nic.arpa
Subject:   Re:  ping/record route for 4.3BSD Unix wanted
4.3 BSD includes the PING program as standard.  (Did it go in /etc/ping?)

I will mail you source under separate cover, just in case.
	Best,
	 -Mike
-----------[000065][next][prev][last][first]----------------------------------------------------
Date:      Tue, 25 Mar 86 09:30:49 pst
From:      ucdavis!midacs!nsadmin@ucbvax.berkeley.edu
To:        mod.protocols.tcp-ip, tcp-ip@sri-nic.arpa
Subject:   Wanted: TCP/IP based command server system

Moderator:

  Can you supply any pointers, or if appropriate, post this request to
mod.protocols.tcp-ip.  I have tried posting to net.wanted.sources with
no response.

  I am looking for a command server based on TCP-IP.  Due to security
constraints at our site, rsh is not satisfactory.  I would like something
similar to uux, i.e. a list of commands, possibly per host/user, that the
server would execute.

  Does something like this exist?  Any help would be greatly appreciated.
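
To make the shape of what I'm after concrete, a minimal sketch (not an
existing package; the port number, the allow-list, and the
one-command-per-connection protocol are all invented, and real use
would need per-host/per-user checks, authentication, and much more
careful I/O):

	#include <string.h>
	#include <sys/types.h>
	#include <sys/socket.h>
	#include <netinet/in.h>

	static char *allowed[] = { "uptime", "who", "df", 0 };

	main()
	{
		int s, c, i, n;
		char cmd[64];
		struct sockaddr_in sin;

		s = socket(AF_INET, SOCK_STREAM, 0);
		memset((char *)&sin, 0, sizeof(sin));
		sin.sin_family = AF_INET;
		sin.sin_port = htons(4000);	/* invented port number */
		bind(s, (struct sockaddr *)&sin, sizeof(sin));	/* checks omitted */
		listen(s, 5);
		for (;;) {
			c = accept(s, (struct sockaddr *)0, (int *)0);
			if ((n = read(c, cmd, sizeof(cmd) - 1)) > 0) {
				cmd[n] = '\0';
				cmd[strcspn(cmd, "\r\n")] = '\0';
				for (i = 0; allowed[i]; i++)
					if (strcmp(cmd, allowed[i]) == 0)
						system(cmd);	/* exact match only */
			}
			close(c);
		}
	}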

Thanks...


-----------[000066][next][prev][last][first]----------------------------------------------------
Date:      25 Mar 86 14:16:02 mst
From:      Greg McArthur 303-497-1291 <greg%ncar.csnet@CSNET-RELAY.ARPA>
To:        tcp-ip@sri-nic.ARPA
Cc:        
Subject:   An Announcement


                               Announcing the
                  1986 NCAR SUMMER SUPERCOMPUTING INSTITUTE
                    16 - 27 June 1986   Boulder, Colorado



The University Corporation for Atmospheric Research (UCAR) is pleased to
announce that a Supercomputing Institute will be held this summer at the
National Center for Atmospheric Research (NCAR) located in Boulder, Colorado.
The Institute is being sponsored by the National Science Foundation's Office
of Advanced Scientific Computing (NSF/OASC) with assistance from NSF's Divi-
sion of Ocean Sciences.  The Institute will be managed and coordinated by
NCAR's Scientific Computing Division (SCD).

Background

The Institute is a two-week intensive training experience designed to provide
an understanding of how supercomputing capabilities can augment scientific
research.  Learning how to apply supercomputing methods to a variety of
investigations that require large-scale computation is a key objective of the
Institute.  To meet this objective, the Institute's curriculum has been care-
fully arranged to give each participant an opportunity to explore new
approaches to using supercomputing technology.  Lecture and laboratory ses-
sions are geared to maximize the learning experience by providing real-world
applications based upon current research efforts employing supercomputers.

A maximum of 25 senior graduate students, post-doctoral fellows, and junior
faculty will be selected from national, accredited universities and research
institutions that confer advanced degrees in the atmospheric and physical
oceanographic sciences, solar physics, and related disciplines.  Applications
from individuals attending any institution meeting this requirement will be
considered.

Institute Curriculum

The 1986 Supercomputing Institute will cover the following topics:

+ Operating Systems and Machine Configurations
+ Vectorization and Optimization Techniques
+ Parallelism
+ Numerical Techniques
+ Software Availability and Quality
+ Communications and Networking
+ Graphics

Institute Benefits

Successful candidates will have all travel, per diem, and accommodation
expenses paid for by the Institute.  In addition, computing time will be made
available to all participants on the CRAY-1 supercomputer at NCAR.  Support
services consistent with all institute-related computing will also be pro-
vided.

Application Requirements

To be considered for admission to the 1986 Supercomputing Institute, an
applicant should have a minimum graduate GPA of 3.5 (post-docs and junior
faculty excepted), provide two letters of recommendation indicating the
applicant's research capabilities and the potential for their research to
advance disciplinary knowledge, and an abstract of not more than 250 words
relating how the applicant's research endeavors (either planned or underway)
would benefit from applying supercomputing technology to their investiga-
tions.  Successful candidates should have some knowledge of FORTRAN 77 and be
working on research projects that already use supercomputers or anticipate
using them in the near future.

Applicants will be notified of their acceptance into the Institute on 16 May
1986.

Application Deadline

Individuals who meet the above requirements are encouraged to apply for
admission to the 1986 Summer Supercomputing Institute.   Please complete the
attached application form and mail it, along with all supporting materials,
to:

Dr. Gregory R. McArthur
1986 NCAR Summer Supercomputing Institute
Scientific Computing Division
National Center for Atmospheric Research
P.O. Box 3000
Boulder, Colorado  80307

IMPORTANT NOTE:  All materials must be received by 1 May 1986.

-----------------------------------------------------------------------

APPLICATION FORM:

Name:                                                                  
     ------------------------------------------------------------------
Address:                      City              State       Zip        
        ----------------------    --------------     -------   --------
University/Institution Name:                                           
                            -------------------------------------------
Department:                                                            
           ------------------------------------------------------------
Telephone: Home (  )                      Work (  )                    
                    ----------------------         --------------------

I am a    Graduate Student      Post-Doctoral Fellow
       --                   ---                     
          Junior Faculty        Other:              
       --                   ---       --------------

I am a U.S. Citizen:     Yes      No:  I am a citizen of:              
                      ---      ---                       --------------

Please include two letters of recommendation, an abstract describing
your research, and your current GPA (if applicable) with this
application.

         THIS APPLICATION MUST BE RECEIVED NO LATER THAN 1 MAY 1986.


-----------[000067][next][prev][last][first]----------------------------------------------------
Date:      Tue 25 Mar 86 20:12:43-PST
From:      Bob Knight <KNIGHT@SRI-NIC.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Cc:        cf-staff@SRI-NIC.ARPA, feinler@SRI-NIC.ARPA, stjohns@SRI-NIC.ARPA
Subject:   The NIC and FTPing host tables
Hi - it's taken me quite a while to draft this message.  However, things
are getting intolerable, and I feel that it's appropriate to broach the
subject.

Quite frankly, we're experiencing tremendous network load from people
(automatically) FTPing the host tables when we release one.  There are
several modes of behaviour which are most offensive:

	o  Many people choose a convenient time, such as midnight.  The
	   consequences are that we get about 10 FTP server jobs running
	   simultaneously.  

	o  Some sites with multiple hosts in close proximity have their
	   hosts get their tables from us, rather than having a single
	   host at the site get it and propagate it.  A sample site had
	   THREE FTP's going from distinct hosts, all getting the host
	   table.

	o  Some sites simply FTP the host table every day, whether they
	   need it or not.  This is anti-social.

     I feel that a simple and workable solution is for some major sites
(BBN, ISI, Stanford, MIT - this is by no means a request or finger point)
to serve as "host table servers", thus relieving the load on the NIC.
Perhaps a policy implementation modelled after domains is in order.  I do
know that if things don't change, we'll cut back on the frequency of host
table releases from sheer necessity.

     Discussion?

Bob
-------
-----------[000068][next][prev][last][first]----------------------------------------------------
Date:      25 MAR 86 20:29-EST
From:      WITLICKI%WILLIAMS.BITNET@WISCVM.WISC.EDU
To:        TCP-IP@SRI-NIC.ARPA
Subject:   TCP/IP software on DEC Ethernet boards
    Does anybody out there have any experience using TCP/IP software
with the new DELUA Ethernet board from DEC?  Is it software
compatible with the DEUNA board which it apparently replaces...

---- Randy Witlicki, Williams College, Williamstown, Massachusetts

Bitnet  :  Witlicki@Williams
ArpaNet :  Witlicki%Williams.Bitnet@Wiscvm
-----------[000069][next][prev][last][first]----------------------------------------------------
Date:      Tue, 25 Mar 86 21:01:34 EST
From:      Mike Muuss <mike@BRL.ARPA>
To:        TCP-IP@sri-nic.arpa
Subject:   Growth
I recently learned of the deal struck between NSF and DARPA
which will result in the ARPANET backbone to be expanded by
as many as 40 additional IMPs in support of university connection
to NSF supercomputer assets.

This makes me wonder about the desirability of continuing the split if
both halves are now going to become full production networks, rather
than having them divided by protocol_experiments -vs- production use,
where production use was defined as any non- link-level or IMP-level
or IP-level communications work.  (ie, the "protocol_experimenter"
group was defined to be very small).  It seems to me that the
trunking expenses of running TWO cross-country networks is likely
to be substantial.  I understand the desire for trunk encryption
and node site security don't mix well with the university setting,
but this may grow to a very expensive luxury.  I can imagine trunking
costs in the millions of dollars/year to enhance ARPANET back up
to its former size.  Anybody care to comment?

I also found it a bit shocking to learn of a policy change of this
magnitude in SCIENCE magazine, rather than here on the net.
Harumph.

At the risk of sounding like a broken record, the performance problems with
the core gateway system and EGP will *have* to be fixed before the
NSF Universities start coming online.

I would like to point out that the Army and NASA have their supercomputers
on the *MILNET*, and the NSF supercomputers will be on the *ARPANET*,
and the Universities are split between both.  Expect cross-core
congestion of a magnitude so far only dreamed of by BBN NOC staff
in their worst nightmares...

By the way, if there are any MILNET trunk capacity planners on this
list, please take note:  The first Army supercomputer will be getting
installed at BRL in the June-July timeframe, with the second one
operating by Christmas.  Any chance all those extra trunks for our
IMP might get installed before we double our present traffic levels?
Better beef up the core, too!

I apologise for often sounding like the prophet of Doom.  Overall,
we at BRL are highly pleased with the InterNet.  I just worry
overmuch about the growing pains, as the system succeeds, succeeds,
and then succeeds some more.

	Best,
	 -Mike
-----------[000070][next][prev][last][first]----------------------------------------------------
Date:      Tue 25 Mar 86 22:33:00-EST
From:      "J. Noel Chiappa" <JNC@XX.LCS.MIT.EDU>
To:        tipler@DMC-CRC.ARPA, 600213%ofvax@LANL.ARPA
Cc:        JNC@XX.LCS.MIT.EDU, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  HYPERLINK FOR PERKIN ELMER
	Brad:

	It turns out that a common problem with a lot of host implementations
is that they take ICMP Error messages that the gateways go to some effort
to produce, and ..... throw them on the floor! I know that when I was trying
to test out the ICMP Error messages I added to my gateway a year or so ago,
I don't think there was a single kind of machine at MIT that I found that
would do anything with those messages. It's silly, cause the gateways often
*do* know why you are losing, but all you ever get from 'telnet foo' is
'timed out'. Grrr.
	As far as multiple interfaces go, the IP architecture doesn't
have any good way of dealing with that other than becoming a gateway.
(I have flamed extensively on why it's bad to try to add rudimentary
gateway functionality to IP implementations that are primarily intended
for hosts, so I won't bore everyone by repeating that!) I am trying to
get changes made to the IP spec to help multi-homed hosts that want to
stay pretty simple, but there's nothing at the moment. Hopefully that
will change soon.

	Noel
-------
-----------[000071][next][prev][last][first]----------------------------------------------------
Date:      26 Mar 1986  8:00:55 EST (Wednesday)
From:      T. Michael Louden (MS W422) <louden@mitre-gateway.arpa>
To:        makey@logicon
Cc:        tcp-ip@sri-nic, louden@mitre-gateway.arpa
Subject:   Re: What's a reasonable time-to-live?
If no one comes up with a good reason for any one TTL value,
you might try 15, since this is the value recommended in the standard
(MIL-STD-1777).  This is the current DDN standard.

-----------[000072][next][prev][last][first]----------------------------------------------------
Date:      Wed 26 Mar 86 08:42:01-EST
From:      Dennis G. Perry <PERRY@IPTO.ARPA>
To:        POSTEL@USC-ISIB.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, perry@IPTO.ARPA
Subject:   Re: major changes to 1822
I would like to strongly support the last part of Jon's message about
planning changes in the Arpanet and getting those ideas out into the
community early for discussion.  I am not in favor of someone or some
organization deciding what is best without a full and open discussion
(unless, of course, it is me making the decission :-)).

dennis
-------
-----------[000073][next][prev][last][first]----------------------------------------------------
Date:      Wed 26 Mar 86 13:41:48-EST
From:      "J. Noel Chiappa" <JNC@XX.LCS.MIT.EDU>
To:        mike@BRL.ARPA, brian@SDCSVAX.UCSD.EDU
Cc:        tcp-ip@SRI-NIC.ARPA, JNC@XX.LCS.MIT.EDU
Subject:   Re:  ping/record route for 4.3BSD Unix wanted
	Record route isn't the best tool. If a gateway decides to use
as a next hop a machine which is dead, then the Ping packet will
disappear into a black hole. A much better tool is an ICMP packet
containing as data an address, which, when sent to a gateway, causes
the gateway to return information about the next hop that it would use
to route the packet, i.e. the address that it would send the packet
out to, etc. It's quite easy to build a tool which uses this to find
out what route a packet would take. This is one more of the 'extended
ICMP' messages I'm trying to get added to the ICMP spec.

	Noel
-------
-----------[000074][next][prev][last][first]----------------------------------------------------
Date:      Wed, 26 Mar 86 15:46:36 EST
From:      Mike Muuss <mike@BRL.ARPA>
To:        Lixia Zhang <lixia@comet.lcs.mit.edu>
Cc:        Makey@logicon.arpa, tcp-ip@sri-nic.arpa
Subject:   Re:  What's a reasonable time-to-live?
On the other hand, values smaller than 10-12 are a bad idea, given
the burgeoning growth of LANs hanging off of Campus nets (CANs!),
hanging off several interconnected WANs.  Consider also the NSF
idea of LANs to CANs to state-area-nets to regional-area-nets to WANs.
Don't preclude talking to somebody next year, just because he was too
far away (hop counts).

I have already hit a circumstance where the default 4.2 MAX_TTL
value was too small for a connection I wanted to make, and I was
unhappy about having to regen all my kernels.

My own preference is to use a largish number (like 50).  This will
extinguish packets caught in a routing loop, but isn't likely to
prevent me from talking to somebody far far away.
	Best,
	 -Mike
-----------[000075][next][prev][last][first]----------------------------------------------------
Date:      26 Mar 1986 19:28:27 PST
From:      POSTEL@USC-ISIB.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Death to IEN 116

Please kill any implementation of IEN 116.

All translation of host or domain names to IP Internet addresses
should use the domain name service described in RFCs 882, 883, 920,
973, and 974.

--jon.
-------
-----------[000076][next][prev][last][first]----------------------------------------------------
Date:      Wed, 26 Mar 86 19:39:55 EST
From:      jas@proteon.arpa
To:        WITLICKI%WILLIAMS.BITNET@WISCVM.WISC.EDU
Cc:        tcp-ip@sri-nic.arpa
Subject:   Re: TCP/IP software on DEC Ethernet boards
The DELUA is NOT software compatible at the device level. This is
why it won't work until VMS V4.4 or something like that (late
summer). Unless they pull a dumb one, the device driver will
have the same interface as the XEDRIVER/XQDRIVER (DEUNA/DEQNA).
If your TCP/IP goes through the "shared DEUNA driver", it will
work on VMS, but only when the new driver arrives. As for
4.2BSD, you may wait a long time, or until Ultrix 2.0 (?).
-------

-----------[000077][next][prev][last][first]----------------------------------------------------
Date:      26 Mar 86 22:58 EST
From:      Rudy.Nedved@A.CS.CMU.EDU
To:        POSTEL@USC-ISIB.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Death to IEN 116
Jon,

I have no problem with not using IEN 116. Parts of CMU use it to
make name lookups livable on the IBM PCs running MIT PC/IP, and
we will be phasing that out (very actively).

However,"All translation of host or domains to IP Internet addresses"
is a very exclusive statement. What about very powerful resolvers that
will accept a name request and do what the domain system expects?

Confused,
-Rudy
-----------[000078][next][prev][last][first]----------------------------------------------------
Date:      Thu, 27 Mar 86 19:21:55 PST
From:      Murray.pa@Xerox.COM
To:        TCP-IP@SRI-NIC.ARPA
Cc:        Murray.pa@Xerox.COM, JLarson.pa@Xerox.COM
Subject:   Anybody normally send packets with unused options?
I'm doing my homework in preparation for teaching Cedar about IP and
TCP. Cedar doesn't really like variable length chunks in the middle of
records. I can use pointers but it seems worthwhile to explore
alternatives.

I'm thinking of pretending that the options fields live at the end of
the buffer and making the low level driver slosh things around to put
everything in the correct place. Since the options fields are almost
always unused, the sloshing won't normally take any cycles. It looks
like I can have my cake and eat it too.	
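
	A minimal C sketch of that layout (names invented, not Cedar
code): with the options parked at the tail of the buffer, the slosh
becomes a no-op whenever the options are absent.

	#include <string.h>

	#define BUFSIZE 576

	struct packet {
	    unsigned char hdr[20];	/* fixed-size header, a plain record */
	    unsigned char data[BUFSIZE];/* payload at offset 0; options,
					   when present, parked at the tail */
	    int datalen;
	    int optlen;			/* almost always zero */
	};

	/* Driver-level slosh, run just before transmission.  In the
	   usual optlen == 0 case the header and data are already
	   contiguous and in wire order, so this costs nothing.  (The
	   sketch assumes datalen + 2*optlen <= BUFSIZE, so the tail
	   copy is never clobbered.) */
	void slosh(struct packet *p)
	{
	    if (p->optlen == 0)
	        return;			/* common case: zero cycles */
	    memmove(p->data + p->optlen, p->data, p->datalen);
	    memcpy(p->data, p->data + BUFSIZE - p->optlen, p->optlen);
	}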

The question is, has somebody else already sliced the cake some other
way? For example, is some system normally sending a few empty option
bytes to trick the alignment into coming out clean in their
environment?

Does anybody have any data on how often non-empty options are used? John
and I watched some traffic arriving at XEROX.COM, and we didn't see any.
-----------[000079][next][prev][last][first]----------------------------------------------------
Date:      Thu, 27 Mar 86 22:17:32 EST
From:      Mike Muuss <mike@BRL.ARPA>
To:        tcp-ip@sri-nic.arpa
Cc:        PMBS@BRL.ARPA, Medin@orion.arpa, Howard@BRL.ARPA
Subject:   T1 on IMPs
Today I heard about something that, if true, may be truly wonderful.

The claim was made that C/30E IMPs (sorry, PSNs) with PSN 6.0 software,
and no other hardware changes, are now capable of supporting T1
(1.544 Mbps) trunks, and also capable of supporting T1 host attachment
for X.25 Standard Mode hosts.

If this is true, I feel unhappy about not having heard it on the net first.
If this is false, the folks at WSMR are going to be very disappointed,
because they think they are upgrading their entire C/30 campus net (CAN)
next month.

	Curious,
	 -Mike
-----------[000080][next][prev][last][first]----------------------------------------------------
Date:      Fri, 28 Mar 86 08:57:27 -0500
From:      Dennis Rockwell <dennis@SH.CS.NET>
To:        Bob Knight <KNIGHT@sri-nic.ARPA>
Cc:        tcp-ip@sri-nic.ARPA, cf-staff@sri-nic.ARPA, feinler@sri-nic.ARPA, stjohns@sri-nic.ARPA
Subject:   Re: The NIC and FTPing host tables
I have some questions about this:

Does anybody have a 4.2BSD implementation of the hostnames (port 101)
server?

Does the hostname server present a smaller load to SRI-NIC?

How many people use the hostnames server instead of anonymous FTP?

Dennis Rockwell
CSNET Technical Staff
-----------[000081][next][prev][last][first]----------------------------------------------------
Date:      28 Mar 1986 11:56:22 PST
From:      POSTEL@USC-ISIB.ARPA
To:        Rudy.Nedved@A.CS.CMU.EDU
Cc:        tcp-ip@SRI-NIC.ARPA, POSTEL@USC-ISIB.ARPA
Subject:   Re: Death to IEN 116
In response to your message sent  26 Mar 86 22:58 EST

Rudy:

You say:

"However, 'All translation of host or domains to IP Internet
addresses' is a very exclusive statement.  What about very powerful
resolvers that will accept a name request and do what the domain
system expects?"

and that this leaves you confused.

If your powerful resolver does what the name system expects, I don't
see why you should be confused.

--jon.
-------
-----------[000082][next][prev][last][first]----------------------------------------------------
Date:      28 Mar 1986 1251-PST (Friday)
From:      Barry Leiner <leiner@RIACS.ARPA>
To:        Mike Muuss <mike@BRL.ARPA>
Cc:        TCP-IP@sri-nic.ARPA
Subject:   Re: Growth
Mike,

I share your concern about growth in traffic, particularly that between
milnet and arpanet, given the performance of the "mail bridges"
nowadays.

However, things are not quite as bad as your message made them sound.

1.  The arpanet is NOT being expanded by 40 imps.  Rather, the plan is
to expand it by roughly 25%, which means roughly 40 PORTS, or 15 imps.

2.  Hopefully, with the installation of the butterfly gateways,
we'll see a substantial performance improvement (meaning things won't
be as terrible as they are currently).

3.  New end to end protocols are likely needed to support the
scientific and supercomputer networking coming on line.  Hopefully, our
task force will be able to identify those requirements and get in front
of them.

Regards,

Barry

----------
-----------[000083][next][prev][last][first]----------------------------------------------------
Date:      Fri, 28 Mar 86 10:09:53 EST
From:      Andrew Malis <malis@bbnccs.ARPA>
To:        Mike Muuss <mike@brl.arpa>
Cc:        tcp-ip@sri-nic.arpa, PMBS@brl.arpa, Medin@orion.arpa, Howard@brl.arpa, malis@bbnccs.arpa
Subject:   Re: T1 on IMPs
Mike,

Sorry to have to dispel the rumor.  C/30Es cannot handle T1.  The
maximum trunk and synchronous host speed is 56KB, as always.  The
main difference between C/30s and C/30Es is more memory.

Andy
-----------[000084][next][prev][last][first]----------------------------------------------------
Date:      28 Mar 1986 13:51:05 PST
From:      POSTEL@USC-ISIB.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Bob Knight:

Hi.  It seems to me you have yet another reason to try to speed up the
conversion to the Domain Name System and the use of name servers.  I'd
suggest that if you can identify any of the sites that are making
excessive use of FTP to get the old HOSTS.TXT file, you contact the host
administrator and ask what their schedule is for converting to use of
name servers.

--jon.
-------
-----------[000085][next][prev][last][first]----------------------------------------------------
Date:      28 Mar 86 14:49:00 PST
From:      <art@acc.arpa>
To:        "tcp-ip" <tcp-ip@sri-nic>
Subject:   RE: RE: T1 on IMPs
> Mike,
> 
> Sorry to have to dispel the rumor.  C/30Es cannot handle T1.  The
> maximum trunk and synchronous host speed is 56KB, as always.  The
> main difference between C/30s and C/30Es is more memory.
> 
> Andy

I thought there were some 230KB/sec trunks in the network for particular
high-volume links (maybe that was 316 or Pluribus IMPs?).

I know from experience that running an HDH port at 316KB/sec (pgm error)
confuses the hell out of the IMP's port.  I've heard that C30s have
an effective maximum throughput in the range of 250-300KB/sec.

What about C300 or other new technology IMPs?  If 1822 is going to be
replaced by X.25, service much greater than 56KB/sec will be needed
by some sites.  We'd love to see IMPs with T1 rate interface capacity.

					<Art@ACC.ARPA>

------
-----------[000086][next][prev][last][first]----------------------------------------------------
Date:      Fri, 28 Mar 86 12:31:08 est
From:      karn@mouton.ARPA (Phil R. Karn at mouton.ARPA)
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  The NIC and FTPing host tables
A most helpful service to us running UNIX sites would be to maintain an
up-to-date copy of hosts.txt on a UNIX system, compressed with the popular
"compress" program. This algorithm typically reduces the size of hosts.txt
by 75% or so, a major win when squeezing it through a tiny and expensive
network path like our CSNET/X.25 link. A well-tuned 4.3BSD system should be
used for the repository; vanilla 4.2 TCP's retransmission algorithms are so
bad that I have little doubt it is THE cause of the lousy Internet
performance people comment about here so often.

Speaking of 4.2BSD TCP, how come DARPA hasn't put pressure on the vendors to
get their acts together? We have a large collection of VAXen, Pyramids, CCIs
and especially Suns. While I have tried very hard to make sure the machines
for which we have source have socially well-adjusted TCPs, there is little I
can do about our many object-only systems except to wait for new releases
from the vendors.  And none of them seem to consider TCP performance to be
the high-priority problem it is.

Phil

-----------[000087][next][prev][last][first]----------------------------------------------------
Date:      Fri, 28 Mar 86 12:31:42 EST
From:      Mitch Tasman <mtasman@bbncct.ARPA>
To:        tcp-ip@sri-nic.arpa
Subject:   Re: IP TTL
     For an IP datagram containing a TCP segment, the value for the IP TTL
field has been specified as "one minute".

     This is stated in both specifications for the Transmission Control
Protocol:  on page 51 of RFC 793, in the "TCP/Lower-Level Interface"
section, and on page 140 of the 12 August 1983 edition of MIL-STD-1778
in the section "9.4.6.3.19 Format net params".

 
					      Mitch Tasman
					      BBN Communications Corp.
-----------[000088][next][prev][last][first]----------------------------------------------------
Date:      28 Mar 1986 13:52:51 EST
From:      MILLS@USC-ISID.ARPA
To:        JNC@XX.LCS.MIT.EDU, mike@BRL.ARPA, brian@SDCSVAX.UCSD.EDU
Cc:        tcp-ip@SRI-NIC.ARPA, MILLS@USC-ISID.ARPA
Subject:   Re:  ping/record route for 4.3BSD Unix wanted
In response to the message sent  Wed 26 Mar 86 13:41:48-EST from JNC@XX.LCS.MIT.EDU

Noel,

It would be wonderful if you could submit your proposal on "extended
ICMP messages" to the task forces (INARC, which I chair, would be
ecstatic to receive it, but others might be too). Wonder would be even
more enhanced if your ideas were stuffed into an RFC. On the other hand,
if I read your message literally, you want to get your ideas "into
the ICMP spec," which might be interpreted to bypass the public comment
phase. I'm sure you don't mean to imply that.

The idea of an ICMP Spy message has been rattling around for some time.
It might be a time whose opportunity has come.

Dave
-------
-----------[000089][next][prev][last][first]----------------------------------------------------
Date:      Fri, 28 Mar 86 14:42:51 est
From:      karn@mouton.ARPA (Phil R. Karn at mouton.ARPA)
To:        tcp-ip@sri-nic
Subject:   Re: What's a reasonable time
I agree with Mike. TTL values tend to be hard to change, especially in large
numbers of object-only systems.  This value (15) has already caused a
problem with vanilla 4.2BSD gateways, which for some strange reason decided
to decrement TTL by 5 on each hop instead of 1.  Fixing the gateways was
easier than fixing all the hosts (because there are fewer of the former than
the latter) but I don't want to think about what will happen when the
Internet diameter reaches 15.

How often do routing loops occur? If they are rare events, then why not just
make TTL = 255? If loops do occur, this will provide an adequate incentive
to fix the routing algorithms. :-)

Phil
-----------[000090][next][prev][last][first]----------------------------------------------------
Date:      Fri, 28 Mar 86 15:03:04 EST
From:      Ra <root%bostonu.csnet@CSNET-RELAY.ARPA>
To:        KNIGHT@sri-nic.ARPA, tcp-ip@sri-nic.ARPA
Cc:        cf-staff@sri-nic.ARPA, feinler@sri-nic.ARPA, stjohns@sri-nic.ARPA
Subject:   Re:  The NIC and FTPing host tables

I'm new to some of this, but wouldn't the obvious thing to do be
to put up a difference file?  (Perhaps with a hook in the name, so
that the difference from, e.g., version 701 to the current one is
called something like HDIFF701.TXT; that way the string to FTP could
be built on the fly if you knew your current version.)  The difference
could then be patched in at the local host and away you go, quickly.

Obviously that implies a few applications, but I think they would be
basically trivial; analogues already exist (e.g. UNIX's diff and patch).
And for those who slavishly connect anyhow, I suppose a format for
a null patch file could be created. Or is this all a moot issue?

	-Barry Shein, Boston University

-----------[000091][next][prev][last][first]----------------------------------------------------
Date:      Fri 28 Mar 86 15:59:20-EST
From:      Dennis G. Perry <PERRY@IPTO.ARPA>
To:        MILLS@USC-ISID.ARPA
Cc:        JNC@XX.LCS.MIT.EDU, mike@BRL.ARPA, brian@SDCSVAX.UCSD.EDU, tcp-ip@SRI-NIC.ARPA, perry@IPTO.ARPA
Subject:   Re:  ping/record route for 4.3BSD Unix wanted
Let me encourage anybody with new ideas to submit them into the Internet
task forces for study and even better as an RFC.

I have been talking with the NBS people and encouraging them to do the same
for issues they have brought up.  I would especially like them to do this
before things get into the ISO/ANSI standardization areas.

The same applies to the NSF now that they are beginning to see the benefits
of supporting networking and related issues.

Let's work together and let the community explore the issues and have the
widest possible debate.  It is more interesting this way, and hopefully we
might discover and document the best way to do things.

dennis
-------
-----------[000092][next][prev][last][first]----------------------------------------------------
Date:      Fri 28 Mar 86 16:20:55-EST
From:      Dennis G. Perry <PERRY@IPTO.ARPA>
To:        mike@BRL.ARPA
Cc:        TCP-IP@SRI-NIC.ARPA, perry@IPTO.ARPA
Subject:   Re:  Growth
Let me take a crack at answering your concerns, Mike.  An MOA between DARPA
and NSF was finally signed in October 1985.  The initial increment of the
Arpanet is about 25% growth.  This does not necessarily mean 25% more IMPs.

What it does mean is that DDN will do a growth analysis of the network
as each increment comes in for processing.  The initial increment is
about 20 sites.  The network modeling is now in progress.  If estimated
traffic indicates IMP growth (I guess that is now PSN growth), then PSNs
will be added where engineering says they need to be added.

The second increment of sites to be added will come in sometime in the
summer or fall.  These are sites that require access to the NSF supercomputers.

The second phase is to allow NSF grantees general access to the Arpanet, not
just for supercomputer access.  

All this growth requires that the problems be properly addressed.  Get your
concerns into the IAB and its task forces so that decisions can be properly
discussed and formulated.

Ultimately the idea that is developing is to form a National Research
Internet, with the Arpanet and Arpa-Internet being the base of such a
development.  Interested agencies in this idea are DARPA, parts of DoD,
DOE, parts of NASA (namely NASA/NAS), NSF, and others.  The idea has
been put forth in a draft proposal to the FCCSET committee on supercomputers
from the networking subcommittee. (This is a White House committee being
chaired by Decker in DOE with the subcommittee being chaired by Cavalinni
of DOE.)  Other developments still under discussion are to upgrade the
Arpanet to T1 links or higher and to more fully integrate the WBnet into
the Internet to provide even higher bandwidth.

The National Research Internet would not only be a state of the art network
for researchers to use to gain access to advanced computational resources,
but would be maintained as state of the art by research into networking
and internetting.  Substantial engineering support will be necessary to solve
the operational problems as well.

I have been working on a paper for such a model with several others and
we hope to have this out for discussion soon.

dennis
-------
-----------[000093][next][prev][last][first]----------------------------------------------------
Date:      Fri, 28 Mar 86 16:33:03 EST
From:      sasaki@harvard.HARVARD.EDU (Marty Sasaki)
To:        jas@proteon.arpa
Cc:        WITLICKI%WILLIAMS.BITNET@WISCVM.WISC.EDU, tcp-ip@sri-nic.arpa
Subject:   TCP/IP software on DEC Ethernet boards
   Date: Wed, 26 Mar 86 19:39:55 EST
   From: jas@proteon.arpa

   The DELUA is NOT software compatible at the device level. This is
   why it won't work until VMS V4.4 or something like that (late
   summer)...

Wrong, wrong, wrong! (Well, maybe a little wrong.) We are a test site
for the DELUA. VMS 4.3 comes with the new driver that will work on
both the DEUNA and the DELUA. At the qio level things work fine. We
run DECNET and the Tektronix TCP/IP on the DELUAs with no problems and
no changes. Unless your TCP/IP does calls directly into the driver
routines there should be no problem. Using the i/o packet interface
should work.

The DELUA is very close to a DEUNA if you set the bits right. The UNIX
wizard at Harvard had one of his 4.2BSD systems talking to the DELUA
after 15 minutes. He just changed some bits in the DEUNA driver header
file, recompiled, and loaded the driver.

Of course this all changes when DEC starts down-loading code into the
DELUA to actually do network functions...

		Marty Sasaki
-----------[000094][next][prev][last][first]----------------------------------------------------
Date:      Fri, 28 Mar 86 18:58:23 est
From:      romkey@BORAX.LCS.MIT.EDU (John Romkey)
To:        Rudy.Nedved@A.CS.CMU.EDU
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   re: Death to IEN 116
   Date: 26 Mar 86 22:58 EST
   From: Rudy.Nedved@A.CS.CMU.EDU

   I have no problem with not using IEN 116. Parts of CMU use it to
   make name lookups livable on the IBM PCs running MIT PC/IP, and
   we will be phasing that out (very actively).

The (probably) final release of MIT's PC/IP should be available soon
(within a week, I think) through the MIT Microcomputer Center. This
release supports domain name resolution in addition to the old IEN 116
name resolver.

The Microcomputer Center's address is:
	MIT Microcomputer Center
	Room 11-209
	77 Massachusetts Avenue
	Cambridge, MA  02139

	Telephone (617) 253-6325
					- john romkey
					  late of MIT
-----------[000095][next][prev][last][first]----------------------------------------------------
Date:      Fri, 28 Mar 86 22:55:46 est
From:      romkey@BORAX.LCS.MIT.EDU (John Romkey)
To:        Murray.pa@Xerox.COM
Cc:        TCP-IP@SRI-NIC.ARPA, JLarson.pa@Xerox.COM
Subject:   Anybody normally send packets with unused options?
   Date: Thu, 27 Mar 86 19:21:55 PST
   From: Murray.pa@Xerox.COM

	...

   The question is, has somebody else already sliced the cake some other
   way? For example, is some system normally sending a few empty option
   bytes to trick the alignment into coming out clean in their
   environment?

   Does anybody have any data on how often non-empty options are used? John
   and I watched some traffic arriving at XEROX.COM, and we didn't see any.

PC/IP (PC/TCP, whatever it's called today) has explicit routines for
allocating "IP" and "UDP" packets.  The routines take two arguments: a
packet size and an IP options length.  The UDP alloc routine calls the
IP alloc routine, which gets a packet buffer and builds a partial IP
header in it, setting the IP header size field and saves space for the
options. IP and UDP based protocols use macros to find the address to
start storing data in the packets; these macros take the options field
into consideration.

The headers get finished up when you call the appropriate send routines.
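
A rough C sketch of that allocation scheme (the names here are
invented, not the actual PC/IP identifiers):

	#include <stdlib.h>

	struct ip_pkt {
	    int hlen;			/* IP header length, options included */
	    unsigned char buf[1];	/* header, then options, then data */
	};

	/* Macro the upper protocols use to find where data starts;
	   it takes the options field into consideration via hlen. */
	#define IP_DATA(p)	((p)->buf + (p)->hlen)

	struct ip_pkt *ip_alloc(int datalen, int optlen)
	{
	    int hlen = 20 + optlen;	/* partial header + room for options */
	    struct ip_pkt *p = malloc(sizeof(struct ip_pkt) + hlen + datalen);

	    if (p != NULL)
	        p->hlen = hlen;		/* header size field set at alloc time */
	    return p;
	}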

That code is based on some Version 6 Unix code done locally a long
time back, maybe even on some ancient BCPL code.
					- john romkey
					  ftp software
-----------[000096][next][prev][last][first]----------------------------------------------------
Date:      Sat, 29 Mar 86 11:09:28 cst
From:      jsq@zotz.CS.UTEXAS.EDU
To:        tcp-ip-RELAY@SRI-NIC.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  Death to IEN 116
Bridge Communications uses IEN116 for their terminal servers.
I have no idea what it would take to get them to change.

-----------[000097][next][prev][last][first]----------------------------------------------------
Date:      Sat, 29 Mar 86 14:13:36 est
From:      martin%sabre@mouton.ARPA (Martin J Levy)
To:        tcp-ip@sri-nic.arpa
Cc:        dennis@sh.csnet
Subject:   Re: The NIC and FTPing host tables

i have a hostname server for 4.2bsd; it's very simple to write. we
use it inside bellcore to distribute the hosts.txt file. if you have
a 4.2/4.3 host running /etc/inetd it's even simpler to write (it can be
a shell script!).
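
For what it's worth, a degenerate server in C is hardly bigger than a
script, since under 4.3's inetd the connected socket is already on
stdin and stdout. This is an illustration only (the path is invented,
and a real RFC 953 server would parse the query instead of ignoring it):

	#include <stdio.h>

	int main()
	{
	    FILE *f = fopen("/usr/local/lib/hosts.txt", "r"); /* local copy */
	    char cmd[128];
	    int c;

	    if (fgets(cmd, sizeof cmd, stdin) == NULL || f == NULL)
	        return 1;		/* read and ignore the query line */
	    while ((c = getc(f)) != EOF)
	        putchar(c);		/* copy the table down the connection */
	    fflush(stdout);
	    return 0;
	}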

one solution for 4.2bsd hosts that are doing ftp's to the NIC would be
to put these lines at the start of the shell script that runs out of
cron:

	# pick this host's IP address out of the ifconfig output and
	# rewrite the dots so "128.96.41.1" becomes "128 + 96 + 41 + 1"
	interface="vv0"
	hostnumber=`/etc/ifconfig $interface|awk '{print $2}'|sed 's/\./ + /g'`
	# sum the octets mod 60 to get a 0-59 minute delay, in seconds
	sleeptime=`expr \\( $hostnumber \\) % 60 \\* 60`
	sleep $sleeptime

this way, even if lots of hosts start the script off at the same time,
their fetches will be spread out over an hour, based on the IP address
of each host.

martin levy.
bellcore
-----------[000098][next][prev][last][first]----------------------------------------------------
Date:      Sat, 29 Mar 86 14:34:38 EST
From:      Rick Adams <rick@seismo.CSS.GOV>
To:        POSTEL@USC-ISIB.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  Bob Knight:
There is not necessarily a correlation between converting to the
domain name system and keeping a current hosts.txt on line.

seismo converted to the domain name system in July (Remember when everyone
was supposed to convert? I believed it). We still keep an up to date
copy of hosts.txt around. Under the domain system there is no
way to do searches throughout the entire name space (e.g.
grep isi hosts.txt for unix fans). Several of our users don't
really know what machine they want. The big offenders are people
who are at ISI or Stanford. They give addresses like "username on the
A machine at isi" which I can figure out to mean username@usc-isi.ARPA.
Another example is user@ai, which eventually translates to user@su-ai.ARPA.

Scanning the hosts.txt file for substring matches is the only way
I can get to these people.

---rick
-----------[000099][next][prev][last][first]----------------------------------------------------
Date:      29-Mar-86 20:45:40-UT
From:      mills@dcn6.arpa
To:        tcp-ip@sri-nic.arpa
Subject:   On TTLs, heros and loops
Phil, Art and folks,

We happen to use a TTL of 30, so that the maximum time a looper can persist is
less than the reassembly timeout. We arrived at values of this order after
seven years of hacking in these swamps, breaking and being broken in wonderful
ways, but I would not heroically defend our particular choice.

The plea of this note is to resist the urge to crank up the TTL in order to
survive legitimate paths involving many gateway hops. First, consider the
issue of a reasonable lower bound. The largest number of hops reported by the
core system in EGP updates is now five, but values even that high probably
indicate GGP is broken and "counting to infinity." The EGP hop count
represents a lower bound on the core portion of the path plus whatever the EGP
peer stuffs into the hop-count field, usually zero. Our swamp can involve four
additional hops for nets, subnets and the like, which is probably not
unreasonable for places like MIT, CMU and Stanford as well. Assuming swamps
like ours at both ends of the core path suggests a path with something over
twelve hops is unlikely but possible. I conclude that 15 is a defensible lower
bound and that 30 is probably adequate until such time as the Internet goes
intergalactic.

Now consider the costs of setting the TTL too high. If the rate of destruction
of IPgrams due to loops is estimated by the incidence of ICMP Time Exceeded
messages, loops don't occur too often. From our experience that rationale is
faulted, since such loops usually result in the consumption of all buffer
resources, including those necessary to forward the ICMP message. What makes
this acute is the fact that the sender begins to retransmit, which inserts
more stuff in the loop. Obviously, it would be desirable that no IPgram could
survive longer than the minimum retransmission interval, but that is clearly
impractical in the present architecture.
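
For reference, a C sketch of the per-hop TTL rule this argument rests
on (the routine names are stand-ins, not any particular gateway's code):

	struct ip { unsigned char ip_ttl; /* other header fields omitted */ };

	/* Hypothetical stand-ins for the gateway's real routines. */
	extern void icmp_time_exceeded(struct ip *p);
	extern void drop(struct ip *p);
	extern void forward(struct ip *p);

	void ip_forward(struct ip *p)
	{
	    if (p->ip_ttl <= 1) {	/* expired: kill the looper */
	        icmp_time_exceeded(p);	/* itself often lost when buffers are gone */
	        drop(p);
	        return;
	    }
	    p->ip_ttl--;		/* at least one tick per hop */
	    forward(p);
	}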

The transmission time of a 576-octet IPgram on a 9600-bps link is on the order
of half a second (576 x 8 bits / 9600 bps = 0.48 s), and at lower speeds you
don't wanna think about it. Even with a TTL of 15, a loop involving such a
link, which is reasonably common, can strain resources for periods longer
than the typical TCP
retransmission timeout. Actually, following formation of the loop, what
usually happens is that during a period of a few minutes intense congestion
sets in until the customers all give up. Then every few minutes somebody,
usually a mail daemon, honks a TCP SYN segment and assumes an unrealistically
low retransmission timeout, which often is enough to trip the system again
into congestive collapse, especially if TTLs much greater than 15-30 are used.
We all know that mailers are exquisitely persistent.

Within the GGP core system we all know that loops can be particularly painful,
since in many scenarios a net bobbing up or down triggers a transient routing
loop, together with a spasm of updates that can last minutes while the
distances "count to infinity." It's a good thing infinity is a small number
(less than ten). Recent observation of our EGP tables indicates an alarming
number of nets, perhaps five at a time, seem to be doing just that. It is not
clear what is causing this; however, EGP gateway designers should realize that
dispersing reachability changes throughout the Internet is painful and slow,
so that internal changes should be filtered before being advertised
externally.

The biggest danger for loop formation may well be subnets and default
gateways. What happens is that somebody glitches a routing table in a gateway
handling the routing for a particular net and set of subnets and then defaults
everything else to the nearest friendly EGP gateway, which doesn't know about
subnets. Then, usually as the result of mismatched host-name/address tables
and routing tables, some innocent honks an IPgram at a host on some subnet not
in the routing table, so that a loop is formed between the subnet gateway and
the EGP gateway. The lesson for subnet gateway designers is acute and clear,
especially in scenarios like the above where two-thirds of the gateways on a
path might well be vulnerable to subnet loops like this.

My conclusion is that the dangers of overstating the TTL far outweigh the
dangers of understating it. I consider values between 15 and 30 to be
appropriate and larger values to be not only ill-advised, but potentially
damaging to the health of the entire community.

Dave
-------
-----------[000100][next][prev][last][first]----------------------------------------------------
Date:      30 Mar 1986 15:02-PST
From:      STJOHNS@SRI-NIC.ARPA
To:        dennis@CSNET-SH.ARPA
Cc:        KNIGHT@SRI-NIC.ARPA, tcp-ip@SRI-NIC.ARPA, cf-staff@SRI-NIC.ARPA, feinler@SRI-NIC.ARPA
Subject:   Re: The NIC and FTPing host tables
In  answer  to the second question "Does the hostname server...",
No.  They are both TCP connections and they both cause a  job  to
be started to serve them.

Mike
-----------[000101][next][prev][last][first]----------------------------------------------------
Date:      Sun, 30 Mar 86 23:48:20 pst
From:      Earl Craighill <ejc%sri-gemini@sri-tsc>
To:        Andrew Malis <malis@bbnccs.ARPA>
Cc:        tcp-ip@sri-nic.ARPA, POSTEL@usc-isib.ARPA
Subject:   Re: major changes to 1822

There is some data on ARPAnet performance for real-time traffic using
type-3 packets.  We conducted exercises (experiments on the internet
are VERY difficult to set up or interpret) using packet speech over a
variety of networks (ARPAnet, WB SATnet, Atlantic SATnet, PRnet,
RINGnets, LEXnet) in June of 1983.  The sites were NTA in Norway,
Lincoln Laboratory in Lexington, Mass.  and SRI in California.  We
estimated the likely range of delays in the ARPAnet for two segments--
SRI to LL, and LL to CSS.  The maximum time for SRI-LL was 2200 ms.
The minimum route at that time was 11 hops, so the minimum time
(calculated, not measured) was about 300 ms.  The LL-CSS route (min.
path 3 hops) was 90 to 550 ms.  I won't mention the delays on the other
nets, even the satellite nets look better than the DARPAnet.  With
increased traffic, these numbers have to increase.  Also, high thruput
wasn't the difficulty since we used low-rate (2400 baud) coded speech.

This data indicates that type-3 wasn't great for real-time traffic.  Of
course, the ARPAnet wasn't designed for real-time traffic.  But the new
design shouldn't assume that datagrams are the "right" method for low
delay service.  Further, flow-control algorithms may protect the
network as a whole, but won't help low-delay service, especially with
the anticipated growth in traffic (however, we did make effective use
of feedback from the PRnet to throttle back our offered rate).  What is
needed is flow-enhancement, say, some sensible use of priority in PSN
queue management.  Our experiments on the PRnet indicate that some
portion of "preferred" traffic can be supported without bringing the
network to its knees.  

I would encourage more experimentation, especially at the subnet level.
Our measures were very coarse and could not identify reasons for the
high delays (IMPs throwing away packets, long holding times in queues,
rerouting because of congestion, ??).  The current picture does not
look good.  Hopefully, some experimental data may lead to a different
transport mechanism that will provide low-delay service.

Earl
-----------[000102][next][prev][last][first]----------------------------------------------------
Date:      Mon, 31 Mar 86 07:19 EST
From:      JOHNSON%northeastern.csnet@CSNET-RELAY.ARPA
To:        tcp-ip@sri-nic.ARPA, johnson%northeastern.csnet@CSNET-RELAY.ARPA
Subject:   Concerning unused options fields in IP packet headers.
     I'm just learning tcp/ip myself.  In reading RFC 791 (which
may have been replaced by now, I don't know yet) I found the following
general statement in section 3.2:

     "In general an [IP] implementation must be conservative in its 
sending behavior, and liberal in its receiving behavior."

     Concerning assumptions about IP packets, this seems like a VERY good
idea.  To protect one's own system, one should hope for good input but
expect problems from incoming packets.  In general I've found this to be
the best way with most software.  Yes it's more work and on some systems
it can be very difficult.  However, covering your tail pointer is
usually a good idea.  It can be very embarrassing to have your wonderful,
totally cosmic piece of software crash in public.  It's even more
embarrassing when a real-time system like tcp/ip goes down because you
drop packets on the floor and then have to get a broom and a bucket to
clean things up.  At the speeds of some parts of the network, the mess
can get really big really fast. 

     I don't know about anyone else but I don't think I'd assume
anything about what options look like in a packet.  We all know what
happens when assumptions are made. 

Chris Johnson
Sr System Programmer
Northeastern University
-----------[000103][next][prev][last][first]----------------------------------------------------
Date:      Mon, 31 Mar 86  8:35:20 EST
From:      Andrew Malis <malis@bbnccs.ARPA>
To:        Earl Craighill <ejc%sri-gemini@sri-tsc.arpa>
Cc:        Andrew Malis <malis@bbnccs.arpa>, tcp-ip@sri-nic.arpa, POSTEL@usc-isib.arpa
Subject:   Re: major changes to 1822
Earl,

Thanks for your informative message.  When we started working on
the new End-to-End, providing a low-delay service was not one of
our explicit goals.  However, improving the priority queuing
characteristics inside the PSN itself was one our goals, and one
that we have paid much attention to.  As a result, provided that
the hosts use the different priority levels intelligently,
high-priority traffic should experience a lower delay through the
new EE than low-priority traffic.

Of course, the upcoming release of the PSN only has these changes
in the EE section of the code.  Upgrading the store-and-forward
to also use the updated priority queuing is under way, and
scheduled for a later release of the PSN.

Andy
-----------[000104][next][prev][last][first]----------------------------------------------------
Date:      Mon, 31 Mar 86 09:46 EST
From:      David C. Plummer <DCP@SCRC-QUABBIN.ARPA>
To:        Ra <root%bostonu.csnet@CSNET-RELAY.ARPA>, KNIGHT@SRI-NIC.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        cf-staff@SRI-NIC.ARPA, feinler@SRI-NIC.ARPA, stjohns@SRI-NIC.ARPA
Subject:   Re:  The NIC and FTPing host tables
    Date: Fri, 28 Mar 86 15:03:04 EST
    From: Ra <root%bostonu.csnet@CSNET-RELAY.ARPA>


    I'm new to some of this, but wouldn't the obvious thing to do be
    to put up a difference file?  (Perhaps with a hook in the name, so
    that the difference from, e.g., version 701 to the current one is
    called something like HDIFF701.TXT; that way the string to FTP could
    be built on the fly if you knew your current version.)  The difference
    could then be patched in at the local host and away you go, quickly.

    Obviously that implies a few applications, but I think they would be
    basically trivial; analogues already exist (e.g. UNIX's diff and patch).
    And for those who slavishly connect anyhow, I suppose a format for
    a null patch file could be created. Or is this all a moot issue?

It isn't moot, but it doesn't address a larger issue: the general need
for secondary servers.  I agree with the initial message that spawned
this conversation, and I think network databases that have hundreds to
thousands of potential clients should have secondary servers.  For that
matter, it would be nice if some secondary server mechanism were in
place.  The host table is but one database that needs secondary servers.
The domain system needs them as well.  [It may already for all I know;
our current implementation tries to connect to sri-nic in order to find
out that BBNA is the resolver for .BBN.COM.  Maybe our implementation
isn't mature enough.]  There are at least TWO big wins to secondary
servers: (1) off loading the primary, (2) data availability when the
primary is down (or more generally, multiple data availability).

-----------[000105][next][prev][last][first]----------------------------------------------------
Date:      31 Mar 1986 12:49:52 PST
From:      POSTEL@USC-ISIB.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   host.txt vs. domain servers

Rick Adams:

I agree that it is nice to have a way to figure out the real address
when all you've got to go on is a nickname or a partial name.  But,
one of these days the host.txt file is going to die.  So one of the
things that needs to be done is to educate users to realize that when
they tell other people to send them mail the have to give the full
official name of their host.  This is one motivation for using the
full official name of the host on everything the host does.  It gets
the users used to seeing it, and using it.  Local nicknames can be
nice but they do have some disadvantages.  So while using hosts.txt
for figuring out what names should have been used when you get stuck
with a nickname is a neat trick, also realize that it won't work
forever.

--jon.

-------
-----------[000106][next][prev][last][first]----------------------------------------------------
Date:      Mon 31 Mar 86 12:37:50-EST
From:      "J. Noel Chiappa" <JNC@XX.LCS.MIT.EDU>
To:        Murray.pa@XEROX.COM, TCP-IP@SRI-NIC.ARPA
Cc:        JLarson.pa@XEROX.COM, JNC@XX.LCS.MIT.EDU
Subject:   Re: Anybody normally send packets with unused options?
	I guess I have a range of reactions to this. First, if you are
really that concerned about performance, there are tons of ways to
lose, many of them in how you structure the software and the
interchange of data between the network layer and the application.
Space does not permit expanding on this, but see RFC817, in the IP
Implementors Guide (*everyone* building anything should read the set
of memos written by Dave Clark in that volume). I would think that
copying around a few bytes of options is not a big concern.
	Second, although it may be true now that there usually aren't
options, it may not be true in the future, so I wouldn't plan your
performance around that. Third, I wouldn't make the data part of a
record anyway, but consider the data and options as separate
(optional) blocks. Use the pointers and grin.

	Noel
-------
-----------[000107][next][prev][last][first]----------------------------------------------------
Date:      Mon 31 Mar 86 12:47:24-EST
From:      "J. Noel Chiappa" <JNC@XX.LCS.MIT.EDU>
To:        POSTEL@USC-ISIB.ARPA, Rudy.Nedved@A.CS.CMU.EDU
Cc:        tcp-ip@SRI-NIC.ARPA, JNC@XX.LCS.MIT.EDU
Subject:   Re: Death to IEN 116
	Jon, I think what he was trying to get at was that the full
blown Domain Name thing is a big mechanism, and perhaps what would be
better for small workstations such as PC's is a thing where the PC can
say to some sort of local translation server 'what's the IP address of
XX.LCS.MIT.EDU', instead of doing it all itself. Such an approach
would have the added benefit of less traffic, since there'd be one big
cache instead of lots of little ones. For talking to such a translation
server, a protocol such as IEN116 is perfect. I think some people at
MIT do this already.

	Noel
-------
-----------[000108][next][prev][last][first]----------------------------------------------------
Date:      Mon 31 Mar 86 18:09:16-PST
From:      Vivian Neou <Vivian@SRI-NIC.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Digested messages
It has come to my attention that, due to a problem with the NIC mailer,
some of the recent TCP-IP messages may not have made it out to most of
the list.  I've taken those messages and digestified them.  My apologies
to those of you who already got the messages, but to ensure that everyone
does receive them, I think the safest course is just to remail the whole
bundle.

Vivian
----------

END OF DOCUMENT