The 'Security Digest' Archives (TM)

Archive: About | Browse | Search | Contributions | Feedback
Site: Help | Index | Search | Contact | Notices | Changes

ARCHIVE: TCP-IP Distribution List - Archives (1986)
DOCUMENT: TCP-IP Distribution List for November 1986 (116 messages, 83840 bytes)
SOURCE: http://securitydigest.org/exec/display?f=tcp-ip/archive/1986/11.txt&t=text/plain
NOTICE: securitydigest.org recognises the rights of all third-party works.

START OF DOCUMENT

-----------[000000][next][prev][last][first]----------------------------------------------------
Date:      1 Nov 1986 09:38-EST
From:      CERF@A.ISI.EDU
To:        braden@VENERA.ISI.EDU
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Setting Initial Round-trip time
Bob,

harm comes from picking, for instance, values that are too low, leading
to unnecessary retransmission. No information probably leads to picking
some initial (constant) value - this is sometimes like having a broken
clock which is right twice a day rather than a clock which simply runs 
fast or slow and is never right. I will let you pick which analogy to apply!

Vint
-----------[000001][next][prev][last][first]----------------------------------------------------
Date:      Sat, 1-Nov-86 10:31:54 EST
From:      brescia@CCV.BBN.COM (Mike Brescia)
To:        mod.protocols.tcp-ip
Subject:   Re: Setting Initial Round-trip time

Re: analogies

     ... like having a broken clock which is right twice a day rather than a
     clock which simply runs fast or slow and is never right. I will let you
     pick which analogy to apply!

The analogy I wish to apply would be that neither a broken clock nor a
miscalibrated clock will ever be right if I am trying to count apples or
oranges.

I'd like to attack the assumption that knowing the round trip time will
compensate for the fact that packets are allowed to be dropped in the system.

(Mom & Apple Pie division)

In the current IP model, a packet may be delayed (10 to 100 seconds and more
have been reported), or dropped because of a transmission failure or
congestion at some gateway or packet switch.  If a packet is delayed, there
should be no retransmission because the second packet will only be delayed
behind the first.  If it is dropped due to transmission failure, the
retransmission should be as soon as possible, so that the end-point hosts see
a minimum disruption.  If it is dropped due to congestion, the retransmission
should be only as soon as you know the packet can get through or around the
congestion, otherwise you are only exacerbating it.

If you have arrived at a reasonable round trip time, and you have a packet
which has not been acknowledged after (some factor of) that time, can you
deduce which of the three things has happened to the packet?  If you make the
wrong decision, you can make things worse for yourself or the community of
users.
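One conservative response to this ambiguity, not proposed in this message but
widely adopted later, is exponential backoff: retransmit, but double the wait
after each timeout, so a transmission-failure loss is still repaired fairly
quickly while a congestion loss is retried ever more gently. A minimal sketch,
with arbitrary illustrative constants:

```python
# Hedged illustration: exponential backoff as one answer to the
# ambiguity above.  A sender cannot tell delay, line loss, and
# congestion loss apart, so it doubles its wait on every timeout.
# The base timeout, retry count, and cap are arbitrary examples.

def backoff_intervals(base_timeout, retries, cap=64.0):
    """Return successive retransmission waits, doubling up to a cap."""
    waits = []
    t = base_timeout
    for _ in range(retries):
        waits.append(min(t, cap))
        t *= 2
    return waits

print(backoff_intervals(1.0, 5))   # [1.0, 2.0, 4.0, 8.0, 16.0]
```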


(Blue Sky division)

If the Internet could provide a better guarantee of delivery, as once the
Arpanet did, retransmission would not need to be so widespread, and a good
measure of round trip time would not be so much of a panic.  The Internet
model would need to be extended so that the effects of transmission losses and
congestion could be controlled.


    Mike

-----------[000003][next][prev][last][first]----------------------------------------------------
Date:      Sat, 1 Nov 86 17:38:58 pst
From:      John B. Nagle <jbn@glacier.stanford.edu>
To:        TCP-IP@NIC
Subject:   Initial estimate of round-trip time
      In general, it is better to overestimate the round-trip time and
wait a bit longer than underestimate and cause congestion.  Very short
initial RTT estimates have caused considerable trouble in the past;
at various times, TOPS-20, 4.2BSD, and Symbolics TCP implementations
have had unreasonably short initial guesses.  And of course, if you
consistently use a RTT estimate smaller than the actual RTT, you
will fill up the link with multiple copies of the packet and cause
congestion collapse.  So think big.  I would argue for 5 seconds
as a good first guess.
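The "think big" policy amounts to seeding a smoothed-RTT estimator with a
large initial value and letting real measurements pull it down. A minimal
sketch, assuming the RFC 793-style smoothing constants and the 5-second first
guess suggested above (the sample values are hypothetical):

```python
# Illustrative sketch of an RFC 793-style retransmission timer,
# seeded conservatively per the suggestion above.  ALPHA and BETA
# are the smoothing gain and safety multiplier from RFC 793.

class RttEstimator:
    ALPHA = 0.875   # smoothing gain for the running average
    BETA = 2.0      # safety multiplier applied to the estimate

    def __init__(self, initial_rtt=5.0):
        self.srtt = initial_rtt  # think big: start high, adapt down

    def sample(self, measured_rtt):
        """Fold one measured round-trip sample into the estimate."""
        self.srtt = self.ALPHA * self.srtt + (1 - self.ALPHA) * measured_rtt

    def rto(self):
        """Retransmission timeout: a multiple of the smoothed RTT."""
        return self.BETA * self.srtt

est = RttEstimator()
for rtt in [0.8, 0.9, 1.1, 0.7]:   # hypothetical measured samples
    est.sample(rtt)
# The estimate decays from 5 s toward the ~1 s measurements, so
# early timeouts stay long and no spurious retransmission occurs.
```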

      Unfortunately, there are problems that lead to a strong desire
to use a shorter interval.  Most of them reflect bugs or weak design
in the systems involved, but they are nonetheless real.   Here
are a few.

      There are some implementations that lose the first packet at
the link level due to a simple-minded implementation of ARP.  If
both source and destination have this problem, the first two packets
may be lost.  If source and destination are both on LANs interconnected
by gateways, everybody uses ARP, and nobody has the relevant entries
cached yet, it may be necessary to send a TCP SYN packet FIVE (5)
times before a reply makes it back to the source host.  And this is
in the absence of any packet loss for other reasons.

      There are some TCP implementations that lose the first SYN
packet received; the first SYN packet triggers the firing off of a
server task, but the server task doesn't get the SYN packet and
has to wait for a duplicate of it.  Again, this is a bad TCP
implementation, but such exist.

      And then, of course, there is packet loss through congestion,
about which I have written before.  The previous comment about an
observed 90% packet loss rate makes it clear that the problems 
have become more severe in the past few months.  Loss rates like
that can only come from vast overloads from badly-behaved implementations.

      The combination of all these problems does indeed tempt one to
use a short RTT in hopes of improving one's own performance at the
expense of everybody else.  But resist the temptation.  It won't
fix the problem, which is elsewhere, and it will make the congestion
situation worse.

      Incidentally, I see nothing wrong with loading up a network to
100% link utilization.  If all the proper strategies are in, this should
work just fine.  We did this routinely at Ford Aerospace, with file transfers
running in the background sopping up all the idle line time while TELNET
sessions continued to receive adequate service.  It's no worse than 
running a computer with a mixed batch/time sharing load and 100% CPU
utilization.  (UNIX users may feel otherwise, but UNIX has traditionally
had a weak CPU dispatcher, being designed for a pure time-sharing load.)
The problem is not legitimate load, it's junk traffic due to bad
implementations.  We know this because if everybody did it right
the net would slow down more or less linearly with load, instead of
going into a state of semi-collapse.

      If some node is dropping 90% of its packets, somebody should 
be examining the dropped packets to find out who is sending them.
The party responsible should be spoken to.  Or disconnected. 
A little logging, some analysis, and some hard-nosed management
can cure this problem.  For most implementations, the fixes exist.
They usually just need to be installed.  There are still a lot of
stock 4.2BSD systems out there blithering away, especially from
vendors that don't track Berkeley too closely.

      As I pointed out in RFC970 (which will appear in IEEE Trans. on
Data Communications early next year, by the way), even a simple
scheduling algorithm in the gateways should alleviate this problem,
and prevent one bad host from effectively bringing the net down.

      Good luck, everybody.

					John Nagle
-----------[000005][next][prev][last][first]----------------------------------------------------
Date:      Sun, 2-Nov-86 14:56:02 EST
From:      MRC@SIMTEL20.ARPA (Mark Crispin)
To:        mod.protocols.tcp-ip
Subject:   overly short RTT's

John, this is all well and good, but under the present system there
is a slight reward given to anti-social sites which blat out
retransmissions too quickly.  The sites which do not have excessively
short RTT's find themselves jammed out from access to the gateways.

Perhaps what we need to do is make the gateways be a little smarter
than just packet forwarders (yes, my gateway-building friends, I know
that I am being excessively simplistic, but hear me out).  Perhaps a
gateway should know enough about TCP to be able to detect retransmissions
of packets already in their queues and toss them out.
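The behavior suggested above can be sketched as follows. The packet fields and
queue structure are assumptions made for the illustration, and, as the message
itself concedes, a real gateway would be far more complicated (it would have
to parse IP and TCP headers, for a start):

```python
# Sketch of the gateway idea above: before queueing a TCP segment,
# check whether a copy (same connection and sequence number) is
# already waiting, and drop the duplicate.  The dict field names
# are invented for this illustration.

from collections import deque

class Gateway:
    def __init__(self):
        self.queue = deque()

    def enqueue(self, pkt):
        """pkt: dict with src, dst, sport, dport, seq keys."""
        key = self._key(pkt)
        if any(self._key(p) == key for p in self.queue):
            return False          # retransmission of a queued packet: drop
        self.queue.append(pkt)
        return True

    @staticmethod
    def _key(p):
        return (p["src"], p["dst"], p["sport"], p["dport"], p["seq"])

gw = Gateway()
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1023, "dport": 23, "seq": 100}
gw.enqueue(pkt)        # queued
gw.enqueue(dict(pkt))  # identical retransmission: dropped
```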
-------

-----------[000006][next][prev][last][first]----------------------------------------------------
Date:      Sun, 2-Nov-86 16:20:09 EST
From:      steve@BRL.ARPA (Stephen Wolff)
To:        mod.protocols.tcp-ip
Subject:   Re:  overly short RTT's

Umm.  There are even now other things than TCP lurking within those
innocent IP wrappers, and their number will increase as various children
of the ISO family are adopted.  Should the gateway know them all?  -s

-----------[000008][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3-Nov-86 05:57:36 EST
From:      cavallar@CSVAX.CALTECH.EDU
To:        mod.protocols.tcp-ip
Subject:   REMOVAL

Please remove the following address from the mailing list.

	jeff@vaxa.isi.edu

Thank you.

-----------[000009][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3-Nov-86 06:47:28 EST
From:      hedrick@TOPAZ.RUTGERS.EDU (Charles Hedrick)
To:        mod.protocols.tcp-ip
Subject:   Re:  IP on DECNET

Thanks.  I had in mind doing it as a device driver that pretended to
be point to point links.  It's hard to see why this would have to know
anything about DECnet.  As long as the bits get there, I find it hard
to see how DECnet could know the difference.  The only question I see
is whether the interface between my device driver and the DEUNA device
driver can be made to work, particularly in the presence of the
Wollongong code.  My suspicion is that it might work only when no
other IP was happening for that interface.  (On a number of machines,
we are now using separate interfaces for IP and DECnet.)  The problem,
of course, is figuring out when an IP or ARP packet gets handed to
Wollongong and when to my hack.  I intentionally suggested using point
to point links because there is no structure to them.  If I tried to
emulate DECnet support of Ethernet, I would have to handle multicasts
and make the system think it saw an area router.  I don't know about
the X.25 support, but assume there are protocols involved to open
connection that would have to be interpreted.

I am to meet with someone from DEC this week on this issue.
Apparently at least someone is interested in looking into it.  I find
that DECnet is currently my biggest networking headache.  I can get
good TCP implementations for every machine other than the VAX.  For
the VAX we have things that are either incomplete or the Wollongong
thing which is too expensive for large-scale use and still doesn't
handle mail right.  I know a number of people who think DEC wants it
that way.  That's always hard to judge.  But the only convenient way
to build a campus network that will pass DECnet is to use level 2
routers.  We are very reluctant to do this because of concerns about
Ethernet meltdown being propagated around the campus.  Stevens is
building a campus network with LANbridges.  But the person responsible
didn't know about the problem with Ethernet meltdown.  They referred
me to the DEC person who is doing their design, and he didn't seem to
care.  I suggested maybe a simple filter to prevent broadcasts from
passing when the packet type is IP.  He would rather die than suggest
that DEC should do anything based on packet type, since that is
non-ISO.

The problem with depending upon migration to ISO is that it looks like
the last piece of ISO to fall into place will be the network layer,
and that is precisely what I need.  In fact, the only plausible
strategy for implementing ISO that I have heard is to layer it on top
of IP.

-----------[000010][next][prev][last][first]----------------------------------------------------
Date:      3 Nov 1986 13:20-PST
From:      STJOHNS@SRI-NIC.ARPA
To:        rick@SEISMO.CSS.GOV
Cc:        JNC@XX.LCS.MIT.EDU, tcp-ip@SRI-NIC.ARPA, egp-people@CCV.BBN.COM
Subject:   Re: Poor performance related to egp?
Rick,

        Let me try to explain.  The problem is not that GGP needs
tuning, it's that it needs replacing.  There is only a limited
amount of money in the pot for gateway algorithms and DARPA chose
to invest it in the Butterfly.  The DDN has made the decision  to
also  transition  the  mailbridge gateways to the Butterfly.  All
this takes time, money, and a lot of programming effort.  And in the
government, you have to schedule each of them years in advance.
Sorry, but that is the way it is.

        I look forward to the day GGP goes away, but I'm afraid
you'll have to put up with the weird interactions with EGP until
then.

Mike StJohns
-----------[000011][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3-Nov-86 12:11:45 EST
From:      braden@ISI.EDU (Bob Braden)
To:        mod.protocols.tcp-ip
Subject:   Re: Setting Initial Round-trip time

Van,

Your message is wonderful.  After years of our sitting around making
Aristotelian speculations on this mailing list, you actually took some
data, with fascinating results.  Can we hope that you will write this
data up?  It is hard to see how we can make progress in this game unless
results like yours get disseminated for peer comment and education.

From my viewpoint, an RFC would be the right level... it would not have
the formality or publication delays of a "real" paper, and would make this
data and your ideas available as soon as possible.

If you don't have time to write it up, could we persuade you to come to
an appropriate task force meeting and present it?

Bob Braden

-----------[000012][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3 Nov 86 16:03:46 PST
From:      minshall%violet.Berkeley.EDU@berkeley.edu
To:        tcp-ip@sri-nic.arpa
Subject:   New version of tn3270 available.
A new version of tn3270, a program which emulates an IBM 3270 over the
ethernet, is available for anonymous ftp.  The tn3270 tar file is located
on host arpa.berkeley.edu, in directory pub, in file tn3270tar.  Unix sites
should probably retrieve this file in binary mode.

Following is the announcement of the new features, and bug fixes, in this
version of tn3270.  Thanks for much of the work to Bob Braden, Alan
Crosswell, Cliff Frost, and Steve Jacobson.

Yours truly,
Greg Minshall

----------

New versions of the tn3270 and mset commands, used to logon to CMS from
unix, have been installed in /usr/new on all computer center Vaxen and
Suns.

Significant changes to tn3270 are:

o   The original version of tn3270 emulated an IBM 3277 terminal.  This is
    an out-of-date terminal, and is no longer supported in some IBM
    environments.  The new version of tn3270 emulates an IBM 3278
    terminal, which is a more recent IBM terminal.  The new version of
    tn3270 will emulate different models of the IBM 3278, depending
    on the size of the user's terminal.  The available terminal models
    and screen sizes are documented in "man new tn3270".

o   This version of tn3270 (and mset) allows the user to send an EBCDIC cent
    sign to the CMS host with the new map3270 "centsign" entry.  In addition,
    TEST REQUEST and CURSOR SELECT (IBM 3270 functions) now work reliably.

o   This version of tn3270 handles "autoskip" fields correctly.

o   This version of tn3270 will work even if the terminal (or window) on
    which tn3270 is running has a line width of more than 80 columns.

o   Clearing all local tab stops now clears "home" and "right margin"
    (which is the way the Series/1 works).

o   Tn3270 and mset now use an environment variable "KEYBD", if it
    exists, to decide which entry in /etc/map3270 to use for the
    user's terminal.  If "KEYBD" is not defined, then "TERM" is used.

o   A bug in the implementation of the 3270 order "Repeat to Address" (RA)
    has been fixed.

o   Mset now has new options "-picky" and "-shell"; see "man new mset".

o   Tn3270, when terminating (or going into command mode) now sends
    the termcap :ve:, :ke:, and :te: (if they exist) to the terminal.
    In addition, if the screen was in standout mode, this mode is
    cleared before terminating (or going into command mode).  See
    termcap(5).

o   Tn3270 now attempts to use the termcap :md: and :me: strings for
    highlighting instead of :so: and :se: (:so: and :se: are still used
    if :md: and :me: do not exist).  See termcap(5).

o   Various bugs giving rise to infinite loops dealing with "unformatted"
    screens have been fixed.

o   The base telnet portion of tn3270 (see telnet(1)) has been upgraded
    to the 4.3 telnet.  This has fixed many bugs where tn3270, in telnet
    mode, violated the ARPAnet TELNET specification.  In addition, the
    command structure for tn3270 is that of the 4.3 telnet, rather than
    the 4.2 telnet (which was the command structure for the older versions
    of tn3270).

o   A new command, transcom, has been added.  This allows users to write
    (somewhat intricate) programs which can communicate, in ASCII, with
    programs in the IBM host that talk "transparent" mode.  This may be
    useful for communicating graphics data to the terminal.  For more
    information on this feature, please see "man new tn3270" (and, on the
    Sun systems, "man tk3270").

------------

This is version 2 of tn3270.

Files (and directories):

ANNOUNCE	A description of the new functions and fixes
		in this version of tn3270.
README		This file.
curses		The 4.3 curses package, which allows PUTCHAR to
		be defined (needed only if NOT43 is defined; see
		tn3270/makefile) for the sun and vax computers.  These
		do NOT include the source for curses, just two .a files
		(in curses/sun and curses/vax).
include		Contains arpa/telnet.h (for 4.2 sites).  In addition,
		include/curses.h is the include file that MUST be used
		if the libcurses files in curses/vax/libcurses.a
		or curses/sun/libcurses.a are going to be used.
man		Contains man pages in man/man1 and man/man5.
telnet.c	Provides telnet protocol support for tn3270.  This
		is essentially the 4.3 telnet.c; for tn3270 use,
		this must be compiled with -DTN3270.
tn3270		The actual code for tn3270.  Note that the names of
		many of the files have changed.  In addition,
		a new "make" target exists named "prt3270".  This
		will generate a small program which interprets
		3270 data streams (as printed out by "toggle
		netdata"); hopefully this won't be needed by
		many people, but will be useful to those in need.
transcom	An example of a transcom command driver; for using
		tektool on Suns.  This directory includes the man
		page entry for the command driver (tk3270).


Thanks go to Bob Braden, now at ISI, for his help in making tn3270
speak correctly in a 3278 environment; Alan Crosswell, at Columbia,
for working on alternate screen sizes; Cliff Frost, at Berkeley, for
help in the MS-DOS area; Steve Jacobson, at Berkeley, for lots
of work in the area of mset and transcom; and Jane Wolff, at Berkeley,
for helping keep the documentation intelligible to the user community.

There are comments in the code which might lead the casual reader
to think that possibly an MS-DOS version of tn3270 exists.  This
is, in fact, true.  We run with the Ungermann-Bass boards (which
implement TCP/IP on board).  We plan on distributing the
entire "tn3270 on a PC" package at some point, but packaging is a problem.
Not only does one need the tn3270 source (which is what you have here),
but one needs:  the right C compiler (we use the MetaWare compiler),
4.2 socket emulation code (which we wrote), minimal curses (which,
again, we wrote - but would be useless outside of a tn3270 environment),
and some C library stuff (as in (3N), mostly).  People interested in
the MS-DOS version should probably contact me directly.

Greg Minshall
<minshall@berkeley.edu>
415-642-0530
-----------[000013][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3-Nov-86 14:59:10 EST
From:      jcp@BRL.ARPA (Joe Pistritto, JHU|mike)
To:        mod.protocols.tcp-ip
Subject:   Re:  overly short RTT's


	No, it is clearly inappropriate for Gateways to assume that TCP
is being transmitted, or to have knowledge of everything in them.  However,
(and I agree that this greatly complicates building gateways), I think the
gateways SHOULD have some knowledge of what the 'real' bandwidth available
into another gateway (and hopefully beyond that gateway to the final
destination) is, and try actively not to exceed it.  It seems that ICMP
source quenches when appropriate would greatly alleviate the problem.
(assuming of course that all implementations reacted intelligently to
a source quench....)
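A sketch of one "intelligent" reaction to an ICMP source quench, in the spirit
of the parenthetical above: cut the sending rate sharply when a quench
arrives, and recover it gradually as acknowledgments flow. The rate model and
constants are illustrative assumptions, not any implementation's actual
behavior:

```python
# Hedged sketch: a sender that halves its rate on each ICMP source
# quench and creeps back up afterwards.  All numbers are examples.

class QuenchAwareSender:
    def __init__(self, rate_pps=100.0, floor=1.0):
        self.rate = rate_pps     # packets per second
        self.floor = floor       # never stop entirely

    def on_source_quench(self):
        self.rate = max(self.floor, self.rate / 2)   # back off sharply

    def on_ack_interval(self):
        self.rate += 1.0                             # recover gradually

s = QuenchAwareSender()
s.on_source_quench()
s.on_source_quench()
print(s.rate)   # 25.0
```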

						-jcp-

-----------[000014][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3-Nov-86 15:00:29 EST
From:      rick@SEISMO.CSS.GOV (Rick Adams)
To:        mod.protocols.tcp-ip
Subject:   Re: Poor performance related to egp?


Let me see if I have this correct. Based on the letters I have received:

	There is a major problem with GGP.
	This has been known for a long time.
	There is no plan to fix it in the foreseeable future.
	This problem "at most" doubles the load on the arpanet.

Can anyone explain why this doesn't warrant immediate attention?

If someone told me there was a kernel bug that "at most" wasted 50% of
my CPU, I'd be quite concerned about it. I wouldn't wait for the next
hardware release and hope it was fixed then.

Observation indicates a 10 to 1 degradation in performance, which is
not what I would expect from doubling the load.

There seems to be some belief that the BBN Butterfly will be the salvation
of the world. I hope the Butterfly being considered is a lot different from
the Butterfly sitting about 25 feet from me (css-gateway 10.2.0.25). This
particular Butterfly is one of the most unreliable things I have
ever seen. It often needs to be MANUALLY (i.e. they call me up) rebooted
several times per day. 

Waiting for a solution based on the Butterfly seems quite foolish.
Especially when people are forced to install their own leased lines because
the ARPANET performance is unacceptable. (We already have 2. I'm sure there
are many others. I find it especially ironic that our DARPA project manager
cannot use the ARPANET to access our machine (unacceptable performance), but
has to use a leased line.)

---rick

-----------[000015][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3-Nov-86 15:06:49 EST
From:      BILLW@SCORE.STANFORD.EDU (William "Chops" Westfield)
To:        mod.protocols.tcp-ip
Subject:   determining RTTs

how about a couple of TCP options:

	This is retransmission # n
	This contains an ACK for retransmission #n
or
	Timestamp
	This is an ACK for packet containing TIMESTAMP.

This would enable RTT's to be determined exactly.
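The timestamp variant above can be sketched in a few lines: the sender stamps
each (re)transmission with its clock, the receiver echoes the stamp in the
ACK, and the difference is an unambiguous RTT sample even for retransmitted
segments, since each copy carries its own, later stamp. Units and values here
are illustrative:

```python
# Toy illustration of the timestamp option idea above.  The sender
# puts its clock in each segment; the ACK echoes it back; RTT is
# simply "now" minus the echoed stamp.  Millisecond integers are an
# assumed encoding for the example.

def rtt_from_echo(now_ms, echoed_timestamp_ms):
    """Exact round-trip time for the segment that carried the stamp."""
    return now_ms - echoed_timestamp_ms

send_ms = 1_000_000        # clock when the segment left
ack_ms = 1_000_350         # clock when its ACK came back
print(rtt_from_echo(ack_ms, send_ms))   # 350
```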

Of course, it's rather late to be adding things to the TCP spec.  Sigh.

billW
-------

-----------[000016][next][prev][last][first]----------------------------------------------------
Date:      03 Nov 86 18:07:28 EST (Mon)
From:      Robert Hinden <hinden@ccv.bbn.com>
To:        Rick Adams <rick@seismo.css.gov>
Cc:        JNC@xx.lcs.mit.edu, tcp-ip@sri-nic.ARPA, egp-people@ccv.bbn.com, hinden@ccv.bbn.com
Subject:   Re: Poor performance related to egp?
Rick,

You raise a number of issues which I would like to respond to.
Firstly, while everyone has been aware for some time that GGP does
not do a good job in regard to exterior EGP routes and there is no
plan to fix GGP, I disagree that it doubles the overall load on the
Arpanet.  It certainly does cause some traffic which is forwarded
through a core gateway that is destined to a network behind an
external EGP gateway to get an extra hop.  Also, as you point out,
traffic that originates behind an EGP gateway and is destined to an
exterior network will also get an extra hop.  These extra
hops do not double the overall load on the Arpanet.  If one were to
count the traffic from all of the cases which do not cause an extra
hop (e.g. host to host, core gateway to core gateway, core gateway to
host, host to core gateway, host to exterior gateway, exterior
gateway to host, etc.) I suspect that over 80% of all Arpanet traffic
could be accounted for.

The problem with the Arpanet, in my opinion, is that it is
underconfigured for the traffic that is presented to it.  Getting rid of
all extra hops will not fix the Arpanet.  There needs to be more
capacity (trunks and IMP's).

The extra hops, while not optimal, aren't the worst thing in the
world.  I think most people would agree that it was more important to
make the core gateways handle more networks than fix the GGP extra
hop problem.  Also note that as the Internet grows, it might change
from the current flat routing model to a hierarchical model.  This
might cause all traffic to get an extra hop or two.

The Butterfly Gateway located at CSS, which connects the Arpanet to
Satnet, has been the least reliable of all of the Butterfly Gateways,
sometimes restarting several times a day.  We installed new software
last Friday which should fix these problems and it has been up since
then without any restarts.

Bob
-----------[000017][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3 Nov 86 22:32:35 EST
From:      garry@tcgould.tn.cornell.edu (Garry Wiegand)
To:        arpa.tcp-ip
Subject:   mysterious ftp
I am not a network wizard, and this may not be the right group to post
to, but I can't find a better one. Apologies in advance.

I am seeing some ftp behaviour that mystifies me. To be specific, when
I tell ftp to "get" a file, I normally see these messages:

	ftp> get foo
	200 PORT command OK
	125 File transfer started correctly
	226 File transfer completed OK
	xx bytes transferred in x.xx seconds (xx Kbytes/s)
	ftp>

But occasionally I see:

	ftp> get foo
	200 PORT command OK
	125 File transfer started correctly
	226 File transfer completed OK
	ftp>

Ie, no "bytes transferred" message. When I exit and look for "foo", 
it's not there. At all. This can be especially annoying when I'm
"mget"ing a large number of files, and some random number of them
do not arrive.

I am working with a 4.3 BSD Gould, a 4.1 (Wollongong 4.2) BSD Vax, and
an unknown-vintage Silicon Graphics Iris. All are connected to my local
ethernet.

Help ?

garry wiegand   (garry%cadif-oak@cu-arpa.cs.cornell.edu)
-----------[000018][next][prev][last][first]----------------------------------------------------
Date:      4 Nov 1986 06:23-PST
From:      STJOHNS@SRI-NIC.ARPA
To:        hinden@CCV.BBN.COM
Cc:        rick@SEISMO.CSS.GOV, JNC@XX.LCS.MIT.EDU, tcp-ip@SRI-NIC.ARPA, egp-people@CCV.BBN.COM
Subject:   Re: Poor performance related to egp?
Re  Bob's  comment on the Arpanet underconfiguration, last time I
looked, we had something like 20 additional trunks  ordered,  but
not  yet  installed.  I have no idea when the TELCO will get them
in so don't ask.  But there is a light at the end of the  tunnel.
Mike
-----------[000019][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4-Nov-86 11:09:00 EST
From:      WIBR@MIT-MULTICS.ARPA
To:        mod.protocols.tcp-ip
Subject:   Removal from TCP-IP relay

Would you please remove WIBR.WIBRMAIL from the mailing list.

thanks a lot.

-----------[000020][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4-Nov-86 13:00:00 EST
From:      DCP@QUABBIN.SCRC.SYMBOLICS.COM (David C. Plummer)
To:        mod.protocols.tcp-ip
Subject:   Re:  overly short RTT's


    Date:     Sun, 2 Nov 86 16:20:09 EST
    From:     Stephen Wolff <steve@BRL.ARPA>

    Umm.  There are even now other things than TCP lurking within those
    innocent IP wrappers, and their number will increase as various children
    of the ISO family are adopted.  Should the gateway know them all?  -s

As much as possible: Yes.  This strategy helped tremendously with
another network protocol (Chaos) which has a .5 second retransmit
timeout when used over slow (9600 baud) land lines.  It actually did
more than that: it understood enough of Chaos to limit the number of
packets per connection.  This problem was slightly alleviated by
changing our Chaos implementation to retransmit only the first packet on
the queue, which I think is common lore for all protocol implementations
now-a-days.

-----------[000021][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4-Nov-86 13:38:58 EST
From:      steve@BRL.ARPA (Stephen Wolff)
To:        mod.protocols.tcp-ip
Subject:   Re:  overly short RTT's

>>    Should the gateway know them all?
 
>     As much as possible: Yes.

I think you're saying the layering's done wrong.

-----------[000023][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4-Nov-86 19:10:38 EST
From:      jdb@JPL-VLSI.ARPA
To:        mod.protocols.tcp-ip
Subject:   TCP Question


     
     Does TCP have a  one-to-one responsibility to the ULP so the local-ULP 
     knows what packets the remote-TCP has accepted?
                      
     On a stream socket I can:
                         1) Establish a connection to a remote host
                         2) Physically DISCONNECT the NAU from the ethernet
                         3) Issue n TCP writes and receive n GOOD return status
                         4) On the n+1 write or close I (may) get an error

    This may mean that n-10, n-11, n-12 reached their destination but n-1, n-2
    didn't... or any other combination.  I have no way of knowing.  Can someone
    please explain where the "reliable communication" is?

                            Thanks ,
                                      Jeff Busma
                                      JDB@JPL-VLSI.ARPA

-----------[000024][next][prev][last][first]----------------------------------------------------
Date:      5 Nov 1986 10:23-EST
From:      CLYNN@G.BBN.COM
To:        BILLW@SU-SCORE.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: determining RTTs
Bill,
	I was experimenting with those ideas a couple years ago;
the results were encouraging.  I tried to use a "new" tcp option
but discovered that one "legal" interpretation of the tcp spec
led to an implementation which rejected any tcp connection which
contained an option which wasn't listed in the spec (don't know
if it has a one-byte or multiple-byte format therefore cannot
parse it).

	To minimize the impact with existing implementations I
passed information in the usually unused urgent pointer field
(the option was to inform the remote system that the field was
being used in this manner).  The information was thus carried
at no extra cost through the net.  The 16 bits were divided into
three fields: 3 for the retransmission number of the packet/data,
zero to six, or more (sender to receiver information); 3 for the
retransmission number of the packet which contained the oldest
data byte necessary to advance the ack (receiver to sender
information); and 10 for the size of any gap at the receiver.
The 3 bit fields seemed to be sufficient except in cases where
the destination had actually become unreachable.  There could be
some ambiguity if acks were lost, but that was minimized by only
retransmitting one packet's worth of data.  The size of the gap
was useful to limit retransmission of packets that had been
received after one which had been dropped, especially for those
systems which ack every packet received instead of just the last
in their receive queue.

Charlie
PS: The code to fill in the urgent field as described above was in
the last BBN release of network code for TOPS20;  feel free to
carry on the experimentation.
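
The 3/3/10-bit division of the urgent field described above can be
sketched as follows.  The bit positions and field names here are
assumptions for illustration only; the message does not say which bits
held which field.

```python
def pack_urgent(send_rexmt, ack_rexmt, gap):
    """Pack the three subfields into one 16-bit urgent-pointer value:
    3 bits of sender retransmission count, 3 bits of receiver-side
    retransmission count, 10 bits of receiver gap size."""
    assert 0 <= send_rexmt < 8 and 0 <= ack_rexmt < 8 and 0 <= gap < 1024
    return (send_rexmt << 13) | (ack_rexmt << 10) | gap

def unpack_urgent(urgent):
    """Split a 16-bit urgent-pointer value back into the subfields."""
    return (urgent >> 13) & 0x7, (urgent >> 10) & 0x7, urgent & 0x3FF
```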
-----------[000025][next][prev][last][first]----------------------------------------------------
Date:      Wed, 5-Nov-86 11:18:15 EST
From:      bam@triadc.UUCP (Bruce Mac Donald)
To:        mod.protocols.tcp-ip
Subject:   (none)

Date: Wed, 5 Nov 86 08:03:26 pst
From: bam (Bruce Mac Donald)
Message-Id: <8611051603.AA11629@triadc.UUCP>
Subject: dos/vse ica question
Newsgroups: mod.protocols.tcp-ip,misc.wanted
Distribution: na
To: tcp-ip@sri-nic.arpa
Keywords: ica ibm4361 cics btam

I am posting this for a co-worker with no direct access to the net.  Please
email all replies to me; I will summarize for the net if there is adequate
interest.

The environment is an IBM 4361 running under DOS/VSE using ICA (integrated
communications adapter), BTAM as access method.  I have a CICS application
that will be calling non IBM systems.

Basic question:  What is the best method of passing the phone number to the
dialer from the CICS application?

-----------[000026][next][prev][last][first]----------------------------------------------------
Date:      Thu, 6-Nov-86 09:21:44 EST
From:      kjl@BBN-CLXX.ARPA (Ken Lebowitz)
To:        mod.protocols.tcp-ip
Subject:   CMC TCP/IP experiences?

I'd like to hear from anyone who has used the CMC TCP/IP processor board.
I'm particularly interested in comments about its TCP performance.

Ken Lebowitz
BBN Labs
<kjl@pineapple.bbn.com>

-----------[000028][next][prev][last][first]----------------------------------------------------
Date:      7 Nov 1986 08:30-PST
From:      STJOHNS@SRI-NIC.ARPA
To:        CLYNN@BBNG.ARPA
Cc:        BILLW@SU-SCORE.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re: determining RTTs
As  of the TCP/IP Vendors workshop this August, the only one byte
TCP options are those already in place.  Any NEW options will  be
multiple  byte  format.   I think this was mentioned here before,
but might as well say it again.

(This is per Jon Postel) Mike
-----------[000029][next][prev][last][first]----------------------------------------------------
Date:      7 Nov 86 10:16 EST
From:      DDN-NAVY @ DDN1
To:        tcp-ip @ sri-nic.arpa
Cc:        DDN-NAVY @ DDN1
Subject:   REMOVAL FROM TCP-IP MAILING LIST
PLEASE REMOVE DDN-NAVY AND JYOUNG AT DDN1 FROM THE TCP-IP MAILING 
LIST.
THANKS
JYOUNG SENDS

-----------[000030][next][prev][last][first]----------------------------------------------------
Date:      Fri, 7-Nov-86 11:00:00 EST
From:      DCP@QUABBIN.SCRC.SYMBOLICS.COM (David C. Plummer)
To:        mod.protocols.tcp-ip
Subject:   Re:  overly short RTT's


    Date:     Tue, 4 Nov 86 13:38:58 EST
    From:     Stephen Wolff <steve@BRL.ARPA>

    >>    Should the gateway know them all?
 
    >     As much as possible: Yes.

    I think you're saying the layering's done wrong.

I can't answer that without a counter-proposal for a different layering.

-----------[000031][next][prev][last][first]----------------------------------------------------
Date:      Fri, 7 Nov 86 15:01:11 pst
From:      John B. Nagle <jbn@glacier.stanford.edu>
To:        mod.protocols.tcp-ip
Subject:   Re:  overly short RTT's
> Should the gateway know them (transport protocols) all?

     I gave some serious thought to this approach back in 1984, and went
so far as to start writing a TCP segment consolidator for a gateway.
But the complexity of such a thing is comparable to the receive side
of a TCP implementation, and the possibility exists of introducing a class of
bug for which the assignment of blame would be very difficult.

     It was after discarding this line of attack that I came up with
"fair queuing", as described in RFC970.  The key idea there is that
gateways should have queuing strategies that favor well-behaved hosts
over badly-behaved ones, rather than the reverse, so that it is in the
self-interest of hosts to be well-behaved.  This was a radical idea when
I first proposed it, but gradually the community seems to be coming around
to the point of view that non-FIFO queuing strategies in gateways are
desirable.  

     Now that we know how to throttle TCP connections effectively over
a speed range of three or four orders of magnitude, and now that 4.3BSD
and some other implementations have the machinery to do this correctly, 
we should be able to make the whole Internet run smoothly over a wide
range of loading conditions.   With smarter algorithms in the gateway
to control the throttling and to prevent ill-behaved hosts from
using up most of the resources, it should be possible to stop the
present abysmal behavior of the Internet under heavy load.

     I hate to say "I told you so", but I did predict the congestion
collapse of the Internet in my article in ACM Computer Communications
Review in October 1983.  From the reports I read here, it has happened.
I've proposed ways to deal with the problem, and those that have been
tried have worked.  There's been some "not invented here" grousing,
but no one has shown that my approaches are invalid.  There may indeed
be more elegant solutions to some of these problems.  (Lixia Zhang
at MIT is working on some).  But unless someone has a better idea that
will stand scrutiny by the community, I suggest that somebody put
fair queuing in a few key gateways and see if things get better.

     The description of the algorithm in RFC970 is a bit sketchy, so if
you want to implement it and are a bit confused, please feel free to get
in touch with me and I will try to be of assistance.

				John Nagle
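
A minimal round-robin sketch of the fair-queuing idea (one queue per
source host, served in turn) looks like the following.  This illustrates
the principle only; it is not the exact algorithm of RFC970.

```python
from collections import deque

class FairQueue:
    def __init__(self):
        self.queues = {}        # source host -> deque of queued packets

    def enqueue(self, source, packet):
        self.queues.setdefault(source, deque()).append(packet)

    def dequeue(self):
        """Serve one packet from the source at the head of the service
        order, then rotate that source to the back.  A host that floods
        the gateway thereby only lengthens its own queue."""
        if not self.queues:
            return None
        source = next(iter(self.queues))
        packet = self.queues[source].popleft()
        remaining = self.queues.pop(source)
        if remaining:
            self.queues[source] = remaining
        return packet
```

Under this discipline a badly-behaved host cannot starve its neighbors;
it only delays its own packets, which is what makes good behavior
self-interested.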

-----------[000032][next][prev][last][first]----------------------------------------------------
Date:      Fri, 7-Nov-86 15:28:30 EST
From:      MILLS@A.ISI.EDU
To:        mod.protocols.tcp-ip
Subject:   Re: determining RTTs

In response to the message sent  5 Nov 1986 10:23-EST from CLYNN@G.BBN.COM

Charlie,

And I thought I was the only one making mischief in underutilized header
bandwidth. Fuzzballs happen to stamp the low-order 16 bits of the local
millisecond clock in the IP sequence field (generating more than one datagram
per millisecond is unlikely in these fossils). As you know, fuzzies usually keep
rather good time, so this might be useful in determining one-way delays. Also,
fuzzies stamp the size of the current retransmit queue in the urgent field
(when that field is not being used for the purposes intended). This was intended
for experiments in adaptive receiver strategies in much the same way your abuse
of the urgent field was. Someday when we both are grey we oughta try some of
those experiments.

Dave
-------
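
The one-way-delay measurement such a 16-bit millisecond stamp allows can
be sketched as below.  It assumes synchronized clocks and delays under
about 65 seconds; this is an illustration, not fuzzball code.

```python
def one_way_delay_ms(stamp_at_send, local_clock_ms):
    """One-way delay from a 16-bit millisecond send stamp, computed
    modulo 2**16 so that clock wraparound does not matter."""
    return (local_clock_ms - stamp_at_send) % (1 << 16)
```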

-----------[000034][next][prev][last][first]----------------------------------------------------
Date:      Sat, 08 Nov 86 20:46:12 -0500
From:      Craig Partridge <craig@loki.bbn.com>
To:        tcp-ip@sri-nic.arpa
Subject:   RTT's revisited

As the guy whose message started this debate, I thought it might be
worthwhile to describe the mechanism I chose to try.

Basically I'm using a solution suggested by Lixia Zhang, with some tweaking.
Lixia suggested that the best estimate of the round-trip time was the
time it takes the first packet to make the loop (in RDP's case, the
time between the opening SYN and its ACK).

The idea sounded good so I put it in and promptly hit a small problem.
My simple minded implementation tended to pick timeouts
that were either too long or too short.  The problem was one of defining
where to measure the round trip from.  If you measure from the first
time you send a SYN, you tend to get a timeout estimate that is too 
high (if you had to resend the SYN several times, the likelihood that
the ACK you finally get is for the first one is rather small).  If
you measure from the time you send the most recent SYN, the estimate
is too low (again, it is unlikely that having sent several packets, the
ACK is for the most recent one).

So I tweaked the implementation.  First off, SYNs are retransmitted using
a roughly exponential backoff for the timeouts.  Then when
the ACK comes back, the estimated round-trip time is set to the timeout
currently in use.  Now this is still probably too short (if packets were
sent out at time 0, 1, 2, 4, and 8, and the ACK comes back at time
14, the estimated RTT is only 8 and if the ACK was for the second packet
the RTT should be 12), so I feed the estimated RTT for each packet
sent into the roundtrip computation.  (E.g. I set the estimated
RTT to 8, but then adjust the estimate by feeding in RTTs of 12, 16,
17, and 18).

A little testing today suggests that this makes the RTT slightly high
but close to right (and mail on the list has made it clear that if you
are going to be wrong, you'd like to be wrong on the high side).
Connecting via Goonhilly ECHO a dozen or so times always worked,
gave reasonable throughput and didn't appear to suffer excessive
retransmissions.  Of course a dozen tests really isn't enough so
I'm still fiddling, and (at the risk of increasing the mail
traffic still further) am still interested in further suggestions.

Craig
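
The two pieces of the scheme -- a roughly exponential backoff for SYN
retransmissions, and feeding each earlier transmission's implied RTT
into a smoothed estimator -- can be sketched as follows.  The smoothing
constant and the exact backoff shape are assumptions, not necessarily
what RDP uses; the filter is the familiar srtt = a*srtt + (1-a)*sample
form.

```python
ALPHA = 0.8   # smoothing gain; an assumed value, not necessarily RDP's

def backoff_schedule(initial_timeout, tries):
    """Send times under a roughly exponential backoff; with an initial
    timeout of 1 this yields 0, 1, 2, 4, 8, ... as in the example."""
    times, t, timeout = [], 0, initial_timeout
    for i in range(tries):
        times.append(t)
        t += timeout
        if i > 0:
            timeout *= 2
    return times

def feed_samples(srtt, send_times, ack_time):
    """Fold in the RTT each earlier transmission would imply, so the
    estimate drifts toward the high (safe) side."""
    for sent in send_times:
        srtt = ALPHA * srtt + (1 - ALPHA) * (ack_time - sent)
    return srtt
```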
-----------[000035][next][prev][last][first]----------------------------------------------------
Date:      Mon, 10-Nov-86 11:15:18 EST
From:      weltyc%cieunix@CSV.RPI.EDU (Christopher A. Welty)
To:        mod.protocols.tcp-ip
Subject:   TWG/VMS mail over TCP/IP


	I am currently running TWG's TCP/IP on VMS.  We used to have the 
PMDF for VMS and it was very nice, especially compared to the TWG mail
stuff for VMS (which is pretty poor).  In the specs for the VMS PMDF it
says something about a channel for mail using the Wolongong Software.  Has
anyone done this? Is anyone doing this?  I would love to get my hands on
it.  Anyone know anything at all?  I would be willing to even work on it
if I could get some info...

					-Chris
					 weltyc%cieunix@csv.rpi.edu

-----------[000036][next][prev][last][first]----------------------------------------------------
Date:      Mon, 10-Nov-86 18:50:32 EST
From:      braden@ISI.EDU (Bob Braden)
To:        mod.protocols.tcp-ip
Subject:   BSD/SUN Puzzle

I have been experimenting with high-rate ICMP echoes across the ISI
Ethernet between SUN 3/75's.  The pinger program sets an interval timer to
fire off every 20 ms (the minimum resolution).  Each time it fires,
the signal routine calls sendto() to send N successive ICMP echo requests
without a pause.  sendto() is bound to a RAW socket.  The remote SUN
echoes, and a recvfrom() (called in an endless loop) gathers statistics
on RTT, etc. All very trivial.

Now, if N = 3 (150 packets per second),  no packets are dropped, and it
uses 6 % of the SUN CPU (1% in user state, 5% in system state). If N = 4
(200 packets per second), 7 % of the packets are dropped, but it uses
57% of my SUN CPU time (10% in user state, 47 % in system state).

Obviously, at 200 per second we are overrunning something and queues are
building up.  That would account for the packet loss.  But I cannot
understand why the CPU time should build up in this non-linear fashion,
unless there is some (heaven forbid) linear search process going on in
some system queue.  Can anyone suggest to me what is going on?


Bob Braden

-----------[000037][next][prev][last][first]----------------------------------------------------
Date:      Mon, 10-Nov-86 19:29:50 EST
From:      pvm@VENERA.ISI.EDU (Paul Mockapetris)
To:        mod.protocols.tcp-ip
Subject:   root server changes

The current set of root servers is about to change.  ISIB is being
turned off for good, and ISIA will be a new root server.  ISIA should
come online later this week.  For a transition phase, both may be
available, but ISIB should depart for good sometime next week.

This probably means some changes to your configuration files for
domain support.

paul

-----------[000038][next][prev][last][first]----------------------------------------------------
Date:      Mon, 10-Nov-86 23:17:48 EST
From:      Postel@ISI.EDU (Jon Postel)
To:        mod.protocols.tcp-ip
Subject:   re: TCP Question of Jeff Busma

It would be useful if people interested in learning about TCP took the
trouble to at least attempt to read the document before asking questions.

TCP provides a reliable two directional octet stream.  Every octet sent
by a process using TCP is delivered in the order sent.  There is no
necessary relation between the data buffers used between the sending
process and the sending TCP, and the segments used between the sending
TCP and the receiving TCP, or between TCP segments and IP datagrams, or
between IP datagrams and physical network packets.

If you can close a TCP connection without getting an error then all the
data was delivered.

--jon.
 

-----------[000039][next][prev][last][first]----------------------------------------------------
Date:      11 Nov 1986 07:07:13 PST
From:      Dan Lynch <LYNCH@B.ISI.EDU>
To:        Jon Postel <Postel@VENERA.ISI.EDU>
Cc:        TCP-IP@SRI-NIC.ARPA, jdb@JPL-VLSI.ARPA, LYNCH@B.ISI.EDU
Subject:   re: TCP Question of Jeff Busma
Jon,  There is one additional interpretation of Jeff Busma's query:
the implementation of TCP/IP he is referring to is not a valid one...

Dan

P.S.  But, since there is no TCP/IP certification authority in place
we just have to speculate and probe.
-------
-----------[000041][next][prev][last][first]----------------------------------------------------
Date:      Tue, 11-Nov-86 20:18:28 EST
From:      BILLW@SCORE.STANFORD.EDU (William "Chops" Westfield)
To:        mod.protocols.tcp-ip
Subject:   NFILE

Does anyone know of any implementations of the NFILE protocol (Symbolics'
lisp machine file access protocol) on top of TCP?

The goal is a server that runs on tops20; any server implementation
in any language would be helpful...

Thanks
Bill W
-------

-----------[000043][next][prev][last][first]----------------------------------------------------
Date:      12 Nov 1986 10:09-EST
From:      CLYNN@G.BBN.COM
To:        jdb@JPL-VLSI.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TCP Question
... The implementation may not be technically invalid since there is
a catch-22.  The description of the status command lists "number of
buffers awaiting acknowledgement" and "number of buffers pending
receipt" as data available to the application writer (tcp user).  By
first doing the TCP Close (which in some systems is NOT the same as a
file system "close") and then doing status calls to make sure that the
"number of buffers awaiting acknowledgement" returned is zero, one can
make sure that all outstanding data got as far as the remote TCP.  The
catch-22 is the first sentence under the description of status command
... "This ...  could be excluded without adverse effect."
-----------[000044][next][prev][last][first]----------------------------------------------------
Date:      Wed, 12-Nov-86 15:47:32 EST
From:      MILLS@A.ISI.EDU
To:        mod.protocols.tcp-ip
Subject:   Re: RTT's revisited

In response to the message sent  Sat, 08 Nov 86 20:46:12 -0500 from craig@loki.bbn.com

Craig,

Experiments I did several years ago (some results are in RFC-889) suggest
RTT estimators be based on the interval between the first transmission and
the first reply. Arguments supporting this have been made by RSRE and UCL
as long ago as 1979. A backoff has proved a very good idea and, so far as I
know, has been implemented in every TCP in common use. As you point out, it
is much better to err on the high side than the low, especially with SYN/ACK
exchanges, since data packets tend to be fatter (longer delay) than
SYN/ACK packets. Other suggestions for tweaking the estimator are in
RFC-889.

It has been my experience that the performance of the estimator can be much
improved by increasing the rate of sample collection. This can be done by
keeping a stack of recently sent timestamps and associated sequence numbers,
then checking each off as ACKs are received. Depending on the allowable
number of outstanding packets, this can increase the rate several times.

I blow hot and cold on the usefulness of statistics based on these
data, especially when high variances are involved, as seems to be the case
now with ARPAnet navigation.

Dave
-------
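
The timestamp-stack idea Dave describes can be sketched like this; the
structure and names are illustrative, not the fuzzball's, and it ignores
the retransmission ambiguity he mentions.

```python
class SampleStack:
    """Keep (sequence, send-time) pairs for packets in flight and
    harvest one RTT sample per packet as ACKs cover them."""
    def __init__(self):
        self.pending = []               # list of (seq, send_time)

    def sent(self, seq, now):
        self.pending.append((seq, now))

    def acked(self, ack_seq, now):
        """Return an RTT sample for every pending packet the ACK covers,
        and drop those packets from the pending list."""
        samples = [now - t for seq, t in self.pending if seq < ack_seq]
        self.pending = [(s, t) for s, t in self.pending if s >= ack_seq]
        return samples
```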

-----------[000045][next][prev][last][first]----------------------------------------------------
Date:      13 Nov 1986 07:37:29 CST
From:      SITT@GUNTER-ADAM.ARPA
To:        MILLS@A.ISI.EDU, craig@LOKI.BBN.COM, tcp-ip@SRI-NIC.ARPA
Cc:        SITT@GUNTER-ADAM.ARPA
Subject:   Re: RTT's revisited
In response to the message sent  12 Nov 1986 15:47:32 EST from MILLS@A.ISI.EDU

As far as I know there has been no prohibition against joint installation
by GAO or Congress.  The current EID policy was supported in an 
ALMAJCOM letter by AF/SCT on 22 Apr 86.  Additionally, the current draft 
of the Local Information Transfer Architecture incorporates the  fiber 
optic policy in Annex E.  The policy for installation in the LITA 
basically says that fiber should not be installed jointly with coax for 
broadband LANs or twisted pair unless the route will overlay 
the eventual fiber optic backbone network.  It recommends use of fiber or joint
installation for Category A and Category B systems (e.g., T-carrier, 
ESS trunk circuits, broadband closed circuit TV, remote display circuits 
for radar, high-speed point-to-point links, high-density links, line-of-sight 
systems, ...).  I will send you a copy of the letter and the LITA (draft).

Hope life's not too tough in the Pacific.  Keep in touch.
I should be flying that way regularly starting in June 87.
-------
-----------[000046][next][prev][last][first]----------------------------------------------------
Date:      13 Nov 1986 07:43:26 CST
From:      SITT@GUNTER-ADAM.ARPA
To:        MILLS@A.ISI.EDU, craig@LOKI.BBN.COM, tcp-ip@SRI-NIC.ARPA
Cc:        SITT@GUNTER-ADAM.ARPA
Subject:   Re: RTT's revisited
In response to the message sent  13 Nov 1986 07:37:29 CST from SITT@GUNTER-ADAM.ARPA

Please disregard previous message.  It was transmitted in error.
-------
-----------[000049][next][prev][last][first]----------------------------------------------------
Date:      13 Nov 86  1457 PST
From:      Joe Weening <JJW@SAIL.STANFORD.EDU>
To:        TCP-IP@SRI-NIC.ARPA
Subject:   Re: EGP madness
Page 6 of the new "Assigned Numbers" (RFC 990) explains that

         The class A network number 127 is assigned the "loopback"
         function, that is, a datagram sent by a higher level protocol
         to a network 127 address should loop back inside the host.  No
         datagram "sent" to a network 127 address should ever appear on
         any network anywhere.

I hadn't seen this announced before, but apparently the de facto
standard use of this number has become official, and someone is
therefore violating the standard.  Time for the protocol police
to start an investigation ...
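
[The quoted rule reduces to a one-line predicate on the destination address.  A
minimal sketch in Python, used here purely for illustration -- `is_loopback` is
a hypothetical helper, not code from any 1986 stack:]

```python
def is_loopback(addr: str) -> bool:
    """True if addr lies in class A network 127, per the RFC 990 text above."""
    return int(addr.split(".")[0]) == 127

# A conforming host loops such datagrams back internally and never
# emits them on a wire -- nor advertises net 127 via EGP:
for dst in ("127.0.0.1", "10.1.0.15"):
    action = "loop back inside host" if is_loopback(dst) else "send normally"
    print(dst, "->", action)
```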

						Joe

-----------[000050][next][prev][last][first]----------------------------------------------------
Date:      Thu, 13-Nov-86 14:38:02 EST
From:      MRC%PANDA@SUMEX-AIM.STANFORD.EDU (Mark Crispin)
To:        mod.protocols.tcp-ip
Subject:   EGP madness


     Some of you may be interested in the following EGP topology, which I got
back from my favorite gateway, 10.1.0.15.  Note that 10.2.0.80 thinks that
network 127 is 3 hops away.  What the hell is network 127???

-- Mark --

[Network topology for network 10
 12 interior gateway(s), 0 exterior gateway(s)
 Gateway at 1.0.15:
  Network(s) 0 hop(s) away: 128.43
  Network(s) 1 hop(s) away: 192.12.215, 192.5.144, 192.12.62
 Gateway at 2.0.80:
  Network(s) 1 hop(s) away: 26
  Network(s) 2 hop(s) away: 128.47, 192.5.9, 128.44, 6, 192.12.33, 192.12.128,
	128.26, 192.5.38, 192.12.131, 128.25, 192.5.92, 128.21
  Network(s) 3 hop(s) away: 192.12.29, 192.12.30, 128.29, 192.5.26, 192.5.21,
	128.20, 128.3, 128.60, 192.5.23, 192.5.22, 192.5.25, 192.12.139,
	192.12.119, 128.8, 128.115, 192.12.124, 192.5.52, 192.12.196, 128.63,
	128.102, 192.5.16, 192.12.184, 128.122, 128.38, 192.12.120, 192.12.64,
	127, 192.12.15, 128.49
  Network(s) 4 hop(s) away: 128.155, 192.16.14, 192.12.65, 192.12.31, 192.5.47,
	192.5.218, 192.5.65, 192.16.7, 192.12.125, 192.12.13, 192.16.16, 128.54
  Network(s) 5 hop(s) away: 128.165
 Gateway at 5.0.5:
  Network(s) 2 hop(s) away: 192.12.172
 Gateway at 2.0.5:
  Network(s) 1 hop(s) away: 8, 128.11, 128.89
 Gateway at 2.0.25:
  Network(s) 1 hop(s) away: 4
 Gateway at 3.0.27:
  Network(s) 1 hop(s) away: 128.9
  Network(s) 2 hop(s) away: 7, 128.4, 35, 192.5.39, 28, 128.6, 18, 128.2,
	192.12.19, 192.12.18, 192.5.7, 192.5.11, 192.12.141, 192.5.14, 128.31,
	128.170
  Network(s) 3 hop(s) away: 192.5.28, 192.5.29, 128.39, 128.16, 128.98, 128.124,
	128.149, 192.12.9, 128.97, 192.5.55, 128.121, 192.5.165, 192.5.56
  Network(s) 4 hop(s) away: 192.10.41, 192.5.57, 192.5.148
  Network(s) 6 hop(s) away: 128.117, 192.17.5, 128.5, 192.5.146, 192.12.207,
	128.135
 Gateway at 1.0.28:
  Network(s) 1 hop(s) away: 192.5.18
 Gateway at 2.0.37:
  Network(s) 1 hop(s) away: 192.5.48
  Network(s) 2 hop(s) away: 128.10, 128.83, 192.5.104, 128.32, 128.104,
	192.12.220, 192.12.12, 192.5.2, 128.136, 192.5.10, 192.5.37, 192.5.53,
	192.12.5, 192.5.58, 192.5.69, 128.101, 128.91, 128.125, 128.110, 128.95,
	128.52
  Network(s) 3 hop(s) away: 128.105, 192.12.44, 128.42, 192.16.72, 192.5.49,
	192.12.221, 192.5.40, 192.5.19, 128.140, 192.12.63, 192.16.73, 128.81,
	192.10.42, 192.12.56, 128.139, 128.99
  Network(s) 4 hop(s) away: 192.12.185, 192.12.69, 128.112, 192.5.101, 128.96
  Network(s) 5 hop(s) away: 128.46
 Gateway at 5.0.51:
  Network(s) 1 hop(s) away: 128.18
 Gateway at 5.0.63:
  Network(s) 1 hop(s) away: 14
 Gateway at 7.0.63:
  Network(s) 1 hop(s) away: 192.1.9
  Network(s) 2 hop(s) away: 192.1.2, 128.30, 36, 128.84
  Network(s) 3 hop(s) away: 192.1.4
  Network(s) 4 hop(s) away: 192.12.91
 Gateway at 2.0.9:
  Network(s) 1 hop(s) away: 128.36
  Network(s) 2 hop(s) away: 192.5.88
  Network(s) 4 hop(s) away: 192.12.81, 192.16.167
]
-------

-----------[000051][next][prev][last][first]----------------------------------------------------
Date:      Thu, 13-Nov-86 16:20:28 EST
From:      brescia@CCV.BBN.COM (Mike Brescia)
To:        mod.protocols.tcp-ip
Subject:   Re: EGP madness (net 127)


>          Some of you may be interested in the following EGP topology, which I got
>     back from my favorite gateway, 10.1.0.15.  Note that 10.2.0.80 thinks that
>     network 127 is 3 hops away.  What the hell is network 127???

Net 127, an unregistered net number, is used by Berkeley in the 4.xBSD unix to
indicate the 'loopback interface' internal to the host.  It has in the past
been advertised by some hosts which run EGP and accidentally have the loopback
address listed first in the configuration of the net interfaces in the unix.

It is happening again at NLM-MCS.

The 'core' system, while providing some protection to itself, does not censor
any information, but passes this on to the other gateways, and thus to the
other EGP sites.  It probably causes a bit of confusion on any vaxes which run
EGP and are trying to use their own loopback interface.

(I've deliberately tried to eschew any biased statements above, but had to get
this self-referent, biased sentence in.)

    Mike Brescia
    Gateway Censor.

-----------[000052][next][prev][last][first]----------------------------------------------------
Date:      13 Nov 86 16:20:28 EST (Thu)
From:      Mike Brescia <brescia@ccv.bbn.com>
To:        Mark Crispin <MRC%PANDA@sumex-aim.ARPA>
Cc:        TCP-IP@sri-nic.ARPA, egp-people@ccv.bbn.com
Subject:   Re: EGP madness (net 127)

>          Some of you may be interested in the following EGP topology, which I got
>     back from my favorite gateway, 10.1.0.15.  Note that 10.2.0.80 thinks that
>     network 127 is 3 hops away.  What the hell is network 127???

Net 127, an unregistered net number, is used by Berkeley in the 4.xBSD unix to
indicate the 'loopback interface' internal to the host.  It has in the past
been advertised by some hosts which run EGP and accidentally have the loopback
address listed first in the configuration of the net interfaces in the unix.

It is happening again at NLM-MCS.

The 'core' system, while providing some protection to itself, does not censor
any information, but passes this on to the other gateways, and thus to the
other EGP sites.  It probably causes a bit of confusion on any vaxes which run
EGP and are trying to use their own loopback interface.

(I've deliberately tried to eschew any biased statements above, but had to get
this self-referent, biased sentence in.)

    Mike Brescia
    Gateway Censor.
-----------[000053][next][prev][last][first]----------------------------------------------------
Date:      Thu, 13-Nov-86 16:58:21 EST
From:      rick@seismo.CSS.GOV (Rick Adams)
To:        mod.protocols.tcp-ip
Subject:   Re:  EGP madness

Network 127 is everyone's favorite bogus network! It's used as a loopback
network by many BSD UNIX systems. It should never be exported, but 
sometimes leaks occur.

---rick

-----------[000054][next][prev][last][first]----------------------------------------------------
Date:      Thu, 13-Nov-86 17:57:00 EST
From:      JJW@SAIL.STANFORD.EDU (Joe Weening)
To:        mod.protocols.tcp-ip
Subject:   Re: EGP madness

Page 6 of the new "Assigned Numbers" (RFC 990) explains that

         The class A network number 127 is assigned the "loopback"
         function, that is, a datagram sent by a higher level protocol
         to a network 127 address should loop back inside the host.  No
         datagram "sent" to a network 127 address should ever appear on
         any network anywhere.

I hadn't seen this announced before, but apparently the de facto
standard use of this number has become official, and someone is
therefore violating the standard.  Time for the protocol police
to start an investigation ...

						Joe

-----------[000055][next][prev][last][first]----------------------------------------------------
Date:      Thu, 13-Nov-86 23:20:59 EST
From:      karels%okeeffe@UCBVAX.BERKELEY.EDU (Mike Karels)
To:        mod.protocols.tcp-ip
Subject:   Re: EGP madness (net 127)

The error that causes net 127 to be advertised (other than not using
a better address for loopback "interfaces") is a configuration error
for Kirton's EGP.  The egp configuration file should always list the
"egpnetsreachable", and the list should not include 127.  Otherwise,
the EGP process looks up everything it can find on the local machine.

Any address (even an officially assigned one) can be used for the loopback
with 4.3; it's set just like any other address.  (But now I note in jjw's
reply that 127 is official; I guess everyone is required to implement it.)
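
[The configuration rule Mike describes can be sketched as a filter on the
advertised set.  The name `egpnetsreachable` follows his description; the rest
is illustrative Python, not Kirton's actual implementation:]

```python
LOOPBACK_NET = 127

def nets_to_advertise(local_interface_nets, egpnetsreachable=None):
    """Networks an EGP process should announce.

    With an explicit egpnetsreachable list the process advertises only
    those nets; without one it falls back to everything found on the
    local interfaces, which is how net 127 leaks.  Either way, dropping
    127 here stops the leak at the source.
    """
    candidates = egpnetsreachable if egpnetsreachable is not None else local_interface_nets
    return {net for net in candidates if net != LOOPBACK_NET}

# Loopback listed first among the interfaces, no explicit config --
# the filter still keeps 127 off the wire:
print(nets_to_advertise([127, 128]))
```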

		Mike

-----------[000056][next][prev][last][first]----------------------------------------------------
Date:      Fri, 14 Nov 86 09:41:57 PST
From:      Bob Braden <braden@isi.edu>
To:        karels%okeeffe@berkeley.edu
Cc:        TCP-IP@sri-nic.arpa, egp-people@ccv.bbn.com
Subject:   Re: EGP madness (net 127)
	
	The error that causes net 127 to be advertised (other than not using
	a better address for loopback "interfaces") is a configuration error
	for Kirton's EGP.  The egp configuration file should always list the
	"egpnetsreachable", and the list should not include 127.  Otherwise,
	the EGP process looks up everything it can find on the local machine.
Mike,

It would seem that a program that allows a configuration error to cause
public mischief has a bug.  Since "Kirton's EGP" is effectively part of
the BSD distribution, can we hope that this bug will be fixed?

Bob Braden

	
-----------[000057][next][prev][last][first]----------------------------------------------------
Date:      Fri, 14-Nov-86 09:38:23 EST
From:      GOWER@A.ISI.EDU (Neil E. Gower)
To:        mod.protocols.tcp-ip
Subject:   Re: EGP madness


In Mark's message about the topology as seen from the point of view of
10.1.0.15, the following entry appeared.

 Gateway at 5.0.5:
  Network(s) 2 hop(s) away: 192.12.172

I am really confused because our network (192.12.172) is attached
directly to the ARPAnet (10) via a GW at 10.1.0.46.  Does the fact that
the table shows that we are 2 hops (whatever that means) behind the BBN
MIL-ARPA GW explain why we (in Texas) have a hard time getting to the
West coast?  Are we going through the East coast to get there?  Or is
this just a mirage which is necessary for the EGP/GGP stuff to work?
Or a result of the shortage of table space in the GWs?

Neil Gower
-------

-----------[000058][next][prev][last][first]----------------------------------------------------
Date:      14 Nov 1986 09:38:23 EST
From:      Neil E. Gower <GOWER@A.ISI.EDU>
To:        Mark Crispin <MRC%PANDA@SUMEX-AIM.ARPA>
Cc:        TCP-IP@SRI-NIC.ARPA, GOWER@RI170A.ARPA
Subject:   Re: EGP madness

In Mark's message about the topology as seen from the point of view of
10.1.0.15, the following entry appeared.

 Gateway at 5.0.5:
  Network(s) 2 hop(s) away: 192.12.172

I am really confused because our network (192.12.172) is attached
directly to the ARPAnet (10) via a GW at 10.1.0.46.  Does the fact that
the table shows that we are 2 hops (whatever that means) behind the BBN
MIL-ARPA GW explain why we (in Texas) have a hard time getting to the
West coast?  Are we going through the East coast to get there?  Or is
this just a mirage which is necessary for the EGP/GGP stuff to work?
Or a result of the shortage of table space in the GWs?

Neil Gower
-------
-----------[000059][next][prev][last][first]----------------------------------------------------
Date:      Fri, 14-Nov-86 15:51:35 EST
From:      van@lbl-csam.arpa (Van Jacobson)
To:        mod.protocols.tcp-ip
Subject:   Yet more on RTTs

A few weeks ago there was a query about estimating packet round trip 
time for an RDP implementation.  I replied with some local measurements
that suggested that TCP's problems might be similar to RDP's:  most
TCP conversations were so short that they behaved as datagrams.  Based
on this, I suggested that a part of RTT maintenance be moved from
the TCP layer to the IP layer.  If RTT really is common to RDP and TCP,
the IP layer is the logical place to put it.  Based on measurement and
simulation, I have reason to believe that this move would improve
the Internet's present, abysmal performance.

I've been out of touch for two weeks (a problem of inter-personal
congestion control) and read the past two weeks of TCP-IP messages
last night.  The RTT messages were disappointing:  they addressed
problems whose solution is known and which are being solved (albeit
slowly).  I think we're facing a whole new set of problems.  In an
effort to promote some light (or heat), what follows is my simple
minded explanation of what's going on and what we might start to
do about it.  (In what follows, "connection" means a conversation
between two processes over a network, not a TCP connection.)

RTT is measured to help deal with unreliable packet delivery.  When
packets are delivered reliably, TCP and RDP are self-clocking.  Since
we know that delivery is unreliable, we design our protocols to make
an educated guess about whether a particular packet has been lost:
If no "clock" has been received for a "long time" (relative to the
round trip time), the packet probably needs to be retransmitted.

There are two reasons for losing a packet:
  1) It was damaged or misplaced in transit.
  2) It was discarded due to congestion.
The appropriate recovery strategy depends on the reason:  For (1),
the packet should be retransmitted as soon as possible.  For (2),
the retransmission should happen after a "long" time (many times
the round trip time) so the congestion has a chance to clear (I'm
making the assumption that there's substantial buffering in the
subnet so the time constants for congestion are long -- this is
true of the nets I deal with and, given current memory prices,
likely to remain true).  

Given that the sender doesn't know whether (1) or (2) has occurred,
what strategy should be used?  If the strategy for (2) is chosen when (1)
is the cause, the throughput on this connection will go down a bit.  If
the strategy for (1) is chosen when (2) is the cause, the problem will
get much worse, both for this host and others on the net.  In the
absence of other information, the principle of Least Damage tells us to
use the strategy for (2).  [Experience also suggests that damaged
packets are unlikely -- the error rate on our worst net is <0.1%.  But,
if you really have to get maximum throughput on a connection, as
opposed to maximum aggregate throughput on all your connections, the
Pollaczek-Khinchine equation says that the variance of the RTT
estimates can be used to distinguish (1) & (2).  I design networks for
real-time control in hostile environments and occasionally make use of
this.  It's not generally useful.]
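
[One simple way to keep the statistics this note alludes to is an
exponentially weighted mean and mean deviation of the RTT samples.  The gains
below are illustrative, and the deviation is a cheap stand-in for the variance
in the Pollaczek-Khinchine argument -- a sketch, not anyone's shipped code:]

```python
class RttEstimator:
    """Smoothed RTT and deviation from per-packet samples (sketch only)."""

    def __init__(self, initial_rtt=3.0):
        self.srtt = initial_rtt   # smoothed round trip time
        self.rttdev = 0.0         # smoothed mean deviation of samples

    def sample(self, rtt):
        err = rtt - self.srtt
        self.srtt += err / 8                          # gain 1/8: slow-moving mean
        self.rttdev += (abs(err) - self.rttdev) / 4   # gain 1/4: faster deviation

    def rto(self, k=4):
        # A jittery path (large deviation) gets a more conservative
        # retransmit timer, reducing spurious retransmissions.
        return self.srtt + k * self.rttdev
```

[A path whose samples bounce around keeps a large `rttdev`, and the high
timeout that falls out is the cautious behaviour the strategy for (2) wants.]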

The strategy for (1) is very "local".  It can be detected by the
process running the connection and that process can take corrective
action that should both cure the problem and have negligible effect
on other connections.  The strategy for (2) is global.  The congestion
detected on a connection is probably not caused by that connection.
In fact, it is likely that no single connection is the cause.  Thus no
single action is going to cure the problem, only the combined effect of
several connections reducing their traffic rate.  For this to happen,
each of those connections has to discover the problem (which means
sending packets, which aggravate the problem) including newly opened
connections.  The recovery time is clearly going to be an exponential
with a long time constant.

A way to reduce the recovery time is to introduce more coupling between
the connections.  Congestion is a property of network path(s), not of
connections.  When one connection discovers congestion on a path,
that information should be made available to all connections in the
same machine using that path.  This isn't hard to implement:  A lot
of the congestion happens over paths that look like:


 Host A-|                                     |
	|                                     |
	|-GwyB---------------------------GwyC-|
	|                                     |
	|                                     |-Host D

Where A is talking to D, the vertical lines are relatively high-speed,
local nets and the horizontal line is a low-speed, long-haul net(s).  The
difference in net speeds means that any congestion will almost certainly
occur somewhere on the path from B to C.  This means that, from A's
point of view, the round trip time to D is characteristic of the RTT to
any host served by C (I have data which says that the gateway accounts
for 90% of the variation in RTT).  If A contains a routing entry for C
(IP requires a routing entry in A for B or C or both), a slot could be
left in that entry for RTT.  If TCP, RDP, etc., used that slot for
the value in all their RTT calculations, information about the path
would automatically be shared (and also wouldn't be lost when a TCP
connection closed).  Just this much change would eliminate the "turn-
on transient" of retransmissions that occur while a tcp connection
is learning the RTT.
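
[The shared slot can be sketched as a cache keyed by the far gateway (or route
entry) rather than by connection.  All names here are illustrative:]

```python
class RouteRttCache:
    """Per-route smoothed RTT, shared by every connection using the route."""

    def __init__(self, default_rtt=3.0):
        self.default_rtt = default_rtt   # fallback before any measurement
        self.by_gateway = {}

    def estimate(self, gateway):
        # A brand-new connection through a known gateway inherits the
        # path's history instead of a blind constant.
        return self.by_gateway.get(gateway, self.default_rtt)

    def record(self, gateway, rtt, gain=0.125):
        old = self.by_gateway.get(gateway, rtt)
        self.by_gateway[gateway] = old + gain * (rtt - old)

cache = RouteRttCache()
cache.record("GwyC", 5.0)       # one connection measures the B--C path
print(cache.estimate("GwyC"))   # a later connection starts from 5.0: no turn-on transient
```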

Once one stops regarding RTT as a property of connections and starts to
regard it as a measured, dynamic property of the topology, some related
ideas start to look interesting.  Like B telling A topology and A
telling B transit times (the stability problems of the old Arpanet
routing protocol shouldn't show up if this is only done locally and
RTT(local) << RTT(long haul)).  Or treating "source quench" as if it
meant "I'm congested" rather than "You should shut up" (it obviously
means both).  Under the first interpretation, it is information about
the state of part of the path.  If gateways along the return path
wiretap, the information serves several hosts rather than a
single TCP conversation (and we start to get distributed congestion
control via "choke packets" which have some nice properties if there's
enough buffering in the subnet to handle the diffusion time.)

[I'm sure I'll be toasted for something in the preceding paragraph,
if not for the rest of this opus.  We learn by making mistakes.]

I'll close with a brief reiteration of my context.  As the round trip
time of the Internet has gotten worse, the nature of our (locally
generated) traffic has changed.  Only someone desperate or mad would
try to telnet.  Our ftp lacks an automatic retry and it took only a
few "connection timed out"s to make our users abandon file transfer.
The result is a high proportion of mail traffic.  Our usual congestion
is not caused by a few hosts flooding the net with packets (perhaps
because the few hosts we've found doing this were quickly and forcibly
disconnected).  Our usual congestion is the result of a large fraction
of the 200 hosts on an ethernet trying to ship mail through a gateway
with a 9.6Kbit output line.  Each host sends 3 small packets and one big
one, "HELO", "MAIL FROM", "RCPT TO" and "DATA..." (the small packets
are SMTP's fault -- thanks to John Nagle's accumulate-until-the-ack, all
the packets are as big as they can be).  The destinations are usually
different.  I don't know of a congestion algorithm that deals with this
situation but I feel we need one.

  - Van Jacobson

-----------[000060][next][prev][last][first]----------------------------------------------------
Date:      Fri, 14-Nov-86 19:19:15 EST
From:      JNC@XX.LCS.MIT.EDU ("J. Noel Chiappa")
To:        mod.protocols.tcp-ip
Subject:   Re: EGP madness


	Hi. This is a canned message. My apologies for not sending a
personalized reply, but this question gets asked once a month and I got
tired of typing the same message. (An MIT hacker defined 'Hell' as
'answering the same bug report over and over again'.) It was answered most
recently on:

Fri 3 Oct 86 01:48:28-EDT
Fri 31 Oct 86 16:03:42-EST


	You have asked a question about the infamous 'extra hop
problem'. The problem is not caused by EGP, which is telling you
exactly what the gateway you are a neighbour with is doing itself with
packets to given destinations, but the routing protocol (GGP) which is
used by the core gateways among themselves. It predates EGP, was not
designed with the pattern of information flows that you see in EGP in
mind, and is the cause of the problem.
	As a brief example of the problem, if MIT has core gateway A as
an EGP peer, and Berkeley has a peer core gateway B, then there is no
way (using GGP) for A to inform B that to get to MIT it can go direct;
both B and all its clients (e.g.  Berkeley) think they have to go
through A.  This is the cause of the funny routes to places you ought to
be able to get to directly, etc.
	Your gateway is just fine, and it's not EGP's fault either. The
extra hop problem will only be solved when GGP is retired; i.e. when the
PDP11 core gateways are replaced by Butterflys, probably. When GGP is
replaced the problem will magically disappear without any changes to
EGP.

	For a more detailed explanation of the problem, look in the
TCP-IP archive for a message I sent out at Thu 6 Mar 86 18:16:01-EST
which goes into great detail. (No, I do not know how to access the
TCP-IP archives, so don't send me mail asking for it; I'll ignore your
message. If someone from the NIC sends me the appropriate info, I'll
happily insert it here.) Just out of interest, were you on TCP-IP
then?

		Noel
-------

-----------[000061][next][prev][last][first]----------------------------------------------------
Date:      Fri, 14-Nov-86 21:54:41 EST
From:      karn@FLASH.BELLCORE.COM (Phil R. Karn)
To:        mod.protocols.tcp-ip
Subject:   Re:  Yet more on RTTs

I feel that the "round trip timing problem" by itself is a red herring.
It's really a symptom of a larger problem.  A TCP with a very simple RTT
algorithm that always errs on the high side would still perform quite well
if the network didn't drop so many packets. The network wouldn't drop so
many packets if it wasn't being swamped by so many badly designed and
mistuned TCPs.

Last summer in Monterey there was a lot of discussion about vendor
certification and how much hard work it takes to test a protocol
implementation. The thing is, we already HAVE a "validation suite"; it's
called "operational use in the ARPA Internet"!  Furthermore, it has already
revealed some serious problems in some very popular implementations, but the
vendors have yet to fix the damn things (including the maker of the
workstation I'm typing this on). I've seen several (object only) releases of
software for this system come and go since RFC-896 came out and they STILL
don't have the Nagle algorithm yet.  Given how popular this system is, it's
no surprise that the Internet is in such trouble.  I think we should
concentrate on fixing known problems before we invent new ones to solve.

Phil

-----------[000062][next][prev][last][first]----------------------------------------------------
Date:      Sat, 15-Nov-86 13:07:00 EST
From:      SYMALG@YUSOL.BITNET
To:        mod.protocols.tcp-ip
Subject:   Encore Annex or Bridge Terminal Server

Newsgroups: mod.computers.tcp-ip
Subject: Encore Annex or Bridge Terminal Server ?
Reply-To: mike@yetti.UUCP (Mike Clarkson )
Organization: York U. Computer Science


We have a Sun 160 to which we would like to add about 10 terminal lines.
The Sun is mainly a file server for 3 3/50's, and the amount of use of
the terminal ports is expected to be light.  There are other machines on the
Ethernet that I would like to reach with the terminals, but I could always
log into the Sun first, and then Telnet or whatever across the ethernet.

1)      What do I lose by putting my terminals on the ethernet?  For example,
will ^O ^S ^Q all get gobbled rather than passed to Emacs?
Will a terminal server on the ethernet affect paging from the Sun 50's to
the Sun 160?

2)      Besides flexibility, what do I gain by having my terminals on the
ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

3)      What advantages does the Encore Annex have over the Bridge boxes?
I would love to hear from anyone who has one.

4)      Does anyone have the phone number for Encore?  I have the Bridge info.

E-mail to me, and I'll summarize to the net.

Mike Clarkson, yetti!mike.UUCP


Mike Clarkson,            ...!allegra \                 BITNET: mike@YUYETTI or
CRESS, York University,   ...!decvax   \                        SYMALG@YUSOL
4700 Keele Street,        ...!ihnp4     > !utzoo!yetti!mike
North York, Ontario,      ...!linus    /
CANADA M3J 1P3.           ...!watmath /         Phone: +1 (416) 736-2100 x 7767


"...the most inevitable business communications system on the planet."
                                                - ROLM magazine advertisement
 which planet?

-----------[000063][next][prev][last][first]----------------------------------------------------
Date:      Sat, 15 Nov 86 13:07 EST
From:      <SYMALG%YUSOL.BITNET@WISCVM.WISC.EDU>
To:        tcp-ip@sri-nic.arpa
Subject:   Encore Annex or Bridge Terminal Server
Newsgroups: mod.computers.tcp-ip
Subject: Encore Annex or Bridge Terminal Server ?
Reply-To: mike@yetti.UUCP (Mike Clarkson )
Organization: York U. Computer Science


We have a Sun 160 to which we would like to add about 10 terminal lines.
The Sun is mainly a file server for 3 3/50's, and the amount of use of
the terminal ports is expected to be light.  There are other machines on the
Ethernet that I would like to reach with the terminals, but I could always
log into the Sun first, and then Telnet or whatever across the ethernet.

1)      What do I lose by putting my terminals on the ethernet?  For example,
will ^O ^S ^Q all get gobbled rather than passed to Emacs?
Will a terminal server on the ethernet affect paging from the Sun 50's to
the Sun 160?

2)      Besides flexibility, what do I gain by having my terminals on the
ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

3)      What advantages does the Encore Annex have over the Bridge boxes?
I would love to hear from anyone who has one.

4)      Does anyone have the phone number for Encore?  I have the Bridge info.

E-mail to me, and I'll summarize to the net.

Mike Clarkson, yetti!mike.UUCP


Mike Clarkson,            ...!allegra \                 BITNET: mike@YUYETTI or
CRESS, York University,   ...!decvax   \                        SYMALG@YUSOL
4700 Keele Street,        ...!ihnp4     > !utzoo!yetti!mike
North York, Ontario,      ...!linus    /
CANADA M3J 1P3.           ...!watmath /         Phone: +1 (416) 736-2100 x 7767


"...the most inevitable business communications system on the planet."
                                                - ROLM magazine advertisement
 which planet?
-----------[000064][next][prev][last][first]----------------------------------------------------
Date:      Sat, 15-Nov-86 14:51:38 EST
From:      van@LBL-CSAM.ARPA (Van Jacobson)
To:        mod.protocols.tcp-ip
Subject:   Re: Yet more on RTTs

RTTs are not a red herring.  As the specs now stand, RTT and the
associated TCP retransmit algorithm are the *only* congestion
control for 99%+ of the Internet traffic.  The Nagle algorithm is
not for congestion control, it increases the line efficiency so
you are less likely to need congestion control.  This only
postpones the day of reckoning.  We have on the order of 30,000
networked computers sitting behind the Internet backbone and the
number is increasing exponentially.  Long-haul services like
NSFNet supercomputer access are going to increase the traffic
those computers send across the backbone.  There is a factor of
100 difference between number of customers and number of backbone
circuits.  There is a factor of 200 impedance mismatch between
the local nets and the backbone.  With these numbers, congestion
is guaranteed, even with everyone running every algorithm that
John devises. 

The problem could be avoided if there was some way to solve it in
the gateways.  RFC970 proposes one such algorithm.  I started to
implement it but took some data that convinced me it wouldn't
help.  In fact, I couldn't see anything that would help short of
improving the congestion algorithms in the endnodes.  I saw a
useful endnode change, implemented part of it and it worked.  But
to work well it requires more topology information going between
the gateways and the endnodes.  This is undesirable, as is the
thought of changing the tcp in all 1000 of our local computers. 
Thus it seemed worthwhile to continue the discussion in this
forum. 

If congestion problems can be solved in the gateways, we have
quite a bit of time and only need to do trial implementations and
measurements to verify that the proposed algorithms work in real
world traffic.  If problems have to be solved in the endnodes, we
have to implement and verify solutions now, then start leaning
hard on vendors to adopt those solutions.  If we went to the
vendors today, it would be at least a year until we could buy the
fruits of our labor. 

With luck, John Nagle's algorithms, and DCA/NSF infusions of
money into the backbone, we have a year to solve the next set of
problems.  But we still have to solve them.  Telling vendors to
hurry up and market things they should have had yesterday is very
important.  So is figuring out what to tell them tomorrow. 

  - Van

-----------[000065][next][prev][last][first]----------------------------------------------------
Date:      Sat, 15-Nov-86 17:20:34 EST
From:      tim@hoptoad.UUCP (Tim Maroney)
To:        mod.protocols.tcp-ip
Subject:   Prevalence of RDP implementations?

We are currently shifting from Appletalk protocols to Internet protocols
with our remote function protocol, which now runs on top of ATP (gag).  We
want to be able to port quickly to various operating systems.  My question
is whether RDP is quickly catching on.  It's currently listed as a minor
host protocol; is it starting to become a major host protocol?  I would like
to run RFP on top of RDP, which seems perfectly suited to the task, but it
is likely we will use TCP instead if RDP is still fairly obscure.


What about UDP?  Well, I don't feel like adding yet another single-protocol
reliability layer to it!  I think RDP is what UDP should have been in the
first place.

-----------[000066][next][prev][last][first]----------------------------------------------------
Date:      Sun, 16-Nov-86 02:14:38 EST
From:      karn@FLASH.BELLCORE.COM (Phil R. Karn)
To:        mod.protocols.tcp-ip
Subject:   Re: Yet more on RTTs

Okay, you're right. RTTs are not a red herring, because yet ANOTHER
unfixed bug in my Brand X workstation (and widespread on the net)
is the incorrect computation of round trip time from the last transmission
to the first ACK of a sequence number.  Fix this one, throw in the Nagle
algorithm, and set the initial RTT to a reasonable value like 5-10 sec,
and I think we'll be in good shape for quite some time.
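
[The bug Phil describes is the ambiguity of timing a retransmitted segment:
the ACK cannot be matched to a particular copy.  One conservative sketch --
illustrative only, not necessarily the fix any vendor shipped -- simply
discards the ambiguous samples:]

```python
def rtt_sample(send_times, ack_time):
    """RTT sample for an acked segment, or None when retransmission
    makes the sample ambiguous.

    send_times lists every (re)transmission time of the segment.
    Timing from the *last* transmission -- the broken behaviour
    described above -- underestimates RTT whenever the ACK belongs
    to an earlier copy, driving the timer down and provoking yet
    more retransmission.
    """
    if len(send_times) != 1:
        return None                  # retransmitted: skip this sample
    return ack_time - send_times[0]

print(rtt_sample([10.0], 12.5))          # clean segment: sample is 2.5
print(rtt_sample([10.0, 15.0], 15.4))    # retransmitted: None
```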

Phil

-----------[000067][next][prev][last][first]----------------------------------------------------
Date:      Sun, 16-Nov-86 03:29:10 EST
From:      ROODE%BIONET@SUMEX-AIM.Stanford.EDU (David Roode)
To:        mod.protocols.tcp-ip
Subject:   Re:  Peace fullness.

Not only was the WollonGong TCP-IP fully dependent on the
DARPA-funded Berkeley Unix implementation, but the port to
VMS was done by Dave Kashtan at SRI who deserves the
credit for the 'hostile environment' adaptation.
-------

-----------[000068][next][prev][last][first]----------------------------------------------------
Date:      Sun, 16-Nov-86 15:15:00 EST
From:      SRA@XX.LCS.MIT.EDU (Rob Austein)
To:        mod.protocols.tcp-ip
Subject:   EGP madness (net 127)


    Date: Thursday, 13 November 1986  23:20-EST
    From: karels%okeeffe@BERKELEY.EDU (Mike Karels)

    Any address (even an officially assigned one) can be used for the loopback
    with 4.3; it's set just like any other address.  (But now I note in jjw's
    reply that 127 is official; I guess everyone is required to implement it.)

I got the impression that this was more along the lines of
allocating the network number so that nobody would try to use it for
anything else (because there will probably always be broken machines
that leak this address).  I certainly didn't read it as an imperative
that all TCP/IP implementations go out and implement loopback on this
address.

If anybody else read this the way Mike did I guess we need a
clarification from the Number Czar.

--Rob
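
The kind of leak described above is what a gateway address filter has to catch. A minimal sketch of such a check, assuming a simple classful first-octet test; the helper name is hypothetical, not from any gateway's actual code:

```python
# Illustrative check for addresses that should never appear on the
# wire: the class-A loopback net 127 (and net 0, "this network"),
# the sort of packets broken machines leak.

def is_martian(addr: str) -> bool:
    first_octet = int(addr.split(".")[0])
    # Net 127 is reserved for loopback; net 0 means "this network".
    return first_octet in (0, 127)

assert is_martian("127.0.0.1")
assert not is_martian("10.0.0.51")
```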

-----------[000069][next][prev][last][first]----------------------------------------------------
Date:      Sun, 16-Nov-86 18:26:11 EST
From:      hedrick@topaz.rutgers.edu (Charles Hedrick)
To:        mod.protocols.tcp-ip
Subject:   Re:  Encore Annex or Bridge Terminal Server

We use Bridge CS100's extensively.  We normally set them to be transparent.
I.e. ^O ^S ^Q get passed to the host.  We set ^\ to do both XON and XOFF
locally.  (This character doesn't seem to be needed by any of our software.
You can choose another if you prefer.)  ^S takes too long to work through
the network to be useful, so we felt we had to supply some local ability
to pause output.  But the ^S character would be a disaster, since it is
used by Emacs for search.  The problem of terminating output (^O and ^C)
can be solved within the telnet protocol.  Bridge supports the Telnet
synch.  When the host clears its output buffer, it sends an out of
band notification to the server, with an inband mark to say where
discarding data should stop.  We implemented the host end on our
Pyramids.  ^O still doesn't work instantly, but it is good enough for
practical purposes.  
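
The synch described above can be sketched from the receiving side: the urgent notification starts the discard, and the in-band data mark ends it. A hypothetical illustration, not Bridge's or the Pyramid's actual code:

```python
# Receiver-side sketch of the Telnet synch described above: once the
# out-of-band (TCP urgent) notification arrives, input is discarded
# up to the in-band Data Mark (IAC DM) that bounds the flushed data.
# Illustrative only; a real telnetd is considerably more involved.

IAC, DM = 255, 242  # Telnet "interpret as command" and Data Mark

def apply_synch(stream: bytes, urgent_seen: bool) -> bytes:
    """Return the bytes that survive a synch (or all, if no synch)."""
    if not urgent_seen:
        return stream
    out = bytearray()
    discarding = True
    i = 0
    while i < len(stream):
        if (discarding and stream[i] == IAC
                and i + 1 < len(stream) and stream[i + 1] == DM):
            discarding = False  # mark reached: stop discarding
            i += 2
            continue
        if not discarding:
            out.append(stream[i])
        i += 1
    return bytes(out)

survivors = apply_synch(b"stale output" + bytes([IAC, DM]) + b"prompt", True)
# survivors == b"prompt": everything before the mark was flushed
```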

At some point terminal service will put a load on your Ethernet.  But
that point is several thousand users.

If your system is going to be used for heavy timesharing, the host end
of telnet will use some CPU time.  For a couple of users I wouldn't
expect to see an effect.  Also, character echo will slow down as the
system gets loaded, since telnetd has to be scheduled twice for each
character.  On our Pyramids we put telnetd in the kernel, which both
removes the loading effect and removes the echo delay.  We have not
tried this on the Sun, but no doubt we will eventually.

The terminal server is a lot more flexible.  We are tending to use
that for most new terminals.  But it is more complex, and so there
are more things that can go wrong.  Where we have a machine whose
users will always be connected to that one machine, we still use
direct terminals.  But that is increasingly rare.

-----------[000070][next][prev][last][first]----------------------------------------------------
Date:      Sun, 16-Nov-86 18:29:37 EST
From:      MILLS@A.ISI.EDU
To:        mod.protocols.tcp-ip
Subject:   Re: EGP madness (net 127)

In response to the message sent  13 Nov 86 16:20:28 EST (Thu) from brescia@ccv.bbn.com

Marc,

Well, what we have here is the famous Martian, which would be even more famous
if some gateways (not the core gateways) didn't censor them. While I don't fault
the Gateway Censor for not coming down on X-rated packets (I think it's ugly to
have to do that), we get a lot more flotsam way up here on the NSFnet bayous.

Dave
-------

-----------[000071][next][prev][last][first]----------------------------------------------------
Date:      16 NOV 86 23:37-PST
From:      BEN%YMIR.BITNET@WISCVM.WISC.EDU
To:        TCP-IP@SRI-NIC.ARPA
Subject:   please remove me

Thanks for all the informative discussion, but due to time constraints I would
appreciate it if you could REMOVE my name from the TCP-IP mailing list.

                     thanks again,          Ben Staat
                                            ben@ymir
-----------[000072][next][prev][last][first]----------------------------------------------------
Date:      Mon, 17-Nov-86 17:53:40 EST
From:      ospwd@EMORY.ARPA (Peter Day {EUCC})
To:        mod.protocols.tcp-ip
Subject:   TELNET 3270 for PC/IP


The following is a summary of responses to my request for
TELNET 3270 for PC/IP. Essentially, it appears that this will be
available from two universities: Cornell (via Maryland
Distribution) and Berkeley. In addition, FTP Software responded
that they expect to have it by March 1987.

The Berkeley program uses the Ungermann-Bass TCP-PC product, which
is particularly interesting because it not only has a netbios
interface, but also a Name Service facility.

BERKELEY:

Date: Wed, 22 Oct 86 17:39:15 PDT
From: Greg Small <gts%violet.Berkeley.EDU%ucbvax.berkeley.edu@CSNET-RELAY>

Greg Minshall is porting his Unix tn3270 to the PC.   It is based on
Ungermann-Bass's TCP-PC product that runs on their intelligent Ethernet
controller.  Tn3270 interfaces to the Ungermann-Bass extended Netbios
interface through a temporary socket library, but will be converted
to Ungermann-Bass's socket library for the PC.

The extended Netbios interface allows running telnet, ftp, tn3270 and user
socket programs concurrent with IBM PC Network applications.  The Netbios
supports IBM PC Network applications with standard Netbios calls using
TCP/IP for the transport and routing.  The extensions give TCP, IP and
direct Ethernet access.

For more information on tn3270, mail to minshall@opal.Berkeley.EDU.

Greg Small                                           (415)642-5979
Personal Computer Networking & Communications        gts@opal.Berkeley.EDU
214 Evans Hall CFC                                   ucbvax!jade!opal!gts
University of California, Berkeley, Ca 94720         SPGGTS@UCBCMSA.BITNET

Date: 22 Oct 86 20:30:29 PDT (Wed)
From: minshall%opal.Berkeley.EDU%ucbvax.berkeley.edu@CSNET-RELAY

Peter,

	If you buy the UB TCP/IP board, then you can get tn3270
from us.  This is the same (more or less) program that runs under
4.2/4.3 Unix.  We are currently beta-testing it, and we expect to
give it to users (on campus) next week (or so).

	What you would get (from us) is an executable, plus source.
You would need a PC compiler to compile it; we use the MetaWare
compiler.

Greg Minshall
(415)642-0530

CORNELL:

Date: Tue, 21 Oct 86 06:41:56 EDT
From: Scott Brim <swb%devvax.tn.cornell.edu@CSNET-RELAY>

We've had it for quite a while (in color, with user-definable
keymappings, all that sort of stuff), but not for all interfaces.
I've forwarded your mail to the keeper of that code.
							Scott

Date: 04 November 86 20:04 EST
From: RHX@CORNELLC

Peter,  Here's a copy...
        (Forward to anyone who might wish to know.)
     
------------------------------
     
To:  Peter W. Day
From: Dick Cogger
      At Cornell, we have ported the MIT stuff to Aztec C, ported it
in C to the Macintosh, added Omninet drivers, and added a nice 3270
for talking to Wiscnet.  We also have a compatible serial-port version
which works nicely against the 7171 running a modified H19 definition.
On the PC, there is key and color mapping, user selectable.  The Mac
has a rudimentary macro facility which will be added to the PC.  For
both, 3270 has a built-in file transfer which uses a CMS module and
operates via the TCP connection which is up for Telnet.  Very simple
from the user perspective, but not super-high performance.
     
     We plan to submit all of it to the Maryland distribution as soon
as we have the source cleaned up and organized-- current target for
the PC stuff is Thanksgiving, more or less.  We'll be adding drivers
for Appletalk and IBM token-ring, eventually.
                                               -Dick


FTP SOFTWARE:

From jbvb%borax.lcs.mit.edu@CSNET-RELAY Thu Oct 23 15:31:21 1986
Date: Tue, 21 Oct 86 20:01:25 edt
From: "James B. VanBokkelen" <jbvb%borax.lcs.mit.edu@CSNET-RELAY>

FTP Software has a contractual commitment to include a tn3270 in its
PC/TCP package (major extensions to PC/IP by John Romkey, who has left
MIT) by March, 1987.  We hope to beat this deadline by a good deal.

jbvb@ai.ai.mit.edu
James B. VanBokkelen
FTP Software, Inc.
(617) 864-1711

CMU: Did not reply, although John Romkey <romkey@xx.lcs.mit.edu>
listed in his note to pcip-request dated 21 Sep 86 a CMU version
"available to anyone, token ring driver and 3270 emulator available
to IBM ACIS universities."

Rob Warnock called to point out that the key to porting 4.3bsd tn3270
to a system that has telnet (with source) was to change the telnet
on the target system in the same manner as telnet was changed under
4.3bsd for tn3270.

-----------[000074][next][prev][last][first]----------------------------------------------------
Date:      Mon, 17-Nov-86 18:34:01 EST
From:      karels%okeeffe@UCBVAX.BERKELEY.EDU (Mike Karels)
To:        mod.protocols.tcp-ip
Subject:   Re: EGP madness (net 127)

Well, yes, as a matter of fact I've already fixed this bug locally.
The bug and a fix were also reported on egp-people some time back.
I fixed it somewhat differently, in a way which doesn't depend
on the loopback network of 127.  I'm updating the copy on ucbarpa
for anonymous ftp.  The notes and an example configuration file
have also been updated.  The three modified files are in
pub/4.3/egp-update.tar on ucbarpa.berkeley.edu.

		Mike

-----------[000075][next][prev][last][first]----------------------------------------------------
Date:      Mon 17 Nov 86 23:56:03-EST
From:      Dennis G. Perry <PERRY@VAX.DARPA.MIL>
To:        tcp-ip@SRI-NIC.ARPA
Cc:        perry@VAX.DARPA.MIL
Subject:   [Ken Pogran <pogran@ccq.bbn.com>: Recent ARPANET Performance Improv]
Although I don't think we have conquered the things lying beneath the
swamp, I do believe that we have cut thru some of the water lilies.  Please
let me know how we are doing, both good and bad, so we can make this
place a little more bearable.  Below is a report of the latest improvements
to the Arpanet tangle.

dennis
                ---------------

Received: from ccq.bbn.com by vax.darpa.mil (4.12/4.7)
	id AA12250; Wed, 12 Nov 86 16:16:16 est
Message-Id: <8611122116.AA12250@vax.darpa.mil>
Date: Wed, 12 Nov 86 15:23:01 EST
From: Ken Pogran <pogran@ccq.bbn.com>
Subject: Recent ARPANET Performance Improvements
To: Prishivalko@ddn2.arpa, Perry@vax.darpa.mil, Grindle@ddn1.arpa,
        Leonard@ddn1.arpa, ARPANETMGR@ddn1.arpa
Cc: JBurke@cc7.bbn.com, MLevandowski@ccm.bbn.com, RGrenier@ccm.bbn.com,
        Blumenthal@vax.bbn.com, McKenzie@j.bbn.com, Mayersohn@cc5.bbn.com,
        JWiggins@cc5.bbn.com, CGreenleaf@cc5.bbn.com, MPrimak@cc5.bbn.com,
        FSerr@cc5.bbn.com, SCohn@cc5.bbn.com, Hinden@ccv.bbn.com,
        Bartlett@cct.bbn.com, BDlugos@ccy.bbn.com, CStein@ccq.bbn.com,
        STaylor@ccb.bbn.com, pogran@ccq.bbn.com

As you know, over the past few weeks a number of steps have
been taken to alleviate the extreme congestion that has plagued
the ARPANET in recent months.  BBNCC is pleased to be able to
report that, as a result of these steps, there has been a
significant improvement in network performance.  

In a message dated 30 September, Jeff Mayersohn recommended eight
actions to improve ARPANET performance.  Of those, five have been
implemented at this time.  They are:

1.  TAC 113 has been installed on ARPANET TACs, reducing
    character-at-a-time traffic.

2.  Network parameters have been adjusted to provide more even
    sharing of cross-country bandwidth.

3.  Wideband Network gateways have been modified to favor the
    Wideband Net over the ARPANET for cross-country traffic
    between some LANs.

4.  Additional network performance statistics were collected in
    early October (as reported earlier).

5.  A link has been restored between the Purdue and Wisconsin
    nodes.

In addition, and most significantly, a 56kb line was put into
service last week between USC and CIT.  This line effectively
bypasses a 19.2kb line that had created a bottleneck in one of
the ARPANET's three cross-country paths, as reported in Jeff's
message of 8 October.

Finally, the ARPANET was upgraded to PSN 6, and Mailbridges were
upgraded to MB1008.1.

On Friday, 7 November, all of these changes were in place
together for the first time.  Performance measurement data taken
on Friday and on Monday, 10 November indicate significant
improvement in three key measures: mean round trip delay, mean
number of hops taken by data packets, and number of
performance-related traps received by the ARPANET monitoring
center.  

Mean round trip delay and mean number of hops have returned
approximately to their June 1986 levels, and are down
significantly from levels measured in early October (Mean round
trip delay, in particular, has dropped from 1215 ms on 2 October
and 625 ms on 3 October to 298 ms on 7 November).  "Traps"
reported by network nodes to the Monitoring Center indicating
congestion and other performance problems have decreased by more
than an order of magnitude -- from 80K-150K per day in October to
5K-10K on November 7 and 10.

From this data we can conclude that the ARPANET is past its most
immediate crisis.  However, the network has little reserve
capacity, and performance is still critical.  Some congestion
still occurs, and loss of a single trunk or node could still
bring the network into a very congested state.  Thus, further
steps to improve network performance, such as the provision of
additional cross-country bandwidth and additional processing
capacity at several key nodes, must still be taken in order to
restore the ARPANET to long-term good health.

Attached to this message is a report from John Wiggins of our
Network Analysis staff detailing the results of our latest
performance measurements.

Regards,
 Ken Pogran
 BBNCC


-------
From:    John Wiggins (BBN 5/134  617-497-3390) <jwiggins@cc5.bbn.com>
Date:    11 Nov 86 18:37:00 EST (Tue)
Subject: Arpanet Congestion Analysis (PART THREE)

To:      mayersohn@cc5.bbn.com
cc:      jwiggins@cc5.bbn.com, pyle@cc5.bbn.com, cvenkate@cc5.bbn.com, 
	 cgreenleaf@cc5.bbn.com, fserr@cc5.bbn.com, scohn@cc5.bbn.com, 
	 mprimak@cc5.bbn.com


						November 11, 1986
Jeff,

I am very pleased to inform you of significant improvement in network
performance as a result of the changes that have been made during the
last few weeks.  These changes have included: (1) removal of the 19.2
kb link {between the two USC nodes} from a major trans-continental
path, (2) increasing the propagation delays on crucial links, (3)
keeping the giveback timer set at two slow ticks, instead of 8 ticks,
(4) installation of TAC 113 to take advantage of "word-at-a-time"
optimizations, (5) reconnection of link 94 (between WISC94 and
PURDU37), and (6) installation of PSN 6 to remove faulty microcode
from the network. 

The Arpanet is still in a critical state, with little or no reserve
capacity.  So, by no means should this good news be construed to imply
a lessening of the urgency of deploying our longer term
recommendations.  In particular, additional trans-continental trunking
bandwidth is dearly needed. 

Here is a comparison of some important network-wide cumstats data.
Note that the major topological difference between October and June is
that the 19.2kb link was connected to a stub in June, but was part of
one of the three trans-continental paths during the October
collections.  The November data include our recommended by-pass of the
19.2kb link with a new 56kb link connecting CIT54 and US121.  At this
time, the 19.2kb link remains in "backup" with a very high configured
propagation delay; this was done in an attempt to force traffic away
from the 19.2kb link unless the new 56kb link is down. 

The data from each day are averaged over the 6-hour period from 8:00
to 14:00 EST.  On October 3, the propagation delays for three links
were increased so that routing would report the maximum delay for a
single hop, approximately 1.6 seconds.  This is the major difference
between the two October collections. 

                             6-hour Periods from 9:00 to 15:00 EDT
 			    June 11      Oct. 2      Oct. 3     Nov. 7

FROM HOSTS
msgs/sec                       218         152         181        185
pkts/sec                       293         209         262        254

mean round trip delay (sec)    312        1215         625        298

OUT ALL CHANNELS (trunks)
data pkts/sec                 1044        1009        1189       1000
ctl.  pkts/sec                 983         907        1136        898
total pkts/sec                2028        1916        2325       1898
internode throughput (kb/s)    206         182         228        208
utilization (data only)       .094        .094        .113       .090
utilization (incl. overhead)  .213        .208        .245       .173

routing updates/sec           1.67        2.40        2.44       1.45

TRAFFIC DISTRIBUTION

'min-hop' msg_weighted
     mean path                2.75        3.51        3.54       3.35

data pkts/sec out trunks
     divided by pkts/sec
     from hosts               3.56        4.83        4.54       3.93


This last quantity would be the mean number of hops for data packets
from hosts in the absence of retransmissions.  It would increase with
retransmissions, as well as with increased real path lengths.  The
"min-hop msg_weighted mean path" would increase from a real
re-distribution of offered load, and also includes some contribution
from the additional hop on the southern trans-continental route for
the October and November data. 
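
That last row is simply the trunk data-packet rate divided by the host packet rate from the table above, which can be checked directly:

```python
# Checking the "data pkts/sec out trunks divided by pkts/sec from
# hosts" row against the rates reported in the table above.
host_pkts  = {"Jun 11": 293,  "Oct 2": 209,  "Oct 3": 262,  "Nov 7": 254}
trunk_pkts = {"Jun 11": 1044, "Oct 2": 1009, "Oct 3": 1189, "Nov 7": 1000}

for day in host_pkts:
    ratio = trunk_pkts[day] / host_pkts[day]
    print(f"{day}: {ratio:.2f} trunk data packets per host packet")
```

The quotients reproduce the reported values (3.56, 4.83, 4.54, and approximately 3.93).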

Our daily monitoring of PSN traps processed by the NOC has also led
us to conclude that our recommendations have improved network performance.
Before the correct giveback timer setting and the new 56kb links were
in the network, we would observe between 80,000-150,000 performance
related traps on a typical weekday.  On Friday, 7 Nov 86, all of our
short-term recommendations were in place for the first time.  On that
day, we observed less than 5000 performance related traps.  Yesterday,
Mon 10 Nov 86, we processed just 10,000 of these PSN traps. 

We still need the recommended additional trunking.  Although the two
days of data we have so far look much better, they still indicate problems in
network performance.  The number of traps is still fairly high.  The
loss of a single trunk would bring the network into a very congested
state.  In conclusion, there is little reserve capacity.  We hope to
see our other recommendations implemented ASAP. 

						    Sincerely,
						   _John Wiggins_
-------
-------
-----------[000076][next][prev][last][first]----------------------------------------------------
Date:      18 Nov 1986 05:53-EST
From:      CERF@A.ISI.EDU
To:        van@LBL-CSAM.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Yet more on RTTs
Van and Dan,

I am in agreement that performance issues must not be swept under the rug.
The better we can solve them, the longer the product lifetime because the
ISO implementations will have to do at least as well for anyone to switch
over. I'm not against ISO, just determined that the switch to that suite,
when it comes, is with the best performance possible. In the meantime, the
TCP/IP suite must similarly do the best possible for the client.

I would be happy to have Van make a presentation in March 87 if he is
interested. Actually, I still have in mind a panel of people concerned
with commercial use of TCP/IP - I have a set of questions to ask
concerning the motivations, requirements and expectations of these
commercial users.

Vint
-----------[000077][next][prev][last][first]----------------------------------------------------
Date:      Tue, 18 Nov 86 17:11:36 -0500
From:      Craig Partridge <craig@loki.bbn.com>
To:        hoptoad!tim@lll-crg.arpa
Cc:        tcp-ip@sri-nic.arpa
Subject:   re: Prevalence of RDP implementations?

Tim,

    I know of only two implementations, neither of which is in general
circulation.  I'd be interested in hearing of others.

    The first was done by Bob Walsh while he was at BBN, and is part
of the BBN TCP/IP implementation.

    The second is the one I'm currently working on (despite my BBN affiliation
I'm doing this independently of Walsh's implementation, as work towards an
M.Sc. degree at Harvard).  Right now I'm worrying about benchmarking and
evaluating the implementation and, as yet, have given no real thought
to questions of distributing it.

    Both implementations are for the 4.2/4.3bsd distributions.

Craig
-----------[000078][next][prev][last][first]----------------------------------------------------
Date:      Tue, 18-Nov-86 19:10:18 EST
From:      braden@ISI.EDU (Bob Braden)
To:        mod.protocols.tcp-ip
Subject:   Re:  Prevalence of RDP implementations?

I believe one can say that RDP is not widely implemented or used.  In my
opinion, it should be regarded as an experimental protocol.  There are
two good ideas in RDP (selective retransmission and packet-orientation),
which are being incorporated into more recent experimental protocols
-- eg VMTP and NETBLT.  Note that packet orientation is also a feature we
will all obtain eventually from TP4. 

I have also heard some opinions (from within BBN as well as elsewhere)
that there are some not-so-good ideas in RDP.  In my opinion (again), if
RDP were to become seriously used, it  would need a cycle of improvement.
It would not seem to be a big win over TCP in most applications.

Bob Braden

 

-----------[000079][next][prev][last][first]----------------------------------------------------
Date:      Tue, 18 Nov 86 16:20:02 MEZ
From:      RZ33%DKAUNI11.BITNET@WISCVM.WISC.EDU
To:        TCP-IP@SRI-NIC.ARPA
Subject:   NOTE from RZ33
Date: 18 November 1986, 16:17:51 MEZ
From: Dietrich Eckert           (0721) 608-2066      RZ33     at DKAUNI11
To:   TCP-IP at SRI-NIC

subject : ... please add me to the TCP-IP list ...


many thanks and kind regards

D.Eckert   RZ33 at DKAUNI11   EARN - nodeadministrator
-----------[000080][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19-Nov-86 04:55:42 EST
From:      jh@tut.UUCP (Juha Heinänen)
To:        mod.protocols.tcp-ip
Subject:   TCP/IP for TOPS-20

A neighboring university has got two DEC 2060s and would like to run
TCP/IP on them.  The problem is that Digital doesn't support TCP/IP in
Europe and refuses to sell the product.

Are there any sources other than Digital from whom they could get
TCP/IP for TOPS-20?

	Juha Heinanen
	Tampere Univ. of Technology
	Finland

-----------[000081][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19 Nov 86 13:10:17 -0500
From:      Craig Partridge <craig@loki.bbn.com>
To:        walsh@harvard.harvard.edu
Cc:        tcp-ip@sri-nic.arpa
Subject:   re: Prevalence of RDP implementations?

> I understand that you are at least referring to my code and have looked
> at it in writing your "independent" implementation.

Bob,

    I've used your implementation for conformance testing (comparing
results of checksum routines, opening connections between implementations,
etc).  So yes, I've leveraged off your work a bit.

    However, for a variety of reasons, I did write my implementation
from scratch -- not one line was taken from yours.  I think that qualifies
as independent, without the quotation marks.

    It seemed to me that using another implementation for conformance testing
was only reasonable practice and did not require explicit acknowledgement.
If you feel somehow slighted, my apologies, that was certainly not my intent.

Craig
-----------[000082][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19 Nov 86 13:11:40 CST
From:      Linda Crosby <lcrosby@ALMSA-1.ARPA>
To:        TCP-IP@SRI-NIC.ARPA
Subject:   TCP-IP/ETHERNET QUESTION

     I am in the process of establishing a TCP/IP network that will
run on Broadband ethernet.  The prototype installation will consist of
4 mainframe computers (VAX 780) and 8 or 16 users using IBM-PC clones.  The
highwater mark could be as many as 8 mainframe computers and 400 IBM-PC
clone users.

     I would like to know if 400 users, as described above, running the
standard suite of DOD protocols (TCP/IP, TELNET, FTP, SMTP) can be
supported on a single ethernet ?  How many connections would be
reasonable before the network begins to degrade ?  Is anyone familiar
with a broadband ethernet, similar to what we are proposing, that is
configured with 400 connections ?  Is this reasonable ?

     I would like to hear from anyone who has covered this ground
before.  A successful installation elsewhere would be a big
confidence builder.

Linda J. Crosby
Technical Liaison
ALMSA-1
(LCROSBY@ALMSA-1)
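
As a rough feel for the load asked about above, a back-of-envelope worst case, in which every character crosses the wire in its own minimum-size frame and is echoed back; all per-user figures here are assumptions for illustration, not measurements:

```python
# Back-of-envelope estimate of aggregate interactive load from 400
# telnet users on a 10 Mb/s Ethernet.  The per-user figures are
# assumed for illustration, not measured.

USERS = 400
CHARS_PER_SEC = 5            # assumed typing + echo rate per user
HDR_BITS = (14 + 20 + 20 + 4) * 8   # Ether + IP + TCP headers + FCS
ETHERNET_BPS = 10_000_000

# Worst case: character-at-a-time, one char per packet, echoed back
# (x2), with the Ethernet minimum frame of 64 bytes on the wire.
frames_per_sec = USERS * CHARS_PER_SEC * 2
bits_per_frame = max(64 * 8, HDR_BITS + 8)
load = frames_per_sec * bits_per_frame / ETHERNET_BPS
print(f"~{load:.1%} of a 10 Mb/s Ethernet")  # roughly 20% in this worst case
```

Line-at-a-time operation or Nagle-style batching would cut this figure sharply, which is consistent with the "several thousand users" answer given elsewhere in this digest.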

-----------[000083][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19-Nov-86 12:34:18 EST
From:      walsh@HARVARD.HARVARD.EDU (Bob Walsh)
To:        mod.protocols.tcp-ip
Subject:   re: Prevalence of RDP implementations?

Craig,

I understand that you are at least referring to my code and have looked
at it in writing your "independent" implementation.

bob

-----------[000086][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19-Nov-86 20:38:29 EST
From:      LYNCH@A.ISI.EDU (Dan Lynch)
To:        mod.protocols.tcp-ip
Subject:   Re: [Ken Pogran <pogran@ccq.bbn.com>: Recent ARPANET Performance Improv]

As a current user of TACs and hosts on both Milnet and Arpanet and as a
former provider of remote timesharing services across these networks
I can say that things have indeed gotten a lot better in the past few weeks.

I thank those who have made this happen and second (third?) the urging from
those folks who know what all this looks like at the bottom to get more
resources in place soon.  We are riding on the hairy edge now that
it has been tuned.  No more such miracles are forthcoming.

But, I thank thee for this one...

Dan
-------

-----------[000087][next][prev][last][first]----------------------------------------------------
Date:      19 Nov 1986 20:38:29 EST
From:      Dan Lynch <LYNCH@A.ISI.EDU>
To:        Dennis G. Perry <PERRY@VAX.DARPA.MIL>
Cc:        tcp-ip@SRI-NIC.ARPA, LYNCH@A.ISI.EDU
Subject:   Re: [Ken Pogran <pogran@ccq.bbn.com>: Recent ARPANET Performance Improv]
As a current user of TACs and hosts on both Milnet and Arpanet and as a
former provider of remote timesharing services across these networks
I can say that things have indeed gotten a lot better in the past few weeks.

I thank those who have made this happen and second (third?) the urging from
those folks who know what all this looks like at the bottom to get more
resources in place soon.  We are riding on the hairy edge now that
it has been tuned.  No more such miracles are forthcoming.

But, I thank thee for this one...

Dan
-------
-----------[000088][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19-Nov-86 22:02:58 EST
From:      ddp#@ANDREW.CMU.EDU (Drew Daniel Perkins)
To:        mod.protocols.tcp-ip
Subject:   Re: TELNET 3270 for PC/IP

Sorry for not replying, but I didn't want to get you excited for nothing.
Here's the good news.  Yes, we do have a working version of tn3270 for PC/IP.
It emulates a 3278 on a mono card, and it has full color support and emulates
a 3279 on a color card.  It also has a key redefinition facility.  We are
using it here at CMU and it works great.   Now here's the bad news.
Unfortunately, the 3270 emulator was done by someone in IBM and is owned by
IBM/ACIS.  They won't let me distribute it.  I also have a driver for the IBM
token ring card.  Unfortunately, we did it with IBM funding, so they own the
code and won't let me distribute it.  Both of these were done on top of my
Microsoft C version of PCIP, which fortunately was done without IBM funds and
is freely distributable to anyone who wants it.  IBM is putting all their
eggs in the U. of Md. basket and hopes that they will soon be distributing both
as an official IBM/ACIS-supported product.  If you (any universities anyway)
would like to get my software NOW, PLEASE bang on your local IBM marketing
people's heads to get them to allow me to distribute it.  Like most
university software, we won't officially support it but I do usually fix the
bugs as soon as I hear about them.

Drew

-----------[000089][next][prev][last][first]----------------------------------------------------
Date:      Thu, 20-Nov-86 04:41:51 EST
From:      craig@LOKI.BBN.COM (Craig Partridge)
To:        mod.protocols.tcp-ip
Subject:   re: Prevalence of RDP implementations?


> I understand that you are at least referring to my code and have looked
> at it in writing your "independent" implementation.

Bob,

    I've used your implementation for conformance testing (comparing
results of checksum routines, opening connections between implementations,
etc).  So yes, I've leveraged off your work a bit.

    However, for a variety of reasons, I did write my implementation
from scratch -- not one line was taken from yours.  I think that qualifies
as independent, without the quotation marks.

    It seemed to me that using another implementation for conformance testing
was only reasonable practice and did not require explicit acknowledgement.
If you feel somehow slighted, my apologies, that was certainly not my intent.

Craig

-----------[000090][next][prev][last][first]----------------------------------------------------
Date:      Thu, 20-Nov-86 13:12:00 EST
From:      nunn@NBS-VMS.ARPA ("NUNN, JOHN C.")
To:        mod.protocols.tcp-ip
Subject:   Need source for hostname server/daemon


Can someone tell me where I might find the source for a hostname
daemon/server (RFC953) for 4.xBSD?

	Thanks,
	John <nunn@nbs-vms.arpa>
------
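
RFC 953 puts the hostname server on TCP port 101, answering queries such as
"HNAME <name>" out of an RFC 952-format host table (the NIC's HOSTS.TXT).
The table-parsing half of such a daemon is the simple part; a rough sketch
in Python rather than 4.xBSD C (the function name and the sample entry are
invented for illustration, not taken from any real host table):

```python
def parse_host_table(text):
    """Build a name -> first-address map from RFC 952 host-table entries:

    HOST : ADDR[,ADDR...] : NAME[,NICKNAME...] : CPUTYPE : OPSYS : PROTOCOLS :
    """
    hosts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("HOST"):
            continue                      # skip NET/GATEWAY entries and blanks
        fields = [f.strip() for f in line.split(":")]
        addresses = [a.strip() for a in fields[1].split(",")]
        for name in fields[2].split(","):
            # host names are case-insensitive; map each name and nickname
            hosts[name.strip().upper()] = addresses[0]
    return hosts


SAMPLE = "HOST : 26.0.0.73 : EXAMPLE-HOST.ARPA,EXAMPLE : VAX-11/780 : UNIX : TCP/TELNET,TCP/FTP :"
table = parse_host_table(SAMPLE)
# both the primary name and the nickname resolve to the first address
```

The network side of the daemon is then just an accept/read/reply loop
around this map.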

-----------[000091][next][prev][last][first]----------------------------------------------------
Date:      20 Nov 86 13:12:00 EST
From:      "NUNN, JOHN C." <nunn@nbs-vms.ARPA>
To:        "tcp-ip" <tcp-ip@sri-nic.arpa>
Subject:   Need source for hostname server/daemon

Can someone tell me where I might find the source for a hostname
daemon/server (RFC953) for 4.xBSD?

	Thanks,
	John <nunn@nbs-vms.arpa>
------
-----------[000092][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21-Nov-86 13:54:31 EST
From:      OLE@SRI-NIC.ARPA (Ole Jorgen Jacobsen)
To:        mod.protocols.tcp-ip
Subject:   New files in NETINFO:

Folks,
	Attached is an updated copy of the RFC sets I sent out recently.
This file is now kept in [SRI-NIC.ARPA]NETINFO:RFC-SETS.TXT, is available
via Anonymous FTP and will be updated when appropriate. If you do not 
have access to FTP, send a message to me or NIC@SRI-NIC.ARPA requesting
the file. I do not plan to send the file to the entire list again after
this.

	Also attached is a new file: [SRI-NIC.ARPA]NETINFO:TCP-IP-BIB.TXT
which contains a bibliography of recent articles pertaining to TCP/IP,
X.25, TP-4, and other standards.  

Cheers,
	Ole
-------------------------------------------------------------------------

[ NETINFO:RFC-SETS.TXT ]                                       [ 11/86, OJJ ]

                                   RFC SETS


                        DDN Network Information Center
                         SRI International, Room EJ291
                             333 Ravenswood Avenue
                             Menlo Park, CA 94025
                       (800) 235-3155 or (415) 859-3695
                               NIC@SRI-NIC.ARPA



If  you are ordering RFCs individually, the attached sets of RFCs can guide you
in ordering a complete list of RFCs by topic, such as  "mail"  and  "gateways".
Related  RFCs  are  grouped  together  within  each  set.  Call the DDN Network
Information Center if you need assistance in ordering RFCs.    RFCs  which  are
marked with an asterisk (*) are not included in the 1985 DDN Protocol Handbook.



                                  MAJOR RFCs

       RFC-791      Internet Protocol (IP)
       RFC-792      Internet Control Message Protocol (ICMP)
       RFC-793      Transmission Control Protocol (TCP)
       RFC-768      User Datagram Protocol (UDP)
       RFC-854      Telnet Protocol (see also many Telnet Options
                    in the RFC index)
       RFC-959      File Transfer Protocol (FTP)
       RFC-821      Simple Mail Transfer Protocol (SMTP)
       RFC-822      Standard for the Format of ARPA Internet Text Messages
       RFC-990      Assigned Numbers
       RFC-991      Official ARPA Internet Protocols



                                   MAIL RFCs

       RFC-821      Simple Mail Transfer Protocol (SMTP)
       RFC-822      Standard for the Format of ARPA Internet Text Messages
       RFC-886 *    Proposed Standard for Message Header Munging
       RFC-915 *    Network Mail Path Services
       RFC-934 *    Proposed Standard for Message Encapsulation
       RFC-937      Post Office Protocol Version 2
       RFC-974 *    Mail Routing and the Domain System
       RFC-976 *    UUCP Mail Interchange Format Standard
       RFC-987 *    Mapping between X.400 and RFC 822



                            NAMING AND DOMAIN RFCs

       RFC-882 *    Domain Names - Concepts and Facilities
       RFC-883      Domain Names - Implementation Specification
       RFC-920 *    Domain Requirements
       RFC-921 *    Domain Name System Implementation Schedule - Revised
       RFC-973 *    Domain System Changes and Observations
       RFC-974 *    Mail Routing and the Domain System
       RFC-952      Internet Host Table Specification
       RFC-953      Hostnames Server



                                 GATEWAY RFCs

       IEN-109 *    How to build a gateway
       RFC-792      Internet Control Message Protocol (ICMP)
       RFC-823      The DARPA Internet Gateway (GGP)
       RFC-890 *    Exterior Gateway Protocol Implementation Schedule
       RFC-904      Exterior Gateway Protocol Formal Specification (EGP)
       RFC-911 *    EGP Gateway under Berkeley UNIX 4.2
       RFC-975 *    Autonomous Confederations
       RFC-985 *    Requirements for Internet Gateways -- Draft



                     LOCAL AREA, SUBNET and BROADCAST RFCs

       RFC-894      A Standard for the Transmission of IP Datagrams
                    over Ethernet Networks
       RFC-895      Standard for the Transmission of IP Datagrams over
                    Experimental Ethernet Networks
       RFC-948      Two Methods for the Transmission of IP Datagrams over
                    IEEE 802.3 Networks
       RFC-826      An Ethernet Address Resolution Protocol
       RFC-903      A Reverse Address Resolution Protocol
       RFC-925 *    Multi-LAN Address Resolution Protocol
       RFC-917 *    Internet Subnets
       RFC-940 *    Toward an Internet Standard Scheme for Subnetting
       RFC-950 *    Internet Standard Subnetting Procedure
       RFC-919 *    Broadcasting Internet Datagrams
       RFC-922 *    Broadcasting Internet Datagrams in the Presence of Subnets
       RFC-947 *    Multi-network Broadcasting within the Internet
       RFC-966 *    Host Groups: A Multicast Extension to the Internet Protocol
       RFC-988 *    Host Extensions for IP Multicasting



                                BACKGROUND RFCs

       IEN-48       The Catenet Model for Internetworking
       IEN-137 *    On Holy Wars and a Plea for Peace
       IEN-140 *    Mutual Encapsulation of Internetwork Protocols
       RFC-896 *    Congestion Control in IP/TCP
       RFC-970 *    On Packet Switches with Infinite Storage
       RFC-813      Window and Acknowledgement Strategy in TCP
       RFC-814      Name, Addresses, Ports, and Routes
       RFC-815      IP Datagram Reassembly Algorithms
       RFC-816      Fault Isolation and Recovery
       RFC-817      Modularity and Efficiency in Protocol Implementation
       RFC-871 *    A Perspective on the ARPANET Reference Model
       RFC-872 *    TCP-ON-A-LAN
       RFC-873 *    The Illusion of Vendor Support
       RFC-874 *    A Critique of X.25
       RFC-875 *    Gateways, Architectures, and Heffalumps
       RFC-980 *    Protocol Documentation Order Information



                        TOWARDS INTERNATIONAL STANDARDS

       RFC-905 *    ISO Transport Protocol Specification
       RFC-942 *    Transport Protocols for Department of Defense Data Networks
       RFC-939 *    Executive Summary of the NRC Report on Transport Protocols
                    for Department of Defense Data Networks
       RFC-945 *    DoD Statement on the NRC Report
       RFC-983 *    ISO Transport Services on Top of the TCP
       RFC-941 *    Addendum to the Network Service Definition Covering
                    Network Layer Addressing
       RFC-987 *    Mapping between X.400 and RFC 822
       RFC-926 *    Protocol for Providing the Connectionless-Mode
                    Network Services

-------------------------------------------------------------------------

[ NETINFO:TCP-IP-BIB.TXT ]                                [ 11/86, OJJ ]



                              BACKGROUND READING



                        DDN Network Information Center
                         SRI International, Room EJ291
                             333 Ravenswood Avenue
                             Menlo Park, CA 94025
                       (800) 235-3155 or (415) 859-3695
                               NIC@SRI-NIC.ARPA



The  attached  bibliography  of recent articles pertaining to TCP and IP, X.25,
the Transport Protocol (TP-4), OSI and other standards was compiled by the  DDN
Network Information Center (NIC) as a background reading list for vendors.  The
bibliography cites articles, mostly from the open  literature,  representing  a
variety  of  viewpoints.   It has not been sanctioned by any government agency,
nor does it contain references to the Requests for Comments (RFCs).    The  NIC
does not provide copies of these articles because they are readily available in
the open literature.  The NIC has copies of the DDN Protocol Handbook, the  RFC
index,  and  the OSD (Office of the Secretary of Defense) directives pertaining
to the DoD protocol suite.

-------------------------------------------------------------------------------



Bolt Beranek and Newman, Inc.  Features of internetwork protocol [Draft].
    Washington, DC: National Bureau of Standards, Inst. for Computer Sciences
    and Technology; 1980 July; Rpt. No. ICST/HLNP-80-8. 67 p.

Burruss, J.   Features of the transport and session protocols [Draft report].
    Washington, DC: National Bureau of Standards, Inst. for Computer Sciences
    and Technology; 1980 March; Rpt. No. ICST/HLNP-80-1 and BBN Rpt. No. 4361.
    71 p.

Cashin, J.  DDN answers the protocol call. Software News. 4(12): 16-17;
    1984 December.

Cerf, V.; Cain, E.  DoD Internet architecture model.  Comput. Networks. 7(5):
    307-318; 1983 October.

Cerf, V.; Lyons, R.E.  Military requirements for packet switched networks and
    their implications for protocol standardization. Comput. Networks. 7(5):
    293-306; 1983 October.

Comer, D.; Korb, J.T.  CSNET protocol software: The IP-to-X.25 interface.
    Communications Architectures and Protocols; SIGCOMM '83 Symposium; 1983
    March 8-9; Austin, Tx. New York: Association for Computing Machinery;
    1983: 154-159.

Eslam, E.S.  Defense Data Network hits its stride. Telecommunications. 20(5):
    121-123; 1986 May.

Estrin, J.  Networking standards end confusion. Unix/World. 3(10): 26-31;
    1986 October.

Estrin, J.; Carrico, W.  TCP/IP protocols address LAN needs. Mini-Micro Syst.
    19(7): 111-119; 1986 May.

Groenbaek, I.  Conversion between the TCP and ISO transport protocols as a
    method of achieving interoperability between data communications systems.
    IEEE J. Sel. Areas Commun. SAC4(2): 288-296; 1986 March.

Groenbaek, I.  TCP and ISO transport service: A brief description and
    comparison; The Hague, Netherlands: SHAPE Technical Center; 1984 February;
    STC TM-726. 37 p.

Grossman, D.B.  Comments on "Congestion control in TCP/IP internetworks".
    Comput. Commun. Rev. 15(2): 3-7; 1985 April/May.

Haverty, J.; Tauss, G.  How good is TCP/IP? Gov. Data Syst. 15(3): 54-58;
    1986 April-May.

Haverty, J.; Gurwitz, R.  Protocols and their implementation: A matter of
    choice. Data Commun. 12(3): 153-166; 1983 March.

Herman, J.G.; McQuillan, J.M.  How to expand and modernize a global network.
    Data Commun. 14(13): 171-190; 1985 December.

Horwitt, E.  Military acts to speed OSI. Computerworld. 20(33): 1-4; 1986
    August 18.

Horwitt, E.  OSI substitute lures net users. Computerworld. 20(30): 1-8; 1986
    July 28.

Ladermann, D.  Not yet time for trusty TCP/IP to be shunted aside. Gov.
    Comput. News. 5(19): 46-47; 1986 September 26.

Lynch, C.A.  Protocols in perspective. Bull. Am. Soc. Inf. Sci. 11(6): 9-11;
    1985 August.

Moskowitz, R.A.  TCP/IP: Stairway to OSI. Computer Decisions. 18(9): 50-51;
    1986 April 22.

Nagle, J.  Congestion control in IP/TCP internetworks. Comput. Commun. Rev.
    14(4): 11-17; 1984 October.

National Bureau of Standards, Inst. for Computer Sciences and Technology.
    Computer networks program. Washington, DC: NBS-ICST; 1986 February. 36 p.

National Bureau of Standards, Inst. for Computer Sciences and Technology.
    Implementation guide for ISO transport protocol. Washington, DC: NBS-ICST;
    1985 December. ICST/SNA-85-18. 90 p.

National Bureau of Standards, Inst. for Computer Sciences and Technology.
    Military supplement to ISO transport protocol. Washington, DC: NBS-ICST;
    1985 December. ICST/SNA-85-17. 31 p.

Nelson, S.  DOD net uses unique protocols. Gov. Comput. News. 4: 88; 1985
    March 8.

Nelson-Rowe, L.  National Bureau of Standards OKs first U.S. E-mail
    specification. Commun. Week: 16; 1986 August 18.

Postel, J.B.  Internetwork applications using the DARPA protocol suite.
    Marina del Rey, CA: University of Southern California, Information
    Sciences Inst.; 1985 April; ISI/RS-85-151. 13 p.

Postel, J.B.  Internetwork protocol approaches. In: Tutorial: Computer
    communications: Architectures, protocols, and standards; By Stallings,
    W. Silver Spring, MD: IEEE Computer Society Press; 1985: 223-230.

Rauch-Hindin, W.  Communication standards: OSI is not a paper tiger. Systems
    and Software. 4(3): 64-86; 1985 March.

Rose, M.T.  Comments on "Comments on 'Congestion control in TCP/IP
    internetworks'" or The Holy Wars begin again. Comput. Commun. Rev. 15(5):
    2-9; 1985 October/November.

Rudin, H.  Informal overview of formal protocol specification. IEEE Commun.
    Mag. 23(3): 46-52; 1985 March.

Santos, P.J. Jr.  (Comments)2  on "Congestion control in IP/TCP internetworks".
    Comput. Commun. Rev. 15(3): 3-5; 1985 July/August.

Selvaggi, P.S.  Department of Defense data protocol standardization program.
    Comput. Networks. 7(5): 319-328; 1983 October.

Sirbu, M.A.; Zwimpfer, L.E.  Standards setting for computer communication:
    The Case of X.25. IEEE Commun. Mag. 23(3): 35-45; 1985 March.

Stallings, W.   Can we talk?  Datamation. 31(20): 101-106; 1985 October 15.

Stallings, W.   Primer: Understanding transport protocols. Data Commun. 13(11):
    201-215; 1984 November.

Tauss, G.; Kane, J.   Agencies handle data transfer with packet tech. Gov.
    Comput. News. 5: 37-38; 1986 January 17.

Tully, J.   XNS and TCP/IP protocols on Ethernet. Networks 85: Proceedings of
    the European Computer Communications Conference; 1985; London. Pinner,
    England: Online Publications; 1985: 103-112.

Wood, H.M.  Network protocol standards: The U.S. government approach.
    J. Telecommun. Networks. 1(2): 189-190; 1982 June.

Zhang, L.  Why TCP timers don't work well.  Communications Architectures and
    Protocols; SIGCOMM '86 Symposium; 1986 August 5-7; Stowe, VT: Association
    for Computing Machinery; 1986.
-------

-----------[000093][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21-Nov-86 14:59:38 EST
From:      egisin%watmath.waterloo.edu@RELAY.CS.NET.UUCP
To:        mod.protocols.tcp-ip
Subject:   Submission for mod-protocols-tcp-ip

Path: watmath!egisin
From: egisin@Math.Waterloo.EDU (Eric Gisin @ University of Waterloo)
Newsgroups: mod.protocols.tcp-ip,comp.protocols.tcp-ip,comp.unix.wizards
Subject: Symmetric TCP connection in 4.2 BSD
Message-ID: <3487@watmath.UUCP>
Date: 21 Nov 86 19:59:20 GMT
Sender: egisin@watmath.UUCP
Lines: 21

I'm trying to develop an IP application under 4.2 BSD
that requires a symmetric TCP connection, as opposed
to the asymmetric connection commonly used in the client/server scheme.
The application is a RSCS line driver.

I was initially going to use UDP in place of BISYNC,
but BISYNC is too brain damaged for this to be done easily.

So I decided to replace the lower half of the RSCS line protocol (VMB)
with TCP.  I haven't been able to come up with a way to establish
the TCP connection between two RSCS line daemons symmetrically,
which was trivial to do with UDP (with a socket, bind, and connect).
The TCP specification says a TCP should be able to establish
a connection with two active sockets, or that you can passively
wait for a connection from a specified host. 4.2 allows neither of these.

Does anyone have any 4.2/4.3 code to do this?
It would be preferable if it used one well-known port
and had the ability to have more than one line daemon per host.
RSCS does not easily allow me to use listen, accept, fork;
the line daemon processes are started at boot time.
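
One workaround under 4.2, short of kernel changes: keep the two line daemons'
code identical, but break the tie with data both ends already share, e.g. the
pair of well-known port numbers.  A sketch of the idea in Python rather than
4.2 C (the function name and retry constants are invented; the socket calls
map one-to-one onto the 4.2 syscalls):

```python
import socket
import time

def symmetric_channel(host, local_port, remote_port):
    """Identical code runs in both line daemons; comparing the two
    well-known port numbers decides which end connects and which accepts."""
    if local_port < remote_port:
        # The lower-numbered port plays caller: retry the active open
        # until the peer's listener is up.
        for _ in range(100):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                s.connect((host, remote_port))
                return s
            except OSError:
                s.close()
                time.sleep(0.05)
        raise TimeoutError("peer never started listening")
    # The higher-numbered port plays callee: passive open, one accept.
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind((host, local_port))
    lsock.listen(1)
    conn, _ = lsock.accept()
    lsock.close()
    return conn
```

Giving each pair of line daemons its own pair of ports also leaves room for
more than one daemon per host, since each port pair is its own rendezvous.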

-----------[000094][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21-Nov-86 18:17:27 EST
From:      nowicki@SUN.COM.UUCP
To:        mod.protocols.tcp-ip
Subject:   Re: TCP-IP/ETHERNET QUESTION


	I am in the process of establishing a TCP/IP network that will
	run on Broadband ethernet.

In a way, "Broadband ethernet" is an oxymoron. Ethernet is
baseband.  There are several companies that build systems that are
compatible with Ethernet transceiver specs, but use broadband
signalling instead of baseband signalling.  Although this is an analog
issue that is mysterious to software types like me, the fact
that you modulate and demodulate should not help collision problems, so
you have the usual Ethernet length restrictions.  Of course you can
exceed the length restrictions, with possible collision problems.

	The prototype installation will consist of 4 mainframe
	computers (VAX 780) and 8 or 16 users using IBM-PC clones.

Your definition of "mainframe" is interesting, since the workstation on
my desk is twice as fast, and is the middle of our line.  At any
rate, we have many Ethernets with up to about 100 Sun-3 machines.  It is
interesting that bandwidth is not the first limitation.  The main
reason you don't want more than about 100 machines is that one faulty
machine can bring the whole network down.  The probability that someone
shorts the cable, or starts to continuously broadcast, or at least has
bad collision detection circuitry, becomes pretty close to one with
over 100 nodes.
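
Back-of-envelope arithmetic supports the bandwidth point.  Assume 400 Telnet
users averaging one keystroke per second over the whole population (most
sessions idle at any instant), with every keystroke and its remote echo
carried in its own minimum-size frame; the numbers below are assumptions,
not measurements:

```python
USERS = 400
CHARS_PER_SEC = 1            # averaged over all users; most sit idle
FRAME_BITS = 64 * 8          # minimum Ethernet frame; a keystroke fits easily
GAP_BITS = (8 + 12) * 8      # preamble plus 9.6 us inter-frame gap

frames_per_sec = USERS * CHARS_PER_SEC * 2   # keystroke out, echo back
load_bps = frames_per_sec * (FRAME_BITS + GAP_BITS)
utilization = load_bps / 10_000_000          # against the 10 Mb/s wire

print(f"{load_bps} bit/s, {utilization:.1%} of the wire")
```

Even a tenfold burst in typing rate stays within the 10 Mb/s wire, so a
babbling or shorted node becomes the practical limit long before offered
load does.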

Of course this might be because Ethernet networks just evolve, while
broadband networks are usually "planned".  We make each floor of each
building its own Ethernet, with additional Ethernets for labs.  You can
make a Sun into a gateway just by sliding in another Ethernet
controller.  It also helps to use transceiver multiplexor boxes such as
the ones made by TCL, to reduce the number of actual taps, (less likely
to short the cable). So you probably can put 400 PCs onto a single net,
but do you want to?

	-- Bill Nowicki
	   Sun Microsystems

-----------[000095][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21-Nov-86 19:16:00 EST
From:      SYMALG@YUSOL.BITNET.UUCP
To:        mod.protocols.tcp-ip
Subject:   Encore or Annex - Summary of Replies (long)

First of all I would like to thank all the people who took the time to
reply to my request for information on Encore and Bridge.  The speed and
uniformly high technical content of the replies was simply amazing.
My sincerest thanks goes out to:

         chris@columbia.edu
     ROODE%BIONET@SUMEX-AIM
       dudek%endor@harvunxt
  hedrick@topaz.rutgers.edu
  swb@devvax.tn.cornell.edu
  steve%umnd-cs-gw%umn-dulu
  kincl%hplnmk@hplabs.HP.CO
              lars@acc.arpa
   SATZ@Sierra.Stanford.EDU
  weinberg%necis.UUCP@harvu
          ihnp4!uokmax!mike
  ott!encore!pinocchio!alan
  eismo!mcvax!daimi!pederch


Editing of a summary always makes it possible to have misrepresented what
others meant to say - the responsibility is mine alone.  Sorry for the length of
the posting, but there was so much *good* info that I thought I'd let you
decide how much you were up to reading.

In a one line summary:
Encore wins especially for Unix systems, but also check out Cisco.

Original-posting:
===============================================================================
Newsgroups: mod.protocols.tcp-ip
Subject: Encore Annex or Bridge Terminal Server

We have a Sun 160  to which we would like to add about 10 terminals lines.
The Sun is mainly a file server for 3 3/50's, and the amount of use of
the terminal ports is expected to be light.  There are other machines on the
Ethernet that I would like to reach with the terminals, but I could always
log into the Sun first, and then Telnet or whatever across the ethernet.

1)      What do I lose by putting my terminals on the ethernet?  For example,
will ^O ^S ^Q all get gobbled rather than passed to Emacs for instance.
Will a terminal server on the ethernet affect paging from the Sun 50's to
the Sun 160?

2)      Besides flexibility, what do I gain by having my terminals on the
ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

3)      What advantages does the Encore Annex have over the Bridge boxes?
I would love to hear from anyone who has one.

4)      Does anyone have the phone number for Encore?  I have the Bridge info.


===============================================================================

Selected comments to highlight the major points:

-------------------------------------------------------------------------------
We have 3 Bridge CS/1T terminal servers with 96 lines attached.  We are not
particularly happy with the software support; in particular, the boxes
are memory-starved and inefficient due to some fundamental design limitation
that restricts the box to using very small packets (default 82 bytes, and
this is a max for some of their boxes).

You might want to check out Cisco Systems's ASM, which is probably the
best-performing and cheapest box you'll find.  The company is tiny and in its
infancy, but the guy maintaining the software is really sharp.  I don't have
any of their boxes, but I've played with them a little and read their
documentation and I'd buy some if we had the money.  Their number in Menlo Park
is (415) 326-1941.  I think an 8-line box with a parallel printer port is only
around $7K, and you can expand it in 8-line increments up to 48 lines or so, or
add additional network interfaces to use it as a gateway.

-------------------------------------------------------------------------------
You should consider that you would be better off if those people
desiring terminal access to a Sun went not to a file server
but rather to one of the clients (preferably one not then in use).
To have a couple users on the file server can degrade performance for
the clients more so than adding those same users on clients.
-------------------------------------------------------------------------------
  We looked
closely at Encore and Bridge before deciding to go with Encore.
The advantages that Bridge had was that they were slightly cheaper per port,
and they supported connections from a host to a remote Bridge port, to operate
modems and in general bridge from a terminal on the ethernet to an
arbitrary RS232 connection.  Encore had a more familiar user interface
(mimics UNIX BSD commands rlogin, telnet, csh job control), the software
was downloaded from a central host instead of distributed on floppies,
and in general Encore seemed to have more of a commitment to the UNIX/TCP-IP
world than Bridge (Bridge started their terminal servers with XNS).
The ability of the Encore Annex boxes to offload some of the terminal i/o
load from the host by running an Encore-modified version of GNU-Emacs
clinched it for us.

    We have been *very* satisfied with the Encore Annex terminal servers.
We now have 6, and I anticipate getting at least 2 more before the end
of the year.  Encore has certainly shown a commitment to UNIX and the
TCP world - they anticipate support of BSD 4.3 based TCP/IP in their
servers (including subnet support), and ARPA domain-server support in
their next release (it will be in beta test soon).  We have not had a hardware
failure since we got them (only a couple of months, but still a good sign).
They have responded to complaints of bugs with rapid software updates which
correct the problems.  The technical assistance I have received over the
telephone was knowledgeable and helpful.

    Performance-wise, the cost of rlogin connections to the SUN is certainly
worse than a direct serial connection - on the order of twice as cpu-intensive.
The ability of SUNs to push bytes out their ethernet controllers is very good,
so this isn't as much of a problem as it could be.  We are pushing this to
the limit, and though the machine still behaves pretty well and services NFS
disk requests without many problems, the interactive-echo response to users
sometimes suffers.  Part of a solution to this problem is to implement
the NVS and NVT kernel enhancements from rick@nyit, which effectively connects
the incoming TCP socket with the input/output of the pseudo-tty which the
rlogin is using (instead of passing all input and output through rlogind).
It sounds to me like your configuration won't be pushing things too hard,
so you could probably get away without this.  I am hoping SUN will support
such mods in future releases of their OS - I have mentioned this to them,
but I have no idea whether they will or not.
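
For concreteness, the user-level data path that those kernel mods bypass
looks roughly like this; a Python sketch of the classic daemon inner loop,
not the actual rlogind or NVS/NVT code:

```python
import os
import select
import socket

def relay(sock, pty_master):
    """The rlogind inner loop: shuttle bytes between the TCP socket and
    the pseudo-tty master, one process wakeup per chunk per direction."""
    while True:
        readable, _, _ = select.select([sock, pty_master], [], [])
        if sock in readable:
            data = sock.recv(512)
            if not data:
                break                      # peer closed the connection
            os.write(pty_master, data)     # feed remote input to the pty
        if pty_master in readable:
            data = os.read(pty_master, 512)
            if not data:
                break
            sock.sendall(data)             # echo/output back to the net
```

Every echoed character costs a trip through this loop (and a scheduling of
the daemon) in each direction, which is exactly what splicing the socket to
the pseudo-tty in the kernel avoids.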
-------------------------------------------------------------------------------
At some point terminal service will put a load on your Ethernet.  But
that point is several thousand users.

If your system is going to be used for heavy timesharing, the host end
of telnet will use some CPU time.  For a couple of users I wouldn't
expect to see an effect.  Also, character echo will slow down as the
system gets loaded, since telnetd has to be scheduled twice for each
character.  On our Pyramids we put telnetd in the kernel, which both
removes the loading effect and removes the echo delay.  We have not
tried this on the Sun, but no doubt we will eventually.

The terminal server is a lot more flexible.  We are tending to use
that for most new terminals.  But it is more complex, and so there
are more things that can go wrong.  Where we have a machine whose
users will always be connected to that one machine, we still use
direct terminals.  But that is increasingly rare.
-------------------------------------------------------------------------------
Encore vs. Bridge -- well, actually, I'd also check out cisco - talk
to Len Bosack, President (it's a small company).  415-326-1941.  The
only reason we didn't go with cisco was because we didn't think they
could give us much support (they are small and in California).  They
have nice boxes.  Bridge is cheaper.  Cisco and Encore are more
Unix-like.  Encore has foreground, background, stopped "jobs", and in
the future will have hooks for implementing your own commands and for
a "security server" running on a BSD Unix system which gets control at
critical times.  You can get source for the Annex too, if you want to
do development -- e.g. Univ of Oklahoma is developing some cooperative
processing stuff for "vi" between BSD Unix and the Annex.  Oh, I
didn't mention the "leap" code -- Encore has built cooperative
processing into GNUemacs so it talks a special protocol to the Annex
and offloads the host from trivial screen management.  I don't use it
personally.  There's even a GNU function to show you how many
characters have been saved from host processing through this protocol.
It's not at all clear that the potential bells & whistles on the Annex
save you anything, but the programmers on this Annex say the CS/100
feels quite drab compared to it.

The Annex listens to "rwho" and "routed" packets to learn what to do.
The code to use nameservers is done but they didn't release it yet
because they don't feel like it's industrial strength.  The CS/100
etc. have hardcoded routing and macros in them.  Which is better?
Depends on who you talk to and how.  There are also a bunch of new
options going into the Annex.  I don't know what the future of the
Bridge servers is.  If you get them with a large number of ports
they're certainly cheaper.

-------------------------------------------------------------------------------

>Will a terminal server on the ethernet affect paging from the Sun 50's to
>the Sun 160?

Terminal activity is extremely small to begin with, so it shouldn't affect
your ethernet traffic.  The current release of the Annex software is
geared towards your running the rwho daemon, though.  If you don't want
the overhead of the rwho daemon, the only thing you can do is refer to
machines by their internet numbers (this is the route we chose with our
Sun/2's).  This changes in their next release scheduled for December (?).
It has some kind of name server builtin.

>2)      Besides flexibility, what do I gain by having my terminals on the
>ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

Encore has taken a lot of the overhead related to emacs (screen movement, etc)
and let the annex do the work.  This results in fewer packets being
propagated over the ethernet => lower overhead for the host computer.
I only know of this working when the host computer is an Encore MultiMax
(which we have), but the propaganda they put out says this works with
any UNIX 4.2 host.  It wouldn't be unreasonable to assume they are correct.

Something else you gain is increased baud rate.  We run all our staff
terminals (~20) at 19.2 now.  It's a noticeable difference from 9600 baud.
16 of these terminals are on the same Annex.

The Annex also allows 3 connections for each port.  A "simple" job control
facility at the annex level.  It may not be readily apparent, but there are
1001 uses for this feature.

I am not immediately familiar with the ALM.

>3)      What advantages does the Encore Annex have over the Bridge boxes?
>I would love to hear from anyone who has one.

I am not familiar with the Bridge features, but I can say that Encore is a
UNIX house, and will always be.  They would be more likely to cater
to UNIX extras as they came along.

>4)      Does anyone have the phone number for Encore?  I have the Bridge info.

Chicago Sales Office:           +1 312 380-1256
Headquarters Marlborough, MA:   +1 617 460-0500

We have been very pleased with our Annexes.  They come with 16 serial ports
and 1 parallel port per annex.  We had one shipped DOA, but no other problems
hardware-wise.  With a University discount, I believe they are somewhere in
the $5-7K range.

-------------------------------------------------------------------------------

Biases:  I am an employee of Encore Computer Corporation; I think the Annex
is cool; an old friend of mine has worked extensively with the Annex.  However,
I don't work on the Annex (I'm part of the OS group).

> We have a Sun 160 to which we would like to add about 10 terminal lines.
> The Sun is mainly a file server for 3 3/50's, and the amount of use of
> the terminal ports is expected to be light.

A terminal server may not be strictly cost effective in your environment.  Such
a beastie may cost you several hundred dollars per port.  An extra serial i/o
card for your Sun, with 16 ports, may be available more cheaply.

> There are other machines on the
> Ethernet that I would like to reach with the terminals, but I could always
> log into the Sun first, and then Telnet or whatever across the ethernet.
 
> 1)      What do I lose by putting my terminals on the ethernet?  For example,
> will ^O ^S ^Q all get gobbled rather than passed to Emacs for instance.
> Will a terminal server on the ethernet affect paging from the Sun 50's to
> the Sun 160?
 
> 2)      Besides flexibility, what do I gain by having my terminals on the
> ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

There are different performance tradeoffs here.  With a serial i/o card, the
Sun probably has to do less work to get a single character; with a terminal
server, the Sun has to process an Ethernet packet.  On the other hand, you have
much lower overhead using rlogin or telnet from the terminal server (the Annex
supports both; don't know about Bridge) directly to another host than doing the
same thing from the Sun.  I've spent too much of my life logged into one host
and then rlogin'd/telnet'd to another -- that's several layers of host software
times two hosts equals painful.  For the last several months I've been using a
variety of Annexes to talk to a variety of hosts on the Encore ethernet.  The
performance is great -- I can't tell I'm going over the ethernet.  I can also
have three simultaneous sessions to different hosts.

The Annex supports RAW, CBREAK, and COOKED modes, so you don't have to send an
entire packet over the net for each character the user types.  In the
appropriate mode, character or line i/o is done.  I think the Annex can gather
characters from multiple ports destined for the same host and send them in the
same packet.  I presume these features may be available on the Bridge as well.

The Annex can be set to pass through ^O ^S ^Q and whatever else you want.  I run
gnu-emacs with no trouble, ^S bound to incremental-search, and so on.

Annex traffic doesn't seem to greatly affect or be affected by other network
traffic.  I've never noticed a delay.  I've also got my Sun NFS'd to a VAX,
sharing the common ethernet.  Again, I haven't noticed any problems.  However, I
don't do any remote paging (I've only got a single Sun-2 but I'll be getting a
Sun-3 as well in a few weeks).

Terminal servers in general offer more flexibility than serial i/o cards.

> 3)      What advantages does the Encore Annex have over the Bridge boxes?
> I would love to hear from anyone who has one.

The Annex also supports a distributed editing protocol known as LEAP.  My old
friend who worked with the Annex modified gnu-emacs to support LEAP, which puts
a good chunk of the i/o processing burden in the Annex, where it belongs.  The
host doesn't hear from the Annex during editing operations that don't cause
screen refreshes.  Encore distributes this version of gnu, and I believe that
the leap modifications are available as part of the standard distribution now,
to boot.

The Annex also supports csh-style job control syntax.  Other Annex features,
like inactivity timers, port passwords, and so on may be more or less standard
across terminal servers.  One feature I like that may not be unique to the Annex
is the ability to tell the Annex what kind of terminal I have.  The Annex will
pass this information along to my login process, in the term variable, so no
matter what machine I login to my termcap information will be set correctly.

-------------------------------------------------------------------------------

-----------[000096][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21 Nov 86 19:16 EST
From:      <SYMALG%YUSOL.BITNET@WISCVM.WISC.EDU>
To:        tcp-ip@sri-nic.arpa
Subject:   Encore or Annex - Summary of Replies (long)
First of all I would like to thank all the people who took the time to
reply to my request for information on Encore and Bridge.  The speed and
uniformly high technical content of the replies was simply amazing.
My sincerest thanks goes out to:

         chris@columbia.edu
     ROODE%BIONET@SUMEX-AIM
       dudek%endor@harvunxt
  hedrick@topaz.rutgers.edu
  swb@devvax.tn.cornell.edu
  steve%umnd-cs-gw%umn-dulu
  kincl%hplnmk@hplabs.HP.CO
              lars@acc.arpa
   SATZ@Sierra.Stanford.EDU
  weinberg%necis.UUCP@harvu
          ihnp4!uokmax!mike
  ott!encore!pinocchio!alan
  eismo!mcvax!daimi!pederch


Editing a summary always risks misrepresenting what others meant to say - the
responsibility is mine alone.  Sorry for the length of
the posting, but there was so much *good* info that I thought I'd let you
decide how much you were up to reading.

In a one line summary:
Encore wins especially for Unix systems, but also check out Cisco.

Original-posting:
===============================================================================
Newsgroups: mod.protocols.tcp-ip
Subject: Encore Annex or Bridge Terminal Server

We have a Sun 160 to which we would like to add about 10 terminal lines.
The Sun is mainly a file server for 3 3/50's, and the amount of use of
the terminal ports is expected to be light.  There are other machines on the
Ethernet that I would like to reach with the terminals, but I could always
log into the Sun first, and then Telnet or whatever across the ethernet.

1)      What do I lose by putting my terminals on the ethernet?  For example,
will ^O ^S ^Q all get gobbled rather than passed to Emacs for instance.
Will a terminal server on the ethernet affect paging from the Sun 50's to
the Sun 160?

2)      Besides flexibility, what do I gain by having my terminals on the
ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

3)      What advantages does the Encore Annex have over the Bridge boxes?
I would love to hear from anyone who has one.

4)      Does anyone have the phone number for Encore?  I have the Bridge info.


===============================================================================

Selected comments to highlight the major points:

-------------------------------------------------------------------------------
We have 3 Bridge CS/1T terminal servers with 96 lines attached.  We are not
particularly happy with the software support; in particular, the boxes
are memory-starved and inefficient due to some fundamental design limitation
that restricts the box to using very small packets (default 82 bytes, and
this is a max for some of their boxes).

You might want to check out Cisco Systems's ASM, which is probably the
best-performing and cheapest box you'll find.  The company is tiny and in its
infancy, but the guy maintaining the software is really sharp.  I don't have
any of their boxes, but I've played with them a little and read their
documentation and I'd buy some if we had the money.  Their number in Menlo Park
is (415) 326-1941.  I think an 8-line box with a parallel printer port is only
around $7K, and you can expand it in 8-line increments up to 48 lines or so, or
add additional network interfaces to use it as a gateway.

-------------------------------------------------------------------------------
You should consider that you would be better off if those people
desiring terminal access to a Sun went not to a file server
but rather to one of the clients (preferably one not then in use).
To have a couple of users on the file server can degrade performance for
the clients more so than adding those same users on clients.
-------------------------------------------------------------------------------
  We looked
closely at Encore and Bridge before deciding to go with Encore.
The advantages that Bridge had were that they were slightly cheaper per port,
and they supported connections from a host to a remote Bridge port, to operate
modems and in general bridge from a terminal on the ethernet to an
arbitrary RS232 connection.  Encore had a more familiar user interface
(mimics UNIX BSD commands rlogin, telnet, csh job control), the software
was downloaded from a central host instead of distributed on floppies,
and in general Encore seemed to have more of a commitment to the UNIX/TCP-IP
world than Bridge (Bridge started their terminal servers with XNS).
The ability of the Encore Annex boxes to offload some of the terminal i/o
load from the host by running an Encore-modified version of GNU-Emacs
clinched it for us.

    We have been *very* satisfied with the Encore Annex terminal servers.
We now have 6, and I anticipate getting at least 2 more before the end
of the year.  Encore has certainly shown a commitment to UNIX and the
TCP world - they anticipate support of BSD 4.3 based TCP/IP in their
servers (including subnet support), and ARPA domain-server support in
their next release (it will be in beta test soon).  We have not had a hardware
failure since we got them (only a couple of months, but still a good sign).
They have responded to complaints of bugs with rapid software updates which
correct the problems.  The technical assistance I have received over the
telephone was knowledgeable and helpful.

    Performance-wise, the cost of rlogin connections to the SUN is certainly
worse than a direct serial connection - on the order of twice as cpu-intensive.
The ability of SUNs to push bytes out their ethernet controllers is very good,
so this isn't as much of a problem as it could be.  We are pushing this to
the limit, and though the machine still behaves pretty well and services NFS
disk requests without many problems, the interactive-echo response to users
sometimes suffers.  Part of a solution to this problem is to implement
the NVS and NVT kernel enhancements from rick@nyit, which effectively connect
the incoming TCP socket with the input/output of the pseudo-tty which the
rlogin is using (instead of passing all input and output through rlogind).
It sounds to me like your configuration won't be pushing things too hard,
so you could probably get away without this.  I am hoping SUN will support
such mods in future releases of their OS - I have mentioned this to them,
but I have no idea whether they will or not.
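
[Editor's note: the double-scheduling cost described above comes from rlogind
copying every byte between the TCP socket and the pseudo-tty in user space.
A minimal sketch of that relay loop is given below; it is illustrative only,
not the NVS/NVT code, whose whole point is to move this copying into the
kernel so the daemon is never scheduled at all.]

```python
import select

def pump(a, b):
    """Relay bytes between two connected sockets until both directions
    reach EOF.  'a' plays the role of the incoming TCP socket and 'b'
    the pseudo-tty master; this is the per-character copy work that
    rlogind performs in user space."""
    open_ends = {a: b, b: a}            # readable end -> where its data goes
    while open_ends:
        readable, _, _ = select.select(list(open_ends), [], [])
        for src in readable:
            data = src.recv(4096)
            if data:
                open_ends[src].sendall(data)
            else:                       # EOF: this direction is finished
                del open_ends[src]
```

The process must wake up (be scheduled) once per select return in each
direction, which is exactly the overhead the kernel-resident variant avoids.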
-------------------------------------------------------------------------------
At some point terminal service will put a load on your Ethernet.  But
that point is several thousand users.

If your system is going to be used for heavy timesharing, the host end
of telnet will use some CPU time.  For a couple of users I wouldn't
expect to see an effect.  Also, character echo will slow down as the
system gets loaded, since telnetd has to be scheduled twice for each
character.  On our Pyramids we put telnetd in the kernel, which both
removes the loading effect and removes the echo delay.  We have not
tried this on the Sun, but no doubt we will eventually.

The terminal server is a lot more flexible.  We are tending to use
that for most new terminals.  But it is more complex, and so there
are more things that can go wrong.  Where we have a machine whose
users will always be connected to that one machine, we still use
direct terminals.  But that is increasingly rare.
-------------------------------------------------------------------------------
Encore vs. Bridge -- well, actually, I'd also check out cisco - talk
to Len Bosack, President (it's a small company).  415-326-1941.  The
only reason we didn't go with cisco was because we didn't think they
could give us much support (they are small and in California).  They
have nice boxes.  Bridge is cheaper.  Cisco and Encore are more
Unix-like.  Encore has foreground, background, stopped "jobs", and in
the future will have hooks for implementing your own commands and for
a "security server" running on a BSD Unix system which gets control at
critical times.  You can get source for the Annex too, if you want to
do development -- e.g. Univ of Oklahoma is developing some cooperative
processing stuff for "vi" between BSD Unix and the Annex.  Oh, I
didn't mention the "leap" code -- Encore has built cooperative
processing into GNUemacs so it talks a special protocol to the Annex
and offloads the host from trivial screen management.  I don't use it
personally.  There's even a GNU function to show you how many
characters have been saved from host processing through this protocol.
It's not at all clear that the potential bells & whistles on the Annex
save you anything, but the programmers on this Annex say the CS/100
feels quite drab compared to it.

The Annex listens to "rwho" and "routed" packets to learn what to do.
The code to use nameservers is done but they didn't release it yet
because they don't feel like it's industrial strength.  The CS/100
etc. have hardcoded routing and macros in them.  Which is better?
Depends on who you talk to and how.  There are also a bunch of new
options going into the Annex.  I don't know what the future of the
Bridge servers is.  If you get them with a large number of ports
they're certainly cheaper.

-------------------------------------------------------------------------------

>Will a terminal server on the ethernet affect paging from the Sun 50's to
>the Sun 160?

Terminal activity is extremely small to begin with, so it shouldn't affect
your ethernet traffic.  The current release of the Annex software is
geared towards your running the rwho daemon, though.  If you don't want
the overhead of the rwho daemon, the only thing you can do is refer to
machines by their internet numbers (this is the route we chose with our
Sun/2's).  This changes in their next release scheduled for December (?).
It has some kind of name server builtin.

>2)      Besides flexibility, what do I gain by having my terminals on the
>ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

Encore has taken a lot of the overhead related to emacs (screen movement, etc)
and let the annex do the work.  This results in fewer packets being
propagated over the ethernet => lower overhead for the host computer.
I only know of this working when the host computer is an Encore MultiMax
(which we have), but the propaganda they put out says this works with
any UNIX 4.2 host.  It wouldn't be unreasonable to assume they are correct.

Something else you gain is increased baud rate.  We run all our staff
terminals (~20) at 19.2 now.  It's a noticeable difference from 9600 baud.
16 of these terminals are on the same Annex.

The Annex also allows 3 connections for each port.  A "simple" job control
facility at the annex level.  It may not be readily apparent, but there are
1001 uses for this feature.

I am not immediately familiar with the ALM.

>3)      What advantages does the Encore Annex have over the Bridge boxes?
>I would love to hear from anyone who has one.

I am not familiar with the Bridge features, but I can say that Encore is a
UNIX house, and will always be.  They would be more likely to cater
to UNIX extras as they came along.

>4)      Does anyone have the phone number for Encore?  I have the Bridge info.

Chicago Sales Office:           +1 312 380-1256
Headquarters Marlborough, MA:   +1 617 460-0500

We have been very pleased with our Annexes.  They come with 16 serial ports
and 1 parallel port per annex.  We had one shipped DOA, but no other problems
hardware-wise.  With a University discount, I believe they are somewhere in
the $5-7K range.

-------------------------------------------------------------------------------

Biases:  I am an employee of Encore Computer Corporation; I think the Annex
is cool; an old friend of mine has worked extensively with the Annex.  However,
I don't work on the Annex (I'm part of the OS group).

> We have a Sun 160 to which we would like to add about 10 terminal lines.
> The Sun is mainly a file server for 3 3/50's, and the amount of use of
> the terminal ports is expected to be light.

A terminal server may not be strictly cost effective in your environment.  Such
a beastie may cost you several hundred dollars per port.  An extra serial i/o
card for your Sun, with 16 ports, may be available more cheaply.

> There are other machines on the
> Ethernet that I would like to reach with the terminals, but I could always
> log into the Sun first, and then Telnet or whatever across the ethernet.

> 1)      What do I lose by putting my terminals on the ethernet?  For example,
> will ^O ^S ^Q all get gobbled rather than passed to Emacs for instance.
> Will a terminal server on the ethernet affect paging from the Sun 50's to
> the Sun 160?

> 2)      Besides flexibility, what do I gain by having my terminals on the
> ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

There are different performance tradeoffs here.  With a serial i/o card, the
Sun probably has to do less work to get a single character; with a terminal
server, the Sun has to process an Ethernet packet.  On the other hand, you have
much lower overhead using rlogin or telnet from the terminal server (the Annex
supports both; don't know about Bridge) directly to another host than doing the
same thing from the Sun.  I've spent too much of my life logged into one host
and then rlogin'd/telnet'd to another -- that's several layers of host software
times two hosts equals painful.  For the last several months I've been using a
variety of Annexes to talk to a variety of hosts on the Encore ethernet.  The
performance is great -- I can't tell I'm going over the ethernet.  I can also
have three simultaneous sessions to different hosts.

The Annex supports RAW, CBREAK, and COOKED modes, so you don't have to send an
entire packet over the net for each character the user types.  In the
appropriate mode, character or line i/o is done.  I think the Annex can gather
characters from multiple ports destined for the same host and send them in the
same packet.  I presume these features may be available on the Bridge as well.
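
[Editor's note: the gathering idea above -- characters from multiple ports
bound for the same host travelling in one packet -- can be sketched roughly
as below.  This is purely illustrative; the Annex's real batching logic is
not described in this thread, and all names here are hypothetical.]

```python
from collections import defaultdict

def coalesce(keystrokes):
    """Group pending (port, host, char) keystrokes so that all characters
    destined for the same host go out in a single packet per flush,
    instead of one small packet per character."""
    packets = defaultdict(list)
    for port, host, ch in keystrokes:
        packets[host].append((port, ch))
    return dict(packets)
```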

The Annex can be set to pass through ^O ^S ^Q and whatever else you want.  I run
gnu-emacs with no trouble, ^S bound to incremental-search, and so on.

Annex traffic doesn't seem to greatly affect or be affected by other network
traffic.  I've never noticed a delay.  I've also got my Sun NFS'd to a VAX,
sharing the common ethernet.  Again, I haven't noticed any problems.  However, I
don't do any remote paging (I've only got a single Sun-2 but I'll be getting a
Sun-3 as well in a few weeks).

Terminal servers in general offer more flexibility than serial i/o cards.

> 3)      What advantages does the Encore Annex have over the Bridge boxes?
> I would love to hear from anyone who has one.

The Annex also supports a distributed editing protocol known as LEAP.  My old
friend who worked with the Annex modified gnu-emacs to support LEAP, which puts
a good chunk of the i/o processing burden in the Annex, where it belongs.  The
host doesn't hear from the Annex during editing operations that don't cause
screen refreshes.  Encore distributes this version of gnu, and I believe that
the leap modifications are available as part of the standard distribution now,
to boot.

The Annex also supports csh-style job control syntax.  Other Annex features,
like inactivity timers, port passwords, and so on may be more or less standard
across terminal servers.  One feature I like that may not be unique to the Annex
is the ability to tell the Annex what kind of terminal I have.  The Annex will
pass this information along to my login process, in the term variable, so no
matter what machine I login to my termcap information will be set correctly.

-------------------------------------------------------------------------------
-----------[000097][next][prev][last][first]----------------------------------------------------
Date:      21 Nov 86 19:59:38 GMT
From:      Eric Gisin <egisin%watmath.waterloo.edu@RELAY.CS.NET>
To:        mod-protocols-tcp-ip%watmath.waterloo.edu@RELAY.CS.NET
Subject:   Submission for mod-protocols-tcp-ip
Path: watmath!egisin
From: egisin@Math.Waterloo.EDU (Eric Gisin @ University of Waterloo)
Newsgroups: mod.protocols.tcp-ip,comp.protocols.tcp-ip,comp.unix.wizards
Subject: Symmetric TCP connection in 4.2 BSD
Message-ID: <3487@watmath.UUCP>
Date: 21 Nov 86 19:59:20 GMT
Sender: egisin@watmath.UUCP
Lines: 21

I'm trying to develop an IP application under 4.2 BSD
that requires a symmetric TCP connection, as opposed
to the asymmetric connection commonly used in the client/server scheme.
The application is a RSCS line driver.

I was initially going to use UDP in place of BISYNC,
but BISYNC is too brain damaged for this to be done easily.

So I decided to replace the lower half of the RSCS line protocol (VMB)
with TCP.  I haven't been able to come up with a way to establish
the TCP connection between two RSCS line daemons symmetrically,
which was trivial to do with UDP (with a socket, bind, and connect).
The TCP specification says a TCP should be able to establish
a connection with two active sockets, or that you can passively
wait for a connection from a specified host. 4.2 allows neither of these.

Does anyone have any 4.2/4.3 code to do this?
It would be preferable if it used one well-known port
and had the ability to have more than one line daemon per host.
RSCS does not easily allow me to use listen, accept, fork;
the line daemon processes are started at boot time.
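
[Editor's note: since 4.2BSD offers user code no simultaneous-open primitive,
one common workaround -- a sketch only, not from the thread -- is for both
daemons to listen on the well-known port and apply a deterministic tie-break
to decide which side actively connects.  The function name and address
format below are hypothetical.]

```python
def should_initiate(local, remote):
    """Each daemon compares its own (host, port) pair with its peer's;
    the lexicographically smaller side calls connect(), the other side
    listens and accepts.  Both sides compute the same answer, so exactly
    one connection results.  Including the port in the comparison lets
    several line daemons coexist on one host."""
    assert local != remote, "peers must be distinguishable"
    return local < remote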

-----------[000098][next][prev][last][first]----------------------------------------------------
Date:      Sun, 23-Nov-86 18:16:37 EST
From:      mills@huey.udel.edu.UUCP
To:        mod.protocols.tcp-ip
Subject:   NTP ticks getting louder

Folks,

I thought you might like an update on how clocks are ticking in the swamps.
After some rummaging around today I was surprised to learn that not only the
GOES radio clock on FORD1.ARPA had completely departed its interface, but the
WWVB clock on UMD1.ARPA had departed its antenna, or something like that. Only
the WWVB clock on DCN1.ARPA, along with the scruffy WWV clocks on GW.UMICH.EDU
and UDEL2.UDEL.EDU (relocated from DCN6.ARPA), continued to tick. However,
since all the swamps involved use Network Time Protocol (NTP) peers as backup,
the hosts involved remained synchronized (to DCN1.ARPA) and the clockwatchers
scattered throughout the Internet scarcely knew anything was abnormal.

The control of time warps as synchronization switched between the local clocks
and NTP-derived time was not without mishap, however, and revealed some bugs.
Benign torpedoes sent by Rich Wales at UCLA exposed one bug that caused NTP
targets to vaporize and then recondense, although most of the time this did
not destabilize system synchronization. Some very subtle transients in the
recursive median filters used by the fuzzball NTP peers to deglitch neighbor
offsets proved very hard to catch, but catched they got. Several changes were
made to the fuzzware to improve accuracy and reduce vulnerability to glitches.

Here at U Delaware we are synchronizing clocks to DCN1.ARPA via ARPAnet paths
and can report satisfying results. With an eight-stage recursive median
filter, one-minute poll interval, 256-ms aperture and filter constants as
reported in previous RFCs, we can reliably deliver local time to within 10-20
ms or so of the DCN1.ARPA WWVB reference clock, which has previously been
calibrated to within a few milliseconds of NBS radio time. It turns out that
NTP is a useful diagnostic of network health as well, since wide delay
dispersions and offset glitches are sensitive indicators of path switching and
congestion. Milo Medin at NASA/AMES, Rich Wales at UCLA and Mike Petry at U
Maryland have NTP non-fuzzball peers running and, hopefully, can report how
well things work via other paths and using other systems.
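
[Editor's note: as a rough illustration of the deglitching idea, here is a
simplified sliding-window median over recent offset samples.  This is a
stand-in, not the recursive median filter in the fuzzball code; the window
size matches the eight stages mentioned above.]

```python
import statistics
from collections import deque

def make_median_filter(stages=8):
    """Return an update function that keeps the last `stages` offset
    samples and emits their median.  A single wildly delayed sample
    (a glitch) cannot pull the median far from the consensus value."""
    window = deque(maxlen=stages)
    def update(offset_ms):
        window.append(offset_ms)
        return statistics.median(window)
    return update
```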

Finding and fixing time warps during the shakedown of NTP in distributed-peer
mode (see RFC-958) has been surprisingly hard, since the system amounts to a
set of mutually coupled, nonlinear, phase-locked oscillators. As many know, the
theory of linear phase-locked oscillators is well trampled in Electrical
Engineering, as are models of mutual trust/distrust in Computer Science. The
present problems seem to lie more in the area of nonlinear statistics, for
which the technology of nonlinear filtering (e.g. order statistics, median
filters), clustering algorithms (e.g. RFC-956) and multivariate estimation are
proving excellent tools. These tools, incidentally, are excellent for the study
of large, ill-disciplined Internets in general. Which suggests, of course,
further instrumentation of NTP peers as a network monitoring mechanism.

Dave
-------

-----------[000099][next][prev][last][first]----------------------------------------------------
Date:      Sun, 23-Nov-86 19:39:47 EST
From:      mills@HUEY.UDEL.EDU.UUCP
To:        mod.protocols.tcp-ip
Subject:   Tsunami in the swamps

Folks,

I spent long hours this weekend trying to find and fix problems which were
destabilizing NSFnet access via DCN-GATEWAY. I found that relatively obscure
problems in FORDnet and UMDnet were causing earthquakes all over the system.
Since these are examples of how relatively innocent misbehavior can have
profound implications on Internet service everywhere, I am distributing the
following saga to this list, in spite of the rather intricate and specialized
technical details involved.

I spent most of Saturday digging into the U Maryland local net UMDnet via the
UMD1 fuzzball and trying to explain why I couldn't complete a largish FTP
transfer. I found that MIMSY, the "official" UMDnet gateway, was having
trouble keeping its MILnet peering partner(s) up and cycling up/down every
twenty minutes or so. This behavior was similar to that of our (U Delaware)
gateway before we jacked up maxacquire from one to three in the Unix EGP
gateway daemon and upped our peering partners accordingly. We found that
up/down cycles create nasty routing loops in the core gateway system, with
many ICMP time-exceeded messages flying about, as well as hijacking system
bandwidth for spasms of core-gateway update messages.

EGP reachability problems are immediately evident by watching the hop counts
(which I happened to do with a fuzzball) for some time and observing which
ones are cycling. I found several nets that were doing that, which suggests
the MIMSY problem may be happening at the gateway(s) servicing these nets. It
isn't clear why the problems are occuring at all, even with only one peering
partner. Fuzzball gateway DCN-GATEWAY seems to have no trouble sustaining EGP
reachability, which might indicate something bust in the Unix EGP code itself,
or possibly an incompatibility between it and the core gateway code.

Another problem was found on the FORD1 host on FORDnet, which is also
connected via DCN-GATEWAY. Apparently, cables between its serial-line
interfaces and modems were switched (for unknown cause), which caused a
routing loop on the access line from DCnet. The result was massive congestion
on DCnet, through which traffic for a large portion of NSFnet and its clients
pass, and service was badly degraded. The reason the loop occurred in the first
place was that some FORDnet hosts have no routing algorithm and so require a
handcrafted FORD1 routing table and dedicated interface. The problem was
exacerbated because the line speed is relatively low (9600 bps), the TTL
fields used by many hosts were large (255), and many hosts retransmitted
before the TTLs had expired.

This lesson again points up the need for all hosts in the Internet to use
realistic TTLs (values between 30 and 60 have been suggested), use
conservative values for initial RTT estimates (values of at least 5 seconds have
been suggested) and back off upon retransmission. It also points up the need
for comprehensive self-configuration mechanisms, either in the form of a
reachability protocol, routing algorithm or some other mechanism with
sufficient functionality to deal with broken configurations. Finally, it
suggests we should be exploring the fairness principles suggested by John
Nagle and others, especially the Nagle Conjecture (a derivative of Murphy's
Law): "If it can break, don't bet on it."

Finally, I blame myself for a bizarre behavior of some hosts speaking the
Network Time Protocol (NTP). I found a wee beastie crawling deep inside the
fuzzball NTP code which caused frequent clock discontinuities (time warps),
rather than continuous slewing, in some neighbors. These neighbors, having
reset their clocks, also reset their link-delay calculations, which are
necessary to drive the routing algorithm. Eventually, the routing algorithm
starved for lack of delay updates and declared the neighbor down. Previously,
this problem has occurred with the fuzzball routing protocol due to broken
hardware and/or software, but only in local nets all speaking the same
protocol.

In the NTP case, further analysis disclosed the ominous fact that large chunks
of Internet real estate can be chipped away by destabilizing local clocks
(using NTP, UDP or whatever clock-synchronization mechanism is handy). The
fuzzball and Unix (Mike Petry) NTP implementations use recursive median
filters to deglitch the synchronization mechanism (which is where the fuzzy
bug was). These filters can be spoofed, intentionally or otherwise, to cause
glitches to happen anyway. Goodness, gracious, but our Internet is getting
sophisticatedly sneaky.

Dave
-------

-----------[000100][next][prev][last][first]----------------------------------------------------
Date:      23-Nov-86 20:48:59-UT
From:      mills@huey.udel.edu
To:        tcp-ip@sri-nic.arpa
Cc:        nsfnet@sh.cs.net
Subject:   Tsunami in the swamps
Folks,

I spent long hours this weekend trying to find and fix problems which were
destabilizing NSFnet access via DCN-GATEWAY. I found that relatively obscure
problems in FORDnet and UMDnet were causing earthquakes all over the system.
Since these are examples of how relatively innocent misbehavior can have
profound implications on Internet service everywhere, I am distributing the
following saga to this list, in spite of the rather intricate and specialized
technical details involved.

I spent most of Saturday digging into the U Maryland local net UMDnet via the
UMD1 fuzzball and trying to explain why I couldn't complete a largish FTP
transfer. I found that MIMSY, the "official" UMDnet gateway, was having
trouble keeping its MILnet peering partner(s) up and cycling up/down every
twenty minutes or so. This behavior was similar to that of our (U Delaware)
gateway before we jacked up maxacquire from one to three in the Unix EGP
gateway daemon and upped our peering partners accordingly. We found that
up/down cycles create nasty routing loops in the core gateway system, with
many ICMP time-exceeded messages flying about, and hijack system
bandwidth for spasms of core-gateway update messages.

EGP reachability problems are immediately evident by watching the hop counts
(which I happened to do with a fuzzball) for some time and observing which
ones are cycling. I found several nets that were doing that, which suggests
the MIMSY problem may be happening at the gateway(s) servicing these nets. It
isn't clear why the problems are occurring at all, even with only one peering
partner. Fuzzball gateway DCN-GATEWAY seems to have no trouble sustaining EGP
reachability, which might indicate something broken in the Unix EGP code itself,
or possibly an incompatibility between it and the core gateway code.
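The hop-count-watching diagnostic might be sketched like this (everything here is an assumption of the sketch: the unreachable sentinel, the flip threshold, and the sampling itself, none of it taken from the fuzzball code):

```python
# Hypothetical cycling detector: sample a net's advertised hop count over
# time and flag it if its reachable/unreachable state keeps flipping.

UNREACHABLE = 255  # assumed sentinel meaning "net unreachable"

def is_cycling(hop_counts, min_flips=3):
    """True if reachability (hop count below UNREACHABLE) flips at least min_flips times."""
    states = [h < UNREACHABLE for h in hop_counts]
    flips = sum(1 for a, b in zip(states, states[1:]) if a != b)
    return flips >= min_flips
```

A net reported as [3, 255, 3, 255, 3] would be flagged; one that merely wobbles between 3 and 4 hops would not.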

Another problem was found on the FORD1 host on FORDnet, which is also
connected via DCN-GATEWAY. Apparently, cables between its serial-line
interfaces and modems were switched (for unknown cause), which caused a
routing loop on the access line from DCnet. The result was massive congestion
on DCnet, through which traffic for a large portion of NSFnet and its clients
passes, and service was badly degraded. The reason the loop occurred in the first
place was that some FORDnet hosts have no routing algorithm and so require a
handcrafted FORD1 routing table and dedicated interface. The problem was
exacerbated because the line speed is relatively low (9600 bps), the TTL
fields used by many hosts were large (255), and many hosts retransmitted before
the TTLs had expired.

This lesson again points up the need for all hosts in the Internet to use
realistic TTLs (values between 30 and 60 have been suggested), use
conservative values for initial RTT estimates (values of at least 5 seconds have
been suggested) and back off upon retransmission. It also points up the need
for comprehensive self-configuration mechanisms, either in the form of a
reachability protocol, routing algorithm or some other mechanism with
sufficient functionality to deal with broken configurations. Finally, it
suggests we should be exploring the fairness principles suggested by John
Nagle and others, especially the Nagle Conjecture (a derivative of Murphy's
Law): "If it can break, don't bet on it."
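A toy sketch, in modern Python for illustration only, of the retransmission advice above: start from a conservative initial RTT estimate and back off upon retransmission. The doubling policy and the 64-second cap are assumptions of this sketch, not anything taken from the hosts discussed here.

```python
# Illustrative retransmission-timer policy (assumptions: doubling backoff,
# 64-second cap). Start conservative, then back off on each retry.

INITIAL_RTT = 5.0   # seconds; the conservative initial estimate suggested above
MAX_BACKOFF = 64.0  # assumed cap so the timer cannot grow without bound

def retransmit_timeouts(attempts):
    """Return the timeout used before each of `attempts` transmissions."""
    timeouts = []
    rto = INITIAL_RTT
    for _ in range(attempts):
        timeouts.append(rto)
        rto = min(rto * 2.0, MAX_BACKOFF)  # back off upon retransmission
    return timeouts
```

Under this policy a host waits 5, 10, 20, 40 seconds across its first four tries, rather than hammering a slow 9600-bps line with early retransmissions.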

Finally, I blame myself for a bizarre behavior of some hosts speaking the
Network Time Protocol (NTP). I found a wee beastie crawling deep inside the
fuzzball NTP code which caused frequent clock discontinuities (time warps),
rather than continuous slewing, in some neighbors. These neighbors, having
reset their clocks, also reset their link-delay calculations, which are
necessary to drive the routing algorithm. Eventually, the routing algorithm
starved for lack of delay updates and declared the neighbor down. Previously,
this problem has occurred with the fuzzball routing protocol due to broken
hardware and/or software, but only in local nets all speaking the same
protocol.
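The stepping-versus-slewing distinction above can be sketched as follows: stepping resets the clock at once (a time warp), while slewing amortizes the correction in small bounded increments per tick, so derived quantities such as link-delay estimates are not invalidated. The slew rate here is a made-up illustration, not the fuzzball's.

```python
# Illustrative clock slewing: apply at most SLEW_PER_TICK of the pending
# correction on each tick instead of stepping the clock all at once.
# The rate is an assumption of this sketch.

SLEW_PER_TICK = 0.005  # seconds of correction per tick (illustrative)

def slew(clock, offset, ticks):
    """Amortize `offset` into `clock` over `ticks`; return (clock, remaining offset)."""
    for _ in range(ticks):
        step = max(-SLEW_PER_TICK, min(SLEW_PER_TICK, offset))
        clock += step
        offset -= step
    return clock, offset
```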

In the NTP case, further analysis disclosed the ominous fact that large chunks
of Internet real estate can be chipped away by destabilizing local clocks
(using NTP, UDP or whatever clock-synchronization mechanism is handy). The
fuzzball and Unix (Mike Petry) NTP implementations use recursive median
filters to deglitch the synchronization mechanism (which is where the fuzzy
bug was). These filters can be spoofed, intentionally or otherwise, to cause
glitches to happen anyway. Goodness, gracious, but our Internet is getting
sophisticatedly sneaky.

Dave
-------
-----------[000101][next][prev][last][first]----------------------------------------------------
Date:      23-Nov-86 20:49:54-UT
From:      mills@huey.udel.edu
To:        tcp-ip@sri-nic.arpa
Subject:   NTP ticks getting louder
Folks,

I thought you might like an update on how clocks are ticking in the swamps.
After some rummaging around today I was surprised to learn that not only had the
GOES radio clock on FORD1.ARPA completely departed its interface, but the
WWVB clock on UMD1.ARPA had departed its antenna, or something like that. Only
the WWVB clock on DCN1.ARPA, along with the scruffy WWV clocks on GW.UMICH.EDU
and UDEL2.UDEL.EDU (relocated from DCN6.ARPA), continued to tick. However,
since all the swamps involved use Network Time Protocol (NTP) peers as backup,
the hosts involved remained synchronized (to DCN1.ARPA) and the clockwatchers
scattered throughout the Internet scarcely knew anything was abnormal.

The control of time warps as synchronization switched between the local clocks
and NTP-derived time was not without mishap, however, and revealed some bugs.
Benign torpedoes sent by Rich Wales at UCLA exposed one bug that caused NTP
targets to vaporize and then recondense, although most of the time this did
not destabilize system synchronization. Some very subtle transients in the
recursive median filters used by the fuzzball NTP peers to deglitch neighbor
offsets proved very hard to catch, but caught they got. Several changes were
made to the fuzzware to improve accuracy and reduce vulnerability to glitches.

Here at U Delaware we are synchronizing clocks to DCN1.ARPA via ARPAnet paths
and can report satisfying results. With an eight-stage recursive median
filter, one-minute poll interval, 256-ms aperture and filter constants as
reported in previous RFCs, we can reliably deliver local time to within 10-20
ms or so of the DCN1.ARPA WWVB reference clock, which has previously been
calibrated to within a few milliseconds of NBS radio time. It turns out that
NTP is a useful diagnostic of network health as well, since wide delay
dispersions and offset glitches are sensitive indicators of path switching and
congestion. Milo Medin at NASA/AMES, Rich Wales at UCLA and Mike Petry at U
Maryland have NTP non-fuzzball peers running and, hopefully, can report how
well things work via other paths and using other systems.
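The median-filter idea can be illustrated with a simple sliding-window version: keep the last few offset samples and report their median, so a single glitch sample cannot move the time estimate. This shows only the general principle; it is not the fuzzball's actual recursive filter, and the eight-stage depth is just the figure mentioned above.

```python
# Sliding-window median filter for deglitching clock-offset samples.
# Only the principle of the (recursive) filters described above; an
# isolated spike never becomes the reported offset.

from collections import deque
from statistics import median

class MedianFilter:
    def __init__(self, stages=8):
        self.window = deque(maxlen=stages)  # oldest samples fall off

    def sample(self, offset):
        """Add one measured offset; return the deglitched estimate."""
        self.window.append(offset)
        return median(self.window)
```

Feeding it a 5-second spike amid 10-millisecond offsets leaves the reported offset near 10 ms, which is exactly why a spoofer must send several consistent glitches to move such a filter.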

Finding and fixing time warps during the shakedown of NTP in distributed-peer
mode (see RFC-958) has been surprisingly hard, since the system amounts to a
set of mutually coupled, nonlinear, phase-locked oscillators. As many know, the
theory of linear phase-locked oscillators is well trampled in Electrical
Engineering, as are models of mutual trust/distrust in Computer Science. The
present problems seem to lie more in the area of nonlinear statistics, for
which the technology of nonlinear filtering (e.g. order statistics, median
filters), clustering algorithms (e.g. RFC-956) and multivariate estimation are
proving excellent tools. These tools, incidentally, are excellent for the study
of large, ill-disciplined Internets in general. Which suggests, of course,
further instrumentation of NTP peers as a network monitoring mechanism.

Dave
-------
-----------[000102][next][prev][last][first]----------------------------------------------------
Date:      Mon, 24-Nov-86 15:58:15 EST
From:      TS0400@OHSTVMA.BITNET (Bob Dixon)
To:        mod.protocols.tcp-ip
Subject:   Multiport ethernet bridge

We have need for a device that will filter ethernet traffic among N ethernets.
The DEC DELNI and similar devices are not useful in this regard, as they do
not filter traffic intelligently. One could use a DEC Lan Bridge to isolate
each ethernet, but that is expensive. Does there exist a device that will
intelligently filter ethernet traffic among N ethernets, inexpensively?
N could be as small as 2 or as large as 40 for the problem we are trying to
solve. Any suggestions would be appreciated.

                                            Bob Dixon
                                            Ohio State University
Acknowledge-To: <TS0400@OHSTVMA>

-----------[000103][next][prev][last][first]----------------------------------------------------
Date:      Mon, 24 Nov 86 15:58:15 EST
From:      Bob Dixon  <TS0400%OHSTVMA.BITNET@WISCVM.WISC.EDU>
To:        TCP-IP@SRI-NIC.ARPA
Subject:   Multiport ethernet bridge
We have need for a device that will filter ethernet traffic among N ethernets.
The DEC DELNI and similar devices are not useful in this regard, as they do
not filter traffic intelligently. One could use a DEC Lan Bridge to isolate
each ethernet, but that is expensive. Does there exist a device that will
intelligently filter ethernet traffic among N ethernets, inexpensively?
N could be as small as 2 or as large as 40 for the problem we are trying to
solve. Any suggestions would be appreciated.

                                            Bob Dixon
                                            Ohio State University
Acknowledge-To: <TS0400@OHSTVMA>
-----------[000104][next][prev][last][first]----------------------------------------------------
Date:      25 Nov 1986 09:34-EST
From:      CERF@A.ISI.EDU
To:        mills@HUEY.UDEL.EDU
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: NTP ticks getting louder
Dave,

Let me encourage your efforts on the monitoring side - we really need to gather
live and detailed information about the behavior of the internet so we can
set about improving its reliability and performance.

Vint
-----------[000105][next][prev][last][first]----------------------------------------------------
Date:      Tue, 25 Nov 86 15:46:07 CST
From:      Phil Howard  <PHIL%UIUCVMD.BITNET@WISCVM.WISC.EDU>
To:        TCP/IP List <TCP-IP@SRI-NIC.ARPA>
Subject:   secondary gateways
I am planning the design of a new type of bulletin board system for IBM
VM/CMS systems.  Due to this operating system's inherent limit of 8
characters for userids, and the fact that I would like to have mail
for this bbs directed to topics as userids, I need to bypass this limit.
The best way I see to do this is to create a pseudo-node and run a "gateway"
to it somehow.  One problem I see with this is that when mail is gatewayed into
BITNET, all that normally remains is the RFC822 header.  Sometimes this
does not indicate the real destination that SMTP really knew.  I am
inquiring into the possibilities of getting mail from the gateway in BSMTP
envelopes.  What I would like to know is if there are any alternative ways
to get this information (the real destination) and if any new standards
development is planning to address this.

Thanks.           /Phil/ka9wgn
-----------[000106][next][prev][last][first]----------------------------------------------------
Date:      Tue, 25-Nov-86 16:46:07 EST
From:      PHIL@UIUCVMD.BITNET (Phil Howard)
To:        mod.protocols.tcp-ip
Subject:   secondary gateways

I am planning the design of a new type of bulletin board system for IBM
VM/CMS systems.  Due to this operating system's inherent limit of 8
characters for userids, and the fact that I would like to have mail
for this bbs directed to topics as userids, I need to bypass this limit.
The best way I see to do this is to create a pseudo-node and run a "gateway"
to it somehow.  One problem I see with this is that when mail is gatewayed into
BITNET, all that normally remains is the RFC822 header.  Sometimes this
does not indicate the real destination that SMTP really knew.  I am
inquiring into the possibilities of getting mail from the gateway in BSMTP
envelopes.  What I would like to know is if there are any alternative ways
to get this information (the real destination) and if any new standards
development is planning to address this.

Thanks.           /Phil/ka9wgn

-----------[000107][next][prev][last][first]----------------------------------------------------
Date:      Thu, 27-Nov-86 01:01:00 EST
From:      SYMALG@YUSOL.BITNET
To:        mod.protocols.tcp-ip
Subject:   Encore Annex or Bridge Terminal Server


(I tried posting this last week, but it doesn't seem to have made it.
 My apologies in advance if anyone sees this twice.)

First of all I would like to thank all the people who took the time to
reply to my request for information on Encore and Bridge.  The speed and
uniformly high technical content of the replies was simply amazing.
My sincerest thanks goes out to:

         chris@columbia.edu
     ROODE%BIONET@SUMEX-AIM
       dudek%endor@harvunxt
  hedrick@topaz.rutgers.edu
  swb@devvax.tn.cornell.edu
  steve%umnd-cs-gw%umn-dulu
  kincl%hplnmk@hplabs.HP.CO
              lars@acc.arpa
   SATZ@Sierra.Stanford.EDU
  weinberg%necis.UUCP@harvu
          ihnp4!uokmax!mike
  ott!encore!pinocchio!alan
  eismo!mcvax!daimi!pederch


Editing of a summary always makes it possible to have misrepresented what
others meant to say - the responsibility is mine alone.  Sorry for the length of
the posting, but there was so much *good* info that I thought I'd let you
decide how much you were up to reading.

In a one line summary:
Encore wins especially for Unix systems, but also check out Cisco.

Original-posting:
===============================================================================
Newsgroups: mod.protocols.tcp-ip
Subject: Encore Annex or Bridge Terminal Server

We have a Sun 160 to which we would like to add about 10 terminal lines.
The Sun is mainly a file server for 3 3/50's, and the amount of use of
the terminal ports is expected to be light.  There are other machines on the
Ethernet that I would like to reach with the terminals, but I could always
log into the Sun first, and then Telnet or whatever across the ethernet.

1)      What do I lose by putting my terminals on the ethernet?  For example,
will ^O ^S ^Q all get gobbled rather than passed to Emacs for instance.
Will a terminal server on the ethernet affect paging from the Sun 50's to
the Sun 160?

2)      Besides flexibility, what do I gain by having my terminals on the
ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

3)      What advantages does the Encore Annex have over the Bridge boxes?
I would love to hear from anyone who has one.

4)      Does anyone have the phone number for Encore?  I have the Bridge info.


===============================================================================

Selected comments to highlight the major points:

-------------------------------------------------------------------------------
We have 3 Bridge CS/1T terminal servers with 96 lines attached.  We are not
particularly happy with the software support; in particular, the boxes
are memory-starved and inefficient due to some fundamental design limitation
that restricts the box to using very small packets (default 82 bytes, and
this is a max for some of their boxes).

You might want to check out Cisco Systems's ASM, which is probably the
best-performing and cheapest box you'll find.  The company is tiny and in its
infancy, but the guy maintaining the software is really sharp.  I don't have
any of their boxes, but I've played with them a little and read their
documentation and I'd buy some if we had the money.  Their number in Menlo Park
is (415) 326-1941.  I think an 8-line box with a parallel printer port is only
around $7K, and you can expand it in 8-line increments up to 48 lines or so, or
add additional network interfaces to use it as a gateway.

-------------------------------------------------------------------------------
You should consider that you would be better off if those people
desiring terminal access to a Sun went not to a file server
but rather to one of the clients (preferably one not then in use).
To have a couple users on the file server can degrade performance for
the clients more so than adding those same users on clients.
-------------------------------------------------------------------------------
  We looked
closely at Encore and Bridge before deciding to go with Encore.
The advantages that Bridge had was that they were slightly cheaper per port,
and they supported connections from a host to a remote Bridge port, to operate
modems and in general bridge from a terminal on the ethernet to an
arbitrary RS232 connection.  Encore had a more familiar user interface
(mimics UNIX BSD commands rlogin, telnet, csh job control), the software
was downloaded from a central host instead of distributed on floppies,
and in general Encore seemed to have more of a commitment to the UNIX/TCP-IP
world than Bridge (Bridge started their terminal servers with XNS).
The ability of the Encore Annex boxes to offload some of the terminal i/o
load from the host by running an Encore-modified version of GNU-Emacs
clinched it for us.

    We have been *very* satisfied with the Encore Annex terminal servers.
We now have 6, and I anticipate getting at least 2 more before the end
of the year.  Encore has certainly shown a commitment to UNIX and the
TCP world - they anticipate support of BSD 4.3 based TCP/IP in their
servers (including subnet support), and ARPA domain-server support in
their next release (it will be in beta test soon).  We have not had a hardware
failure since we got them (only a couple of months, but still a good sign).
They have responded to complaints of bugs with rapid software updates which
correct the problems.  The technical assistance I have received over the
telephone was knowledgeable and helpful.

    Performance-wise, the cost of rlogin connections to the SUN is certainly
worse than a direct serial connection - on the order of twice as cpu-intensive.
The ability of SUNs to push bytes out their ethernet controllers is very good,
so this isn't as much of a problem as it could be.  We are pushing this to
the limit, and though the machine still behaves pretty well and services NFS
disk requests without many problems, the interactive-echo response to users
sometimes suffers.  Part of a solution to this problem is to implement
the NVS and NVT kernel enhancements from rick@nyit, which effectively connects
the incoming TCP socket with the input/output of the pseudo-tty which the
rlogin is using (instead of passing all input and output through rlogind).
It sounds to me like your configuration won't be pushing things too hard,
so you could probably get away without this.  I am hoping SUN will support
such mods in future releases of their OS - I have mentioned this to them,
but I have no idea whether they will or not.
-------------------------------------------------------------------------------
At some point terminal service will put a load on your Ethernet.  But
that point is several thousand users.

If your system is going to be used for heavy timesharing, the host end
of telnet will use some CPU time.  For a couple of users I wouldn't
expect to see an effect.  Also, character echo will slow down as the
system gets loaded, since telnetd has to be scheduled twice for each
character.  On our Pyramids we put telnetd in the kernel, which both
removes the loading effect and removes the echo delay.  We have not
tried this on the Sun, but no doubt we will eventually.

The terminal server is a lot more flexible.  We are tending to use
that for most new terminals.  But it is more complex, and so there
are more things that can go wrong.  Where we have a machine whose
users will always be connected to that one machine, we still use
direct terminals.  But that is increasingly rare.
-------------------------------------------------------------------------------
Encore vs. Bridge -- well, actually, I'd also check out cisco - talk
to Len Bosack, President (it's a small company).  415-326-1941.  The
only reason we didn't go with cisco was because we didn't think they
could give us much support (they are small and in California).  They
have nice boxes.  Bridge is cheaper.  Cisco and Encore are more
Unix-like.  Encore has foreground, background, stopped "jobs", and in
the future will have hooks for implementing your own commands and for
a "security server" running on a BSD Unix system which gets control at
critical times.  You can get source for the Annex too, if you want to
do development -- e.g. Univ of Oklahoma is developing some cooperative
processing stuff for "vi" between BSD Unix and the Annex.  Oh, I
didn't mention the "leap" code -- Encore has built cooperative
processing into GNUemacs so it talks a special protocol to the Annex
and offloads the host from trivial screen management.  I don't use it
personally.  There's even a GNU function to show you how many
characters have been saved from host processing through this protocol.
It's not at all clear that the potential bells & whistles on the Annex
save you anything, but the programmers on this Annex say the CS/100
feels quite drab compared to it.

The Annex listens to "rwho" and "routed" packets to learn what to do.
The code to use nameservers is done but they didn't release it yet
because they don't feel like it's industrial strength.  The CS/100
etc. have hardcoded routing and macros in them.  Which is better?
Depends on who you talk to and how.  There are also a bunch of new
options going into the Annex.  I don't know what the future of the
Bridge servers is.  If you get them with a large number of ports
they're certainly cheaper.

-------------------------------------------------------------------------------

>Will a terminal server on the ethernet affect paging from the Sun 50's to
>the Sun 160?

Terminal activity is extremely small to begin with, it shouldn't affect
your ethernet traffic.  The current release of the Annex software is
geared towards your running the rwho daemon, though.  If you don't want
the overhead of the rwho daemon, the only thing you can do is refer to
machines by their internet numbers (this is the route we chose with our
Sun/2's).  This changes in their next release scheduled for December (?).
It has some kind of name server builtin.

>2)      Besides flexibility, what do I gain by having my terminals on the
>ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

Encore has taken a lot of the overhead related to emacs (screen movement, etc)
and let the annex do the work.  This results in fewer packets being
propagated over the ethernet => lower overhead for the host computer.
I only know of this working when the host computer is an Encore MultiMax
(which we have), but the propaganda they put out says this works with
any UNIX 4.2 host.  It wouldn't be unreasonable to assume they are correct.

Something else you gain is increased baud rate.  We run all our staff
terminals (~20) at 19.2 now.  It's a noticeable difference from 9600 baud.
16 of these terminals are on the same Annex.

The Annex also allows 3 connections for each port.  A "simple" job control
facility at the annex level.  It may not be readily apparent, but there's
1001 uses for this feature.

I am not immediately familiar with the ALM.

>3)      What advantages does the Encore Annex have over the Bridge boxes?
>I would love to hear from anyone who has one.

I am not familiar with the Bridge features, but I can say that Encore is a
UNIX house, and will always be.  They would be more likely to cater
to UNIX extras as they came along.

>4)      Does anyone have the phone number for Encore?  I have the Bridge info.

Chicago Sales Office:           +1 312 380-1256
Headquarters Marlborough, MA:   +1 617 460-0500

We have been very pleased with our Annexes.  They come with 16 serial ports
and 1 parallel port per annex.  We had one shipped DOA, but no other problems
hardware-wise.  With a University discount, I believe they are somewhere in
the $5-7K range.

-------------------------------------------------------------------------------

Biases:  I am an employee of Encore Computer Corporation; I think the Annex
is cool; an old friend of mine has worked extensively with the Annex.  However,
I don't work on the Annex (I'm part of the OS group).

> We have a Sun 160 to which we would like to add about 10 terminal lines.
> The Sun is mainly a file server for 3 3/50's, and the amount of use of
> the terminal ports is expected to be light.

A terminal server may not be strictly cost effective in your environment.  Such
a beastie may cost you several hundred dollars per port.  An extra serial i/o
card for your Sun, with 16 ports, may be available more cheaply.

> There are other machines on the
> Ethernet that I would like to reach with the terminals, but I could always
> log into the Sun first, and then Telnet or whatever across the ethernet.
 
> 1)      What do I lose by putting my terminals on the ethernet?  For example,
> will ^O ^S ^Q all get gobbled rather than passed to Emacs for instance.
> Will a terminal server on the ethernet affect paging from the Sun 50's to
> the Sun 160?
 
> 2)      Besides flexibility, what do I gain by having my terminals on the
> ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

There are different performance tradeoffs here.  With a serial i/o card, the
Sun probably has to do less work to get a single character; with a terminal
server, the Sun has to process an Ethernet packet.  On the other hand, you have
much lower overhead using rlogin or telnet from the terminal server (the Annex
supports both, don't know about Bridge) directly to another host than doing the
same thing from the Sun.  I've spent too much of my life logged into one host
and then rlogin'd/telnet'd to another -- that's several layers of host software
times two hosts equals painful.  For the last several months I've been using a
variety of Annexes to talk to a variety of hosts on the Encore ethernet.  The
performance is great -- I can't tell I'm going over the ethernet.  I can also
have three simultaneous sessions to different hosts.

The Annex supports RAW, CBREAK, and COOKED modes, so you don't have to send an
entire packet over the net for each character the user types.  In the
appropriate mode, character or line i/o is done.  I think the Annex can gather
characters from multiple ports destined for the same host and send them in the
same packet.  I presume these features may be available on the Bridge as well.
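A toy illustration of the gathering behavior just described (not the Annex's actual mechanism or wire format): characters queued from several ports but bound for the same host are coalesced into one packet per host instead of one packet per character.

```python
# Toy per-host coalescing: group queued (port, char) keystrokes by
# destination host so each host gets one packet per flush.

from collections import defaultdict

def coalesce(keystrokes):
    """keystrokes: iterable of (host, port, char); returns {host: [(port, char), ...]}."""
    packets = defaultdict(list)
    for host, port, ch in keystrokes:
        packets[host].append((port, ch))
    return dict(packets)
```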

The Annex can be set to pass through ^O ^S ^Q and whatever else you want.  I run
gnu-emacs with no trouble, ^S bound to incremental-search, and so on.

Annex traffic doesn't seem to greatly affect or be affected by other network
traffic.  I've never noticed a delay.  I've also got my Sun NFS'd to a VAX,
sharing the common ethernet.  Again, I haven't noticed any problems.  However, I
don't do any remote paging (I've only got a single Sun-2 but I'll be getting a
Sun-3 as well in a few weeks).

Terminal servers in general offer more flexibility than serial i/o cards.

> 3)      What advantages does the Encore Annex have over the Bridge boxes?
> I would love to hear from anyone who has one.

The Annex also supports a distributed editing protocol known as LEAP.  My old
friend who worked with the Annex modified gnu-emacs to support LEAP, which puts
a good chunk of the i/o processing burden in the Annex, where it belongs.  The
host doesn't hear from the Annex during editing operations that don't cause
screen refreshes.  Encore distributes this version of gnu, and I believe that
the leap modifications are available as part of the standard distribution now,
to boot.

The Annex also supports csh-style job control syntax.  Other Annex features,
like inactivity timers, port passwords, and so on may be more or less standard
across terminal servers.  One feature I like that may not be unique to the Annex
is the ability to tell the Annex what kind of terminal I have.  The Annex will
pass this information along to my login process, in the term variable, so no
matter what machine I login to my termcap information will be set correctly.

-------------------------------------------------------------------------------

-----------[000108][next][prev][last][first]----------------------------------------------------
Date:      Thu, 27 Nov 86 01:01 EST
From:      <SYMALG%YUSOL.BITNET@WISCVM.WISC.EDU>
To:        tcp-ip@sri-nic.arpa
Subject:   Encore Annex or Bridge Terminal Server

(I tried posting this last week, but it doesn't seem to have made it.
 My apologies in advance if anyone sees this twice.)

First of all I would like to thank all the people who took the time to
reply to my request for information on Encore and Bridge.  The speed and
uniformly high technical content of the replies was simply amazing.
My sincerest thanks goes out to:

         chris@columbia.edu
     ROODE%BIONET@SUMEX-AIM
       dudek%endor@harvunxt
  hedrick@topaz.rutgers.edu
  swb@devvax.tn.cornell.edu
  steve%umnd-cs-gw%umn-dulu
  kincl%hplnmk@hplabs.HP.CO
              lars@acc.arpa
   SATZ@Sierra.Stanford.EDU
  weinberg%necis.UUCP@harvu
          ihnp4!uokmax!mike
  ott!encore!pinocchio!alan
  eismo!mcvax!daimi!pederch


Editing of a summary always makes it possible to have misrepresented what
others meant to say - the responsibility is mine alone.  Sorry for the length of
the posting, but there was so much *good* info that I thought I'd let you
decide how much you were up to reading.

In a one line summary:
Encore wins especially for Unix systems, but also check out Cisco.

Original-posting:
===============================================================================
Newsgroups: mod.protocols.tcp-ip
Subject: Encore Annex or Bridge Terminal Server

We have a Sun 160 to which we would like to add about 10 terminal lines.
The Sun is mainly a file server for 3 3/50's, and the amount of use of
the terminal ports is expected to be light.  There are other machines on the
Ethernet that I would like to reach with the terminals, but I could always
log into the Sun first, and then Telnet or whatever across the ethernet.

1)      What do I lose by putting my terminals on the ethernet?  For example,
will ^O ^S ^Q all get gobbled rather than passed to Emacs for instance.
Will a terminal server on the ethernet affect paging from the Sun 50's to
the Sun 160?

2)      Besides flexibility, what do I gain by having my terminals on the
ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

3)      What advantages does the Encore Annex have over the Bridge boxes?
I would love to hear from anyone who has one.

4)      Does anyone have the phone number for Encore?  I have the Bridge info.


===============================================================================

Selected comments to highlight the major points:

-------------------------------------------------------------------------------
We have 3 Bridge CS/1T terminal servers with 96 lines attached.  We are not
particularly happy with the software support; in particular, the boxes
are memory-starved and inefficient due to some fundamental design limitation
that restricts the box to using very small packets (default 82 bytes, and
this is a max for some of their boxes).

You might want to check out Cisco Systems's ASM, which is probably the
best-performing and cheapest box you'll find.  The company is tiny and in its
infancy, but the guy maintaining the software is really sharp.  I don't have
any of their boxes, but I've played with them a little and read their
documentation and I'd buy some if we had the money.  Their number in Menlo Park
is (415) 326-1941.  I think an 8-line box with a parallel printer port is only
around $7K, and you can expand it in 8-line increments up to 48 lines or so, or
add additional network interfaces to use it as a gateway.

-------------------------------------------------------------------------------
You should consider that you would be better off if those people
desiring terminal access to a Sun went not to a file server
but rather to one of the clients (preferably one not then in use).
Having a couple of users on the file server can degrade performance for
the clients more than adding those same users to clients would.
-------------------------------------------------------------------------------
  We looked
closely at Encore and Bridge before deciding to go with Encore.
The advantages Bridge had were that they were slightly cheaper per port,
and they supported connections from a host to a remote Bridge port, to operate
modems and in general bridge from a terminal on the ethernet to an
arbitrary RS232 connection.  Encore had a more familiar user interface
(mimics UNIX BSD commands rlogin, telnet, csh job control), the software
was downloaded from a central host instead of distributed on floppies,
and in general Encore seemed to have more of a commitment to the UNIX/TCP-IP
world than Bridge (Bridge started their terminal servers with XNS).
The ability of the Encore Annex boxes to offload some of the terminal i/o
load from the host by running an Encore-modified version of GNU-Emacs
clinched it for us.

    We have been *very* satisfied with the Encore Annex terminal servers.
We now have 6, and I anticipate getting at least 2 more before the end
of the year.  Encore has certainly shown a commitment to UNIX and the
TCP world - they anticipate support of BSD 4.3 based TCP/IP in their
servers (including subnet support), and ARPA domain-server support in
their next release (it will be in beta test soon).  We have not had a hardware
failure since we got them (only a couple of months, but still a good sign).
They have responded to complaints of bugs with rapid software updates which
correct the problems.  The technical assistance I have received over the
telephone was knowledgeable and helpful.

    Performance-wise, the cost of rlogin connections to the SUN is certainly
worse than a direct serial connection - on the order of twice as cpu-intensive.
The ability of SUNs to push bytes out their ethernet controllers is very good,
so this isn't as much of a problem as it could be.  We are pushing this to
the limit, and though the machine still behaves pretty well and services NFS
disk requests without many problems, the interactive-echo response to users
sometimes suffers.  Part of a solution to this problem is to implement
the NVS and NVT kernel enhancements from rick@nyit, which effectively connect
the incoming TCP socket with the input/output of the pseudo-tty which the
rlogin is using (instead of passing all input and output through rlogind).
It sounds to me like your configuration won't be pushing things too hard,
so you could probably get away without this.  I am hoping SUN will support
such mods in future releases of their OS - I have mentioned this to them,
but I have no idea whether they will or not.
-------------------------------------------------------------------------------
At some point terminal service will put a load on your Ethernet.  But
that point is several thousand users.

If your system is going to be used for heavy timesharing, the host end
of telnet will use some CPU time.  For a couple of users I wouldn't
expect to see an effect.  Also, character echo will slow down as the
system gets loaded, since telnetd has to be scheduled twice for each
character.  On our Pyramids we put telnetd in the kernel, which both
removes the loading effect and removes the echo delay.  We have not
tried this on the Sun, but no doubt we will eventually.

The terminal server is a lot more flexible.  We are tending to use
that for most new terminals.  But it is more complex, and so there
are more things that can go wrong.  Where we have a machine whose
users will always be connected to that one machine, we still use
direct terminals.  But that is increasingly rare.
-------------------------------------------------------------------------------
Encore vs. Bridge -- well, actually, I'd also check out cisco - talk
to Len Bosack, President (it's a small company).  415-326-1941.  The
only reason we didn't go with cisco was because we didn't think they
could give us much support (they are small and in California).  They
have nice boxes.  Bridge is cheaper.  Cisco and Encore are more
Unix-like.  Encore has foreground, background, stopped "jobs", and in
the future will have hooks for implementing your own commands and for
a "security server" running on a BSD Unix system which gets control at
critical times.  You can get source for the Annex too, if you want to
do development -- e.g. Univ of Oklahoma is developing some cooperative
processing stuff for "vi" between BSD Unix and the Annex.  Oh, I
didn't mention the "leap" code -- Encore has built cooperative
processing into GNUemacs so it talks a special protocol to the Annex
and offloads the host from trivial screen management.  I don't use it
personally.  There's even a GNU function to show you how many
characters have been saved from host processing through this protocol.
It's not at all clear that the potential bells & whistles on the Annex
save you anything, but the programmers on this Annex say the CS/100
feels quite drab compared to it.

The Annex listens to "rwho" and "routed" packets to learn what to do.
The code to use nameservers is done but they didn't release it yet
because they don't feel like it's industrial strength.  The CS/100
etc. have hardcoded routing and macros in them.  Which is better?
Depends on who you talk to and how.  There are also a bunch of new
options going into the Annex.  I don't know what the future of the
Bridge servers is.  If you get them with a large number of ports
they're certainly cheaper.

-------------------------------------------------------------------------------

>Will a terminal server on the ethernet affect paging from the Sun 50's to
>the Sun 160?

Terminal activity is extremely small to begin with, so it shouldn't affect
your ethernet traffic.  The current release of the Annex software is
geared towards your running the rwho daemon, though.  If you don't want
the overhead of the rwho daemon, the only thing you can do is refer to
machines by their internet numbers (this is the route we chose with our
Sun/2's).  This changes in their next release scheduled for December (?).
It has some kind of name server built in.

>2)      Besides flexibility, what do I gain by having my terminals on the
>ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

Encore has taken a lot of the overhead related to emacs (screen movement,
etc.) and lets the annex do the work.  This results in fewer packets being
propagated over the ethernet => lower overhead for the host computer.
I only know of this working when the host computer is an Encore MultiMax
(which we have), but the propaganda they put out says this works with
any UNIX 4.2 host.  It wouldn't be unreasonable to assume they are correct.

Something else you gain is increased baud rate.  We run all our staff
terminals (~20) at 19.2 now.  It's a noticeable difference from 9600 baud.
16 of these terminals are on the same Annex.

The Annex also allows 3 connections for each port.  A "simple" job control
facility at the annex level.  It may not be readily apparent, but there
are 1001 uses for this feature.

I am not immediately familiar with the ALM.

>3)      What advantages does the Encore Annex have over the Bridge boxes?
>I would love to hear from anyone who has one.

I am not familiar with the Bridge features, but I can say that Encore is a
UNIX house, and will always be.  They would be more likely to cater
to UNIX extras as they came along.

>4)      Does anyone have the phone number for Encore?  I have the Bridge info.

Chicago Sales Office:           +1 312 380-1256
Headquarters Marlborough, MA:   +1 617 460-0500

We have been very pleased with our Annexes.  They come with 16 serial ports
and 1 parallel port per annex.  We had one shipped DOA, but no other problems
hardware-wise.  With a University discount, I believe they are somewhere in
the $5-7K range.

-------------------------------------------------------------------------------

Biases:  I am an employee of Encore Computer Corporation; I think the Annex
is cool; an old friend of mine has worked extensively with the Annex.  However,
I don't work on the Annex (I'm part of the OS group).

> We have a Sun 160 to which we would like to add about 10 terminal lines.
> The Sun is mainly a file server for 3 3/50's, and the amount of use of
> the terminal ports is expected to be light.

A terminal server may not be strictly cost effective in your environment.  Such
a beastie may cost you several hundred dollars per port.  An extra serial i/o
card for your Sun, with 16 ports, may be available more cheaply.

> There are other machines on the
> Ethernet that I would like to reach with the terminals, but I could always
> log into the Sun first, and then Telnet or whatever across the ethernet.

> 1)      What do I lose by putting my terminals on the ethernet?  For example,
> will ^O ^S ^Q all get gobbled rather than passed to Emacs for instance.
> Will a terminal server on the ethernet affect paging from the Sun 50's to
> the Sun 160?

> 2)      Besides flexibility, what do I gain by having my terminals on the
> ethernet?  Performance?  Would I be better off with an ALM on the VME bus?

There are different performance tradeoffs here.  With a serial i/o card, the
Sun probably has to do less work to get a single character; with a terminal
server, the Sun has to process an Ethernet packet.  On the other hand, you have
much lower overhead using rlogin or telnet from the terminal server (the
Annex supports both; I don't know about Bridge) directly to another host
than doing the same thing from the Sun.  I've spent too much of my life
logged into one host and then rlogin'd/telnet'd to another -- that's
several layers of host software times two hosts equals painful.  For the
last several months I've been using a
variety of Annexes to talk to a variety of hosts on the Encore ethernet.  The
performance is great -- I can't tell I'm going over the ethernet.  I can also
have three simultaneous sessions to different hosts.

The Annex supports RAW, CBREAK, and COOKED modes, so you don't have to send an
entire packet over the net for each character the user types.  In the
appropriate mode, character or line i/o is done.  I think the Annex can gather
characters from multiple ports destined for the same host and send them in the
same packet.  I presume these features may be available on the Bridge as well.
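To make the packet-saving concrete, here is a sketch of the line-mode
idea (this is purely illustrative Python, my guess at the general
technique, and certainly not actual Annex or Bridge code):

```python
def coalesce_line_mode(keystrokes):
    """Group raw keystrokes into line-at-a-time packets, the way a
    COOKED-mode terminal server might before forwarding to the host.
    Illustrative only -- not actual terminal-server code."""
    packets = []
    buf = []
    for ch in keystrokes:
        buf.append(ch)
        if ch == "\n":            # a line terminator forwards the buffer
            packets.append("".join(buf))
            buf = []
    if buf:                       # flush any partial line at end of input
        packets.append("".join(buf))
    return packets

# In character (RAW) mode every keystroke would cost its own packet:
# 10 keystrokes -> 10 packets.  The same input in line mode needs far
# fewer, which is where the host and network savings come from.
```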

The Annex can be set to pass through ^O ^S ^Q and whatever else you want.  I run
gnu-emacs with no trouble, ^S bound to incremental-search, and so on.

Annex traffic doesn't seem to greatly affect or be affected by other network
traffic.  I've never noticed a delay.  I've also got my Sun NFS'd to a VAX,
sharing the common ethernet.  Again, I haven't noticed any problems.  However, I
don't do any remote paging (I've only got a single Sun-2 but I'll be getting a
Sun-3 as well in a few weeks).

Terminal servers in general offer more flexibility than serial i/o cards.

> 3)      What advantages does the Encore Annex have over the Bridge boxes?
> I would love to hear from anyone who has one.

The Annex also supports a distributed editing protocol known as LEAP.  My old
friend who worked with the Annex modified gnu-emacs to support LEAP, which puts
a good chunk of the i/o processing burden in the Annex, where it belongs.  The
host doesn't hear from the Annex during editing operations that don't cause
screen refreshes.  Encore distributes this version of gnu, and I believe that
the leap modifications are available as part of the standard distribution now,
to boot.

The Annex also supports csh-style job control syntax.  Other Annex features,
like inactivity timers, port passwords, and so on may be more or less standard
across terminal servers.  One feature I like that may not be unique to the Annex
is the ability to tell the Annex what kind of terminal I have.  The Annex will
pass this information along to my login process, in the term variable, so no
matter what machine I login to my termcap information will be set correctly.

-------------------------------------------------------------------------------



-----------[000109][next][prev][last][first]----------------------------------------------------
Date:      Thu, 27-Nov-86 06:58:45 EST
From:      hedrick@TOPAZ.RUTGERS.EDU.UUCP
To:        mod.protocols.tcp-ip
Subject:   terminal servers: minireview of Bridge CS/100 and Cisco ASM

A message that just appeared on this list gave a reasonably complete
description of the Encore terminal server.  Since we have used Bridge
CS-100's extensively, and are now using cisco ASM's, I thought it
might be helpful to give a description of them as well.  I hope the
following review isn't too long to read, but it seemed worth trying to
give a good feel for the products.  There are a number of similarities
in what these two products do.  Both of them implement telnet using
special-purpose software (i.e. they do not run Unix or a Unix-like
shell).  The user interfaces look like a typical DEC command scanner:
keywords which can be abbreviated, and you can type ? at any point to
see what is wanted there.  (With Bridge you have to hit carriage return
after the ?.  Cisco activates immediately.)  A number of the keywords
are even similar: connect, resume, disconnect, and show (though the
things that show will show are different).  In both cases, you can
have multiple sessions active.  You switch by typing an escape
character to get back to the terminal server, and then resuming the
session that you want to go back to.  Because they implement telnet
and not rlogin, they are not as optimized for use with Unix as it
sounds like Encore is.  However they do implement telnet sync, so
things are not as bad as they might be.  The big problem with terminal
servers is that ^C ^O ^S and ^Q tend to have very long delays unless
you do something special.  The difficulty is that in general there are
large buffers at both ends, and several packets can be "in flight".
So after you type ^S, ^C or ^O, you can still get several thousand
characters.  Rlogin solves the problem by integrating control
character handling on the terminal server and the host.  The host uses
TCP out of band messages to keep the terminal server apprised of what
mode the tty is supposed to be in.  Thus the terminal server handles
^S locally when appropriate.  But when you are in Emacs, the host
tells it not to do ^S, and so that character works as the search
command.  ^C and ^O are handled by cooperation between the server and
host.  These features give rlogin a big advantage over a naive telnet
implementation.  However it turns out that if you are careful, it is
possible to do nearly as well using telnet.  The telnet protocol
includes the same out of band features as rlogin for implementing ^C
and ^O.  It's just that most hosts and telnets don't bother to
implement these features.  (In particular, 4.2 telnet and telnetd do
not.)  Both Bridge and Cisco implement them in the terminal server.
So you need only repair the telnetd on your host.  We have done this
on our primary timesharing machines.  (Pyramid 90x systems.  Alas, our
code may not be easy to import, since our telnetd is in the kernel,
for performance reasons.)  This leaves ^S.  It would be possible to
use telnet negotiations to turn on and off local ^S handling.
Unfortunately, no one seems to have defined such a negotiation.  Thus
we simply pick a character that is not used by Emacs for anything very
important (by default ^\ is used), and set up our terminal servers to
use that as a local XOFF.  The same character is set as XON.  I.e. it
is a toggle.  [An editorial comment:  Why did Berkeley define a new
protocol, rlogin, rather than simply implementing telnet fully and
adding a negotiation to handle toggling XOFF?]

Enough of generic descriptions.  Now down to details.  I'll start with
Bridge, because that is what we have had for the longest.  Bridge has
at least 3 different terminal servers: CS-1, CS-100, and CS-200.  They
have different numbers of ports: 32 for the CS-1 (I have heard rumors
that they may allow more now), 14 for the CS-100, and some smaller
number for the CS-200.  The CS-100 is the only one that I know, though
I think the CS-200 might be more attractive for a new installation.
The CS-100 uses several 68000's to get enough bandwidth to drive all
of the lines at full speed.  It boots from a floppy (although it is
also possible to get diskless machines, which boot over the network
from a special server CS-100.  This would be a nice idea for any
installation that intends to have a number of boxes.  The server can
also be used to help monitor the network, and to diagnose problems.)
Major networking parameters are set via a sysgen, which must be done
standalone (i.e. the system is not in normal operation).  This allows
you to set the network addresses, subnet masks, and servers used for
various purposes.  The system uses IEN116 for name service.  (This is
an older name server standard.  There are Unix implementations
available.)  You can use one or more of the boxes as name server --
they keep the name table on floppy, or set up one or more of your Unix
systems as a server.  The terminal interface is highly configurable.
You can set up any characters to do character echoing, tailor the
prompts and greeting messages, etc.  We set it up to have the same
control characters as TOPS-20 or VMS.  It appears that they have
options oriented to half duplex, and every other conceivable kind of
terminal environment.  Of course, you can choose parity, character
size, and all of that.  The box is designed to allow you to support
machines that don't have their own TCP/IP implementations.  You can
connect a group of ports from a CS-100 to ports on the host.  You can
define those ports as a hunt group with its own Internet address.
Then anyone who telnets to that address will get the first free line
in the group.  There is enough processing power in the box to be able
to handle 9600 baud output under normal circumstances.  However the
CS-100 is short on memory, so certain combinations of uses can cause
trouble (e.g. if you are doing this, and using the same box to drive a
printer).  I'll say a bit more about this below.  The following
transcript will give you an idea of the configuration options, as well
as the available commands.  (By the way, the mechanism that we used to
produce this transcript puts us into system manager mode from our
favorite Unix machine.  We could do it to any box on the Internet,
without typing a password.  Fortunately, it takes more than a simple telnet,
and the program does not appear to be widely distributed.  There is
also a limit to the damage one could do, since serious configuration
changes have to be done with a sysgen.)

Remote: show parameters
...............................Global Parameters...............................
DATE = Thu Nov 27 05:52:54 1986
WelcomeString = "^G^J^MBridge CS/100, Rutgers LCSR Ethernet, Node Hill-SYS, Version 1.2000^J^M"
PROmpt = "Hill-Sys> "                   NMPrompt = "SYS> "
LocalPassWord = ...                     GlobalPassWord = ...
CONNectAudit = OFF                      ERRorAudit = ON

Remote: show allsessions [NB: It's before dawn on Thanksgiving Day]
Port/session#  state                    Port/session#  state
! 0   LISTEN                            ! 1   LISTEN 
! 2   LISTEN                            ! 3   LISTEN 
! 4   LISTEN                            ! 5   LISTEN 
! 6   LISTEN                            ! 7   LISTEN 
! 8   LISTEN                            ! 9   CONCTD with 128.006.005.107
!10   LISTEN                            !11   LISTEN 
!12   LISTEN                            !13   LISTEN 

Remote: show (!9) parameters
Parameters  for PortId !9, current session
...................Port Transmission and VTP Characteristics...................
BUffersize = 82     DeVice = ( Terminal, Glass )
InterAction = ( Verbose, Echo, NoMacroEcho, BroadcastON, NoLFInsert )
InitMacro = "motd"  MaxSessions = 6     PRIvilege = User
.........................Port Physical Characteristics.........................
BAud = 9600         BSPad = None        CRPad = None        FFPad = None
LFPad = None        TabPad = None       DataBits = 8        DUplex = Full
LinePRotocol = ASynchronous             PARIty = None       StopBits = 1
UseDCDout = ( AlwaysAssert, NoToggle )  UseDTRin = Ignore
.................Session Transmission and VTP Characteristics..................
BReakAction = ( InBand )                BReakChar = Disabled
DIsconnectAction = None                 DataForward = None  ECHOData = OFF
ECHOMask = ( AlphaNum, CR, Term, Punct )                    ECMChar = ^^
EOM = Disabled      FlowControlFrom = Xon_Xoff
FlowControlTo = Xon_Xoff                FlushVC = OFF       IdleTimer = 2
LongBReakAction = IGnore                LFInsertion = None  MOde = Transparent
XOFF = ^\           XON = ^\
..................Sess
Remote: ?
      BRoadcast   ( <addr> ) <string>
      Connect     ( <addr> ) <address> [ ECM ]
      DEFine      <macro-name> = ( <text> )
      DisConnect  ( <addr> ) [<session number>]
      DO          <macro-name>
      Echo        <string>
      Listen      ( <addr> ) 
      ReaD        ( <addr> ) <option> <parameter>
      ROtary      !<rotary> [+|-]= !<portid>[-!<portid>] , ...
      SAve        ( <addr> ) <option> <filename>
      SET         <param-name> = <value> ...
      SETDefault  ( <addr> ) [<param-name> = <value>] ...
      SHow        ( <addr> ) <argument> ...
      UNDefine    <macro-name>
      UNSave      <filename>
      ZeroStats   
      <BREAK>     (to leave remote mode)

Remote: sho (!9) stat   [This was done Thanksgiving day.  Not very busy...]

PORT # 000.000.000.000 STATISTICS REPORT: -------------
DAILY-AVERAGE:   CALL/D    PKT/S     BYTE/S    ERROR/D
                 0         0         0         0         
BUSIEST-MINUTE:  CALL/M    PKT/S     BYTE/S    ERROR/M
                 0         0         0         0         
BUSIEST-SAMPLE:  MAX#SN    PKT/S     BYTE/S    ERROR
                 1         0         0         0         
HOUR CALL/H PKT/S  BYTE/S ERR/H         HOUR CALL/H PKT/S  BYTE/S ERR/H
0    0      0      0      0             1    0      0      0      0      
2    0      0      0      0             3    0      0      0      0      
4    0      0      0      0             5    0      0      0      0      
6    0      0      0      0             7    0      0      0      0      
8    0      0      0      0             9    0      0      0      0      
10   0      0      0      0             11   0      0      0      0      
12   0      0      0      0             13   0      0      0      0      
14   0      0      0      0             15   0      0      0      0      
16   0      0      0      0             17   0      0      0      0      
18   0      0      0      0             19   0      0      0      0      
20   0      0      0      0             21   0      0      0      0      
22   0      0      0      0             23   0      0      0      0      

Remote: sho ?
      SHow    ADDRess 
      SHow    AllSessions [ p ]
      SHow    CONFigurationS [<filename>]
      SHow    ( <addr> ) DefaultParameters [<param-name> ...]
      SHow    GLobalParameters 
      SHow    InternetPorts 
      SHow    InternetServers 
      SHow    MACros [<macro-name>]
      SHow    NAmes [<host name>]
      SHow    NetMAP 
      SHow    ( <addr> ) PARAmeterS [<param-name> ...]
      SHow    <param-name> ...
      SHow    ROtaries 
      SHow    ( <addr> ) SESsions [ P ]
      SHow    ( <addr> ) STATisticS [ Sample | Min | <hour> | Day ]
      SHow    VERSion 
      SHow    VirtualPorts 
      <BREAK>     (to leave remote mode)

Remote: ^C

The CS-100's are reasonably reliable when used as simple terminal
servers.  However from time to time we have run into glitches in their
TCP, which make us wary of using it for anything unusual.  For
example, at one point we ran a printer off a line on a CS-100.  The
host would connect to that port in order to access the printer.  The
box where we did this always seemed to crash more often than others.
Also, if there were problems, we had to reboot the box.  Apparently,
they did not implement RST in TCP.  If the host crashed, this could
lead the CS-100 to keep trying to send a character to a connection
that no longer existed.  It would ignore the RST that was sent to tell
it to desist.  At times, the rate of retry could be high enough to
noticeably affect the performance of the machine being attacked.
Historically, we have had fairly long delays in getting TCP problems
of this sort fixed.  TCP/IP was clearly a lower priority with them
than XNS. However over the last few months, they have apparently
raised the priority of TCP/IP, and we are now getting fixes to long-
standing problems.  Whether the particular problem I just described
is now fixed, I don't know.  The most serious problem is that the
boxes are short of memory.

  - There is a limit to the number of sessions you can have active
	at once.  If you use all 14 ports on a CS-100, there are
	only 4 extra sessions.  I.e. 4 people can have two sessions,
	or one person can have 4, but that's all.

  - The TCP buffers are small.  The packet sizes tend to be very
	small.  (I just telnetted to one, and got a send window
	of 102.)  This can increase the CPU overhead on the host
	and the network traffic	when doing such things as Emacs
	screen refreshes.

This is a known problem, and may not be present with all of the models,
or even on newer CS-100's.  So you should check with your salesman
to find out the current limits.
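A back-of-envelope sketch of why the small window hurts (the 1920-byte
refresh size below is just my estimate of a 24x80 screen's worth of
output, not a measured figure):

```python
import math

def segments_needed(refresh_bytes, window_bytes):
    """Minimum number of TCP segments required to move refresh_bytes
    when the receiver never advertises more than window_bytes."""
    return math.ceil(refresh_bytes / window_bytes)

# A full 24x80 Emacs screen refresh is roughly 24 * 80 = 1920 bytes.
# With the 102-byte send window observed above, that fragments into
# around 19 segments; a 1024-byte window would need only 2, so the
# per-packet overhead on the host and the wire drops accordingly.
```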

Note that Bridge has a large line of TCP/IP products.  I don't know
the current product list, but it includes gateways of various sorts,
and I think also a box that can handle 3270's.

=============

The cisco ASM is one of a set of products that use the same chassis.
It is a standard Multibus backplane, into which various boards can be
inserted.  Unlike Bridge, they don't actually have a finite set of
products.  They have these things with product numbers, like ASM-32/S.
But in fact all they are is a certain set of boards plugged into the
box.  You can add boards and produce objects that are a combination of
different announced products.  (E.g. if you add a second Ethernet card
to a terminal server, you have a thing that works as both a terminal
server and a gateway.)  There is a single CPU, a 68000 on a cisco
version of the SUN card.  (No connection with Sun Microsystems.  Sun
was one of several people who licensed the original SUN design from
Stanford.  Cisco's card is most similar to the Forward Technology SUN
board.)  The board has plenty of memory.  (1MB, which is loads of
memory for this application.)  Terminals are connected to terminal
interface cards that handle 16 ports.  The 3Com Ethernet card is
currently used.  Cisco supports up to 80 terminals on one box.
However if they are all active at once, there will not be enough CPU
power in the 68000.  This would be used if you had lots of offices,
but you knew that not everybody was logged in at once.  Cisco claims
that 32 ports can be in use at once with no problem.  We have a number
of 32-port boxes, and have never seen any slowdowns.  I think we are
probably going to start adding port cards so that we have 48-port
boxes.  They have a packaging problem with all of these ports.  How do
you put 80 RS232 connectors on the back of a box?  Obviously, you
don't.  Their preferred configuration uses 50-wire phone company
cables.  They have the standard phone company connectors on the back
of the boxes.  We run the cables to a board on the wall, where we fan
the wires out to phone company punch blocks.  Other wires are then run
out to the terminals.  We can then cross patch any terminal to any
port.  This is certainly the best way to handle large installations.
It results in compact connectors and fewer cables.  However if you
prefer RS232 connectors, they will put up to 32 on the back of their
box.  I think they also have a kludge for putting extra ones near the
box, connected by ribbon cable.  The boxes normally have their code in
ROM, though there are provisions to load it using TFTP from any host
system that supports that.  (Cisco sends out new code by giving us new
ROM's.)  When a machine comes up, it needs two services from some
server: (1) It has to find its Internet address.  It sends out both
RARP and bootp requests.  Bootp servers are available for 4.3.  We
also run it on 4.2, but I think it needs one extra ioctl.  RARP is
also widely available.  However before buying one of these boxes, you
should verify that you can run one or the other of these.  (2) It
attempts to load a configuration file using TFTP.  This contains
information such as terminal speeds, host name, greeting message,
routing tables, etc.  TFTP servers are widely available, and should
run on just about any system that supports TCP/IP.  Host name can be
handled via either IEN116 (an older TCP/IP name server protocol), or a
domain server.  It will broadcast, or you can give it a list of
servers to use.
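For the curious, the TFTP read request the box sends at boot is a very
simple packet (this is a sketch of the RRQ format from RFC 783; the
filename in the comment is made up):

```python
def tftp_read_request(filename, mode="netascii"):
    """Build a TFTP read-request (RRQ) packet per RFC 783: a 2-byte
    opcode of 1, the filename, a zero byte, the transfer mode string,
    and a final zero byte."""
    RRQ = 1
    return (RRQ.to_bytes(2, "big")
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

# The configuration fetch at boot might look something like
# tftp_read_request("hilltop-confg") -- the filename is invented here;
# the actual name would come from your configuration conventions.
```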

System administration can be done using any terminal or an incoming
telnet connection.  Simply enable (which requires a password).  It
is possible to type any of the commands interactively that go in the
configuration file, though it would be more likely to update the
configuration file and request that it be reloaded. (This will not
disturb other users, as they won't change parameters of terminals
that are currently in use.)  One slight security problem: the password
is defined in the configuration file.  Most versions of TFTP will only
access files that are protected so that everyone can read them.  This
means that some machine on your network will have the password in a 
file that is readable by the world.  (We have altered TFTP to allow
us to protect the file.  It uses a slightly nonstandard interpretation
of .rhosts to let us limit access to just cisco terminal servers.)

The command language looks like a typical DEC command scanner.  It's
modelled after TOPS-20, but would look familiar to any VMS user, and
indeed to most Unix users.  It responds to rubout or backspace, ^U or
^X, ^W [word delete], and ^R [retype line].  If you just type a host
name, it will connect.  We have not seen any limit to the number of
connections you can have active at once.  The example below will give
a feeling for the commands and options available.  (It was obtained by
a normal telnet connection.)  One of the strong points of the system
is that they try to let you see as many internal tables and parameters
as possible.  This can be very useful in debugging situations.  Note
that no ? help is available for the configuration commands.  However
the results of the "show" commands will give a good feeling for the
sorts of options that can be set.  There are not quite as many
terminal options as with Bridge.  In particular

  - I don't think any attempt is made to handle half-duplex terminals
  - You can't change the editing characters (backspace/rubout, etc)
  - No padding is supported.  (The host system is assumed to do that.)

However there are more network configuration options.  There is also
access control.  You can control incoming or outgoing connections on
any port.  Access control lists can contain wildcards.  You can also
use this mechanism in their gateways to control which hosts access a
given network.  (We use it for Arpanet access control.  If you list
individual hosts, a hash table is used, so it should not cause a
performance problem.  A list with several wildcards might not be quite
so fast.)  Cisco appears to have implemented the IEEE 802.whatever
encapsulation.  In addition to the usual Ethernet encapsulation using
ARP's, they support two versions of IEEE/ISO encapsulation, one which
follows the newest proposed method, and the other which seems to be
peculiar to H-P.
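The exact-host/wildcard distinction described above can be sketched in a
few lines of Python (an illustration only, not cisco's code; the
addresses and patterns are invented):

```python
# Sketch of an access-list check: exact host entries live in a hashed
# set (constant-time lookup), while wildcard entries must be scanned
# linearly.  Addresses and patterns are invented for illustration.

from fnmatch import fnmatch

exact_hosts = {"128.6.4.56", "128.6.4.194"}   # hashed, O(1) membership test
wildcards = ["192.12.88.*"]                   # scanned one pattern at a time

def permitted(addr):
    if addr in exact_hosts:                   # fast path: hash lookup
        return True
    return any(fnmatch(addr, pat) for pat in wildcards)

print(permitted("128.6.4.56"))    # True  (exact match)
print(permitted("192.12.88.3"))   # True  (wildcard match)
print(permitted("10.0.0.1"))      # False
```

This matches the performance note above: a list of individual hosts
stays fast no matter how long it gets, while every wildcard entry adds
to the per-packet scan.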

Connected to hilltop.rutgers.edu.
Escape character is '^]'.
     cisco ASM-32, Rutgers LCSR Computer Science Network, Node Hilltop.

   Please type name of machine you wish to connect to, followed by a <CR>.

hilltop>enable

Password: 

hilltop#?

banner          Change message of the day banner
clear           Reinitialization functions, type "clear ?" for list
configure       Configure from terminal or over network
connect <host>  Connect to host - same as typing just a host name
disconnect <cn> Break the connection specified by name or number
disable         Turn off privileged commands
enable          Turn on privileged commands
exit, quit      Exit from the EXEC
name-connection Give a connection a logical name
ping            Send ICMP Echo message
reload          Halt and reload system
resume          Make the named connection be current
send <line>|*   Send message to a terminal line or lines
set <option>    Set an option, type "set ?" for list
show <cmd>      Information commands, type "show ?" for list
systat          Show terminal lines and users
terminal        Change terminal's hardware parameters, type "terminal ?"
unset <option>  Clear an option, type "unset ?" for list
where           Show open connections
<cr>            To resume connection


hilltop#set ?

download.mode  Optimize settings for using Kermit, etc.
egp-tracing    Detailed printout of EGP transactions
escape <ch>    Local escape character
event-watching Display special gateway events
gateway        Gateway processing activity
hold <ch>      Local hold character
imp-loopback   Put any ARPA-1822 interfaces into self loopback
line-debugging Helpful debugging printout for RS232 lines
notify         Notification of data pending on idle connections
tcp-debugging  Debugging printout for TCP connections
tracing        Print datagram routing information

hilltop#sho ?

access-lists   Access control lists
arp            ARP cache
buffers        Network buffer utilization
controllers    Serial network interface statistics
egp            EGP neighbors
hardware       Hardware configuration
hosts          Host/address cache
imp-hosts      Active IMP hosts
interfaces     Network interface statistics
line <line>    Line information, may specify a line
memory         Memory utilization statistics
options        Options configured via "set" and "unset"
printers       Parallel printer status
processes      Active system proceses
redirects      ICMP redirect cache
routes         Network routing table
stacks         Process and interrupt stack use
terminal       Terminal parameters
tcp <line>     TCP information, may specify a line
traffic        Network protocol statistics
users          Summary of active lines and connections

hilltop#term ?

databits  5|6|7|8
flowcontrol  none|hardware|software  [in|out]
length  <length>
parity  none|even|odd|space|mark
speed  300|1200|2400|4800|9600|19200
start-character  <decimal-number>
stop-character  <decimal-number>
stopbits  1|2
terminal-type  <string>

hilltop#sh line 14

 Tty Typ    Tx/Rx    A Mode      Status  Capab Roty AccO AccI  Uses    Noise
* 14 TTY  1200/1200  L modem     100448   1100    -    1    -   139       79

Location: "Dialup x2970", Type: "", Length: 24 lines
TX/RX speeds are 1200/1200, 8 databits, 1 stopbits, no parity
No flowcontrol in effect.
Status currently=0x100448, default=0x100020, permanent=0x40
Capability currently=0x1100, default=0x1100, permanent=0x0
Idle EXEC timeout is 5 minutes.
Idle session timeout is 120 minutes.
Escape character is ^^ (30), default is ^^ (30)
Hold character is ^\ (28), default is ^\ (28)
Disconnect character is not set
Activation character is ^M (13)
hilltop#systat

 TTY       Host(s)             Location
 tty14     H008-19             Dialup x2970
 tty22     TOPAZ               Dialup x2976
 tty34     TOPAZ               Dialup x2954
*vty41     idle                TOPAZ
 vty42     idle                192.12.88.3

hilltop#sho traffic

IP statistics:
  Rcvd:  3075505 total, 23 format errors, 79 checksum errors, 0 bad hop count
         0 unknown protocol, 2998056 local destination, 77347 not a gateway
  Frags: 0 reassembled, 0 timeouts, 0 fragmented, 0 couldn't fragment
  Bcast: 974 received, 4 sent
  Sent:  0 forwarded, 2395890 generated, 64 encapsulation failed, 0 no route

ICMP statistics:
  Rcvd: 1 checksum errors, 85 redirects, 46 unreachable, 342 echo
        0 echo reply, 35 mask requests, 20 mask replies, 0 other
  Sent: 0 redirects, 0 unreachable, 0 echo, 342 echo reply
        1 mask requests, 0 mask replies

UDP statistics:
  Rcvd: 1526 total, 0 checksum errors, 1131 no port
  Sent: 1653 total, 0 forwarded broadcasts

TCP statistics:
  Rcvd: 2996037 total, 471 checksum errors, 716 no port
  Sent: 2394255 total

 --More--  

ARP statistics:
  Rcvd: 47699 requests, 1177 replies, 78 reverse, 69 other
  Sent: 289 requests, 533 replies (0 proxy), 1 reverse

Xerox ARP statistics:
  Rcvd: 0 requests, 0 replies
  Sent: 0 requests, 0 replies

Probe statistics:
  Rcvd: 16217 address requests, 81 address replies, 0 other
  Sent: 289 address requests, 4 address replies (0 proxy)

hilltop#sho interface

Ethernet #0 is up, hardware address 0260.8C02.5606, IP address 128.6.4.56
  MTU is 1504 bytes, encapsulation is ARPA, no access checking
  Address determined by Reverse ARP from host 128.6.4.194
  Time since last input is 0:00:00.000
  Time since last successful output is 0:00:00.016
  No output failure has occurred
     3601442 input, 4119 with errors, 541 no input buffers
     2401375 output, 3443 with errors, 0 congestion drops
     202 resets, 0 runts rcvd, 0 giants rcvd

hilltop#sho ?

access-lists   Access control lists
arp            ARP cache
buffers        Network buffer utilization
controllers    Serial network interface statistics
egp            EGP neighbors
hardware       Hardware configuration
hosts          Host/address cache
imp-hosts      Active IMP hosts
interfaces     Network interface statistics
line <line>    Line information, may specify a line
memory         Memory utilization statistics
options        Options configured via "set" and "unset"
printers       Parallel printer status
processes      Active system proceses
redirects      ICMP redirect cache
routes         Network routing table
stacks         Process and interrupt stack use
terminal       Terminal parameters
tcp <line>     TCP information, may specify a line
traffic        Network protocol statistics
users          Summary of active lines and connections

hilltop#sho line

 Tty Typ    Tx/Rx    A Mode      Status  Capab Roty AccO AccI  Uses    Noise
   0 CTY             - direct        40      0    -    -    -     1        1
   1 TTY  4800/4800  - direct    400040      0    -    2    -     0      107
   2 TTY  4800/4800  - direct    400040      0    -    2    -    70     2757
   3 TTY  4800/4800  - direct    400040      0    -    2    -    83      147
   4 TTY  4800/4800  - direct    400040      0    -    2    -    70       13
   5 TTY  4800/4800  - direct    400040      0    -    2    -    33       46
   6 TTY  4800/4800  - direct    400040      0    -    2    -    54     2036
   7 TTY  4800/4800  - direct    400040      0    -    2    -     0  2322046
  10 TTY  2400/2400  - direct        40      0    -    1    -    39        8
  11 TTY  2400/2400  - direct    400040      0    -    1    -     0        0
  12 TTY  2400/2400  - direct        40      0    -    1    -    30       41
  13 TTY  2400/2400  - direct        40      0    -    1    -    57        8
* 14 TTY  1200/1200  L modem     100448   1100    -    1    -   139       79
  15 TTY  1200/1200  L modem     100020   1100    -    1    -   112       23
  16 TTY  1200/1200  L modem     100020   1100    -    1    -   103      107
  17 TTY   300/300   L modem     100020   1100    -    1    -    81       13
  20 TTY  1200/1200  L modem     100020   1100    -    1    -    70       21
  21 TTY  1200/1200  L modem     100020   1100    -    1    -    41        3
* 22 TTY   300/300   L modem     500600   1100    -    1    -     2      166
  23 TTY  1200/1200  L modem     100020   1100    -    1    -    22        1

 --More--  
 Tty Typ    Tx/Rx    A Mode      Status  Capab Roty AccO AccI  Uses    Noise
  24 TTY  1200/1200  L modem     100020   1100    -    1    -     7       45
  25 TTY  1200/1200  L modem     100020   1100    -    1    -    13       12
  26 TTY  1200/1200  L modem     100020   1100    -    1    -    35       29
  27 TTY  1200/1200  L modem     100020   1100    -    1    -     5        0
  30 TTY  9600/9600  L modem     100020   1100    -    1    -     0        0
  31 TTY  9600/9600  L modem     100020   1100    -    1    -     0        0
  32 TTY  2400/2400  L modem     108000   1100    -    1    -    78      389
  33 TTY  2400/2400  L modem     100020   1100    -    1    -    34       19
* 34 TTY  2400/2400  L modem     100448   1100    -    -    -    25       34
  35 TTY  9600/9600  L modem     100020   1100    -    1    -     0        0
  36 TTY  2400/2400  L modem     100020   1100    -    1    -    49     2258
  37 TTY  2400/2400  L modem     100020   1100    -    1    -     9        0
  40 TTY  9600/9600  L modem     100020   1100    -    1    -     0        0
* 41 VTY             - virtual   120440      1    -    2    -    45        0
* 42 VTY             - virtual   120440      0    -    2    -     3        0
  43 VTY             - virtual   120240      0    -    2    -     8        0
  44 VTY             - virtual   120020      0    -    2    -     0        0
  45 VTY             - virtual   120020      0    -    2    -     0        0

hilltop#quit
Connection closed by foreign host.

The primary issue with cisco is that they are a new and small company.
We have had problems show up, as you would expect with any new
product.  They have all been fixed reasonably quickly.  Simple coding
blunders are normally fixed within a couple of days.  (I trust no one
expects a bug free product.  We are not so concerned about finding
bugs in new products.  We had at least as many bugs in the early days
of the CS-100's.  We are more concerned with how hard it is to get
them fixed.  As far as I know, there are no unfixed bugs at the moment
in the Cisco software.)  Rutgers has, as usual, run into a few really
difficult problems.  The most difficult ones turned out to be design
problems with two different boards used in the Arpanet gateway (both
from established vendors).  The question that a few of us have is
whether they will be able to continue their good support when they
have hundreds of customers.  The biggest problem we have with small
companies is that when they succeed, it is no longer practical for
everyone to talk to their wizards.  Either they supply no support, or
they build up a large staff of turkeys to deal with the users.  (We
have seen both strategies.)  However we don't know of any more
established vendors that have really brilliant solutions to this
problem either.  One of the strengths of the company is Len Bosack's
expertise in the area of TCP/IP and routing technology.  The folks
at Bridge are certainly competent, but our evidence does not suggest
that they have anyone of his caliber.  This shows up in how the
nooks and crannies of TCP are handled.  (It is inconceivable that
cisco would put out a TCP that failed to implement RST.)  It also
matters if you are thinking of building a large network, where
routing technology matters.  (However Bridge has in the past tended
to license DEC's technology for routing.  That is a perfectly
acceptable solution.)

In short, both Bridge and cisco make useful products.  We think
cisco's software design is somewhat superior.  But you have to balance
this against the dangers of dealing with a startup company, with
the details of what the particular products do and don't support,
and with the cost of the equipment needed for your particular
configuration.  (E.g. the Bridge CS-200 would tend to be more
cost-effective for locations with very small numbers of terminals,
but in most situations, cisco would probably come out ahead.)

-----------[000110][next][prev][last][first]----------------------------------------------------
Date:      Thu, 27-Nov-86 23:52:01 EST
From:      mills@huey.udel.edu.UUCP
To:        mod.protocols.tcp-ip
Subject:   Trailblazer cool to packets

Folks,

I have been evaluating a new Trailblazer packet-ensemble modem made by Telebit
for possible use to connect IP hosts together via ordinary dial-up lines. This
interesting modem packetizes serial-asynchronous data on multiple carriers and
is theoretically capable of speeds up to 14 Kbps. It operates in buffered,
half-duplex mode with error control by retransmission. I connected a pair of
Trailblazers between two fuzzballs in the same calling area operating with the
SLIP protocol at 4800 bps. For comparison a leased line operating with
conventional full-duplex, synchronous modems at 4800 bps was also available
between these machines. This is a short note describing the results of delay
and throughput tests.

The tests used ICMP Echo/Echo Reply messages with lengths randomly distributed
in the range 40-256 octets and were conducted in the same manner as described
in RFC-889. Each test accumulated 512 samples, where a sample consisted of one
ICMP Echo/Echo Reply volley across the link with no other traffic on the link.
The samples were then displayed on a bitmap display as a scatter diagram of
length versus delay and saved in a file (telbit.bit on udel2.udel.edu in Sun
format if you're interested). The linear regression line was then computed to
determine the intrinsic delay and throughput as a function of packet length.
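The regression step can be sketched as follows (an illustration only,
not the fuzzball code; the sample data below are fabricated, not the
measured Trailblazer samples):

```python
# Least-squares fit of one-way delay versus packet length, in the
# spirit of the RFC-889 measurements described above.  The intercept
# estimates intrinsic delay; the slope gives throughput.

def fit(samples):
    """samples: list of (length_octets, delay_ms) pairs.
    Returns (intercept_ms, slope_ms_per_octet)."""
    n = len(samples)
    sx = sum(l for l, d in samples)
    sy = sum(d for l, d in samples)
    sxx = sum(l * l for l, d in samples)
    sxy = sum(l * d for l, d in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, slope

# A perfectly linear toy path: 100 ms fixed delay plus 2 ms per octet.
toy = [(l, 100 + 2 * l) for l in range(40, 257, 8)]
b, m = fit(toy)
print(b, m)            # ~100.0 and ~2.0
# Throughput in bits/s is 8 / (slope in seconds per octet):
print(8 / (m / 1000))  # ~4000 bps for the toy data
```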

From the scatter diagram it is apparent that the Trailblazer packetizing
mechanism is multi-modal, in that different packetization algorithms are used
for the shorter packets and for the longer ones. By trial and error it was
discovered that the boundary between the two lies at about 125 octets, so
separate regression lines were then computed for each region. The (one-way)
results are shown below along with the results for the conventional 4800-bps
modem:

			Trailblazer		4800-bps modem
			40-125	125-256		40-125	125-256
	-------------------------------------------------------
	min delay (ms)	756	1093		103	247
	max delay (ms)	1084	1388		244	467
	slope (bps)	2075	3551		4802	4767

Obviously, the Trailblazer performance doesn't look too impressive compared to
the 4800-bps modem when both are operated at the same interface speed. One
reason for this is that the Trailblazer packetizes data internally upon
submission and depacketizes it before delivery. Unfortunately, the particular
fuzzball interfaces used here are simple character-at-a-time devices that do
not work reliably above 4800 bps.

If the interface speeds could be increased at both ends of the link without
limit, the Trailblazer delays could in principle approach values given by
subtracting twice the delays in the fourth column from those in the second and
third columns of the table. This leaves about 500 ms to be accounted for by
the Trailblazer packetization and transmission protocols, including the
turnaround and resynchronization procedures necessary for half-duplex
operation. Not bad, but not thrilling either.

Inspection of the scatter diagram suggests the 3551-bps slope for the
Trailblazer may be characteristic beyond the 256-octet limit of the
measurements. Above this value, however, the modem begins to flow-control the
source, at least in the test configuration, where the telephone line is only
about three miles long. The Trailblazer line speed can be estimated as
follows: of the (1388-1093) = 295 ms to transmit a (256-125) = 131-octet
message, (467-247) = 220 ms are consumed in the interface, so that 75 ms
represents the time required for transmission. Therefore the Trailblazers are
humping data at 131*8/.075 = 13973 bps, which could be predicted on the
assumption of good local line quality.
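The arithmetic above can be checked with a short script (a sketch only;
all millisecond figures are taken directly from the table above):

```python
# Back-of-the-envelope check of the Trailblazer line-speed estimate,
# using the delay deltas between the 125- and 256-octet columns.

octets = 256 - 125            # extra message length: 131 octets
tb_ms = 1388 - 1093           # extra Trailblazer delay: 295 ms
if_ms = 467 - 247             # extra interface (4800-bps modem) delay: 220 ms

tx_ms = tb_ms - if_ms         # time actually spent on the line: 75 ms
bps = octets * 8 / (tx_ms / 1000.0)
print(round(bps))             # 13973 bps
```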

I conclude the performance of the Trailblazer, while potentially brilliant,
is badly eroded in applications where the data already are packetized. This
would seem a shame, considering the thunderous horsepower of the modem
circuitry (M68000, TMS3020, oodles of memory), and could be easily remedied
by the inclusion of a parallel port and appropriate handshaking protocol.

Dave
-------

-----------[000111][next][prev][last][first]----------------------------------------------------
Date:      28 Nov 1986 06:06-EST
From:      CERF@A.ISI.EDU
To:        mills@HUEY.UDEL.EDU
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Trailblazer cool to packets
Dave,

I tested the Trailblazer, too, while at MCI. We ran into problems when the
unit was transmitting via digital PBX since the complex analog waveform
emerging from the modem was digitized by the PBX and converted back to
analog for injection into the public telephone network. This introduced
enough phase and other quantization problems to require a lot of retransmission.
Used with an async source over the MCI backbone, however, it delivered about 
11 kbps pretty consistently.

Parameters for packetizing are indeed an interesting issue - the target
market was not packetized, as you can well imagine. I strongly urge you to
get in touch with Telebit to see if they could tune things better for your
application.
Vint
-----------[000113][next][prev][last][first]----------------------------------------------------
Date:      Sat, 29-Nov-86 19:20:06 EST
From:      mills@huey.udel.edu
To:        mod.protocols.tcp-ip
Subject:   Updated scatter diagrams for ARPAnet/MILnet paths

Folks,

For those of you that can stroke a Sun, I have a bunch of scatter diagrams
showing some interesting characteristics of the ARPAnet, MILnet and the
gateways between them. For comparison, I also have scatter diagrams showing a
typical ARPAnet path in December, 1983. The diagrams can be FTPed from
UDEL2.UDEL.EDU (binary/image mode) and lit using the Sun screenload program.
Each diagram is about 40K octets in length and is stored in a file with BIT
extension. Most were made using fuzzballs either connected directly to the
ARPAnet or MILnet or behind a fast, lightly loaded gateway.

Each diagram shows delay versus length and was constructed using ICMP pings in
the manner described in RFC-889. These data should be considered only a sample
of Internet characteristics, so it is possible that collection of additional
data may reveal new surprises. Following are some brief comments, which should
be read with Sun in hand.

Files ISID.BIT and VENERA.BIT show a typical transcontinental path via
ARPAnet. The former reflects the path as of December 1983, while the latter
the path as of today. Regression lines are also shown on the diagrams. Note
the two-step delay characteristic for ISID, which was due to the ARPAnet
design at that time which used different allocation strategies for single and
multiple packet messages. The two-step characteristic is also apparent for
VENERA, but not as pronounced. Note the increased dispersion in the
contemporary data, which is hardly surprising to any of us.

Files ARPMIL.BIT, MILARP.BIT and ISIA.BIT show typical Internet paths between
hosts on ARPAnet and MILnet via an ARPAnet/MILnet gateway. ARPMIL shows an
ARPAnet path, MILARP a MILnet path and ISIA a combined path. The effect of
network load is clearly apparent when compared with VENERA. What bothers me
here is the huge dispersion at the lower packet lengths. A more clever routing
algorithm would show dispersion roughly proportional to length.

Files ARPNIC.BIT and MILNIC.BIT show typical ARPAnet (ARPNIC) and MILnet
(MILNIC) paths between east-coast hosts and the Network Information Center
host, which is connected directly to both nets. The effect of additional
trunking capacity on MILnet is obvious. Comparison with ARPMIL and MILARP
shows that maybe that capacity is in the wrong place.

I also have an extensive set of diagrams for NSFnet and some of its
tributaries. It would be interesting to extend these measurements to other
paths, including SATNET, WIDEBAND and SURAN hosts. All it takes is a
convenient fuzzball and a vampire tap or alligator clips.

Dave
-------

-----------[000114][next][prev][last][first]----------------------------------------------------
Date:      Sat, 29-Nov-86 22:15:18 EST
From:      uppal@uwvax.UUCP (Sanjay Uppal)
To:        mod.protocols.tcp-ip
Subject:   Submission for mod-protocols-tcp-ip

Path: uwvax!uppal
From: uppal@rsch.WISC.EDU (Sanjay Uppal)
Newsgroups: mod.protocols.tcp-ip,misc.wanted
Subject: MIT PC/IP request
Keywords: HELP, crosscompiler, MITPC/IP
Message-ID: <3010@rsch.WISC.EDU>
Date: 30 Nov 86 03:15:18 GMT
Distribution: na
Organization: U of Wisconsin CS Dept
Lines: 20

HELP! We obtained the PC/IP code of March 1986 from the MIT Lab (the tar
tape). However, we are unable to get the cross compiler. The person
at MIT didn't know where we could get it.

Meanwhile, we had an old cross compiler version, and went about
making the changes as mentioned in the INSTALL file in the release.
The "context diffs were merged by hand", and the instructions were
followed to the letter. However, we are in a deeper mess now, as we
are getting strange errors while doing "make".

Could somebody please direct us to the source of the cross compiler
which is compatible with the March 1986 distribution? Or better,
send/mail/ftp to us if you have a running version?

Thanks
(In desperation)

Sanjay Uppal (uppal@rsch.wisc.edu)
C.W. Bhide (bhide@jack.wisc.edu)


END OF DOCUMENT