The 'Security Digest' Archives (TM)


ARCHIVE: TCP-IP Distribution List - Archives (1982)
DOCUMENT: TCP-IP Distribution List for September 1982 (4 messages, 15108 bytes)
SOURCE: http://securitydigest.org/exec/display?f=tcp-ip/archive/1982/09.txt&t=text/plain
NOTICE: securitydigest.org recognises the rights of all third-party works.

START OF DOCUMENT

-----------[000000][next][prev][last][first]----------------------------------------------------
Date:      13 September 1982 1343-EDT (Monday)
From:      Richard H Gumpertz <Rick.Gumpertz@Cmu-10a>
To:        MsgGroup at BRL
Subject:   RFC 822
RFC stands for Request For Comments.  It is not at all clear that
comments were really wanted.  If there were such a thing as an
acceptance vote on 822, I guess I would have to vote against it -- too
many of the changes seem to be capricious and without agreement by a
large number of the people impacted.

In fact, I am not sure why 822 even exists.  Why not leave things as
they are until we can agree upon a complete (not incremental) overhaul
of mail protocols?

		Rick



-----------[000001][next][prev][last][first]----------------------------------------------------
Date:      Thu Sep 16 00:47:27 1982
From:      TCP-IP@BRL
To:        fa.tcp-ip
Subject:   TCP-IP Digest, Vol 1 #22


TCP/IP Digest            Tuesday, 14 Sept 1982     Volume 1 : Issue 22

Today's Topics:
                SAMNET TCP/IP Implementation Observations
       Pointer to Gateway Article in Data Communications Magazine
                   Query about "Standard Gateway" Code
             TCP/IP - VDH:  Queries and Answers for VAX UNIX
    TCP/IP for PWB/UNIX Query && Cray for TCP/IP -- Request for Help
                      TCP/IP for Intel MDS (8086)?
            TCP/IP for Honeywell Level 6?  How about IBM MVS?
                  TCP/IP on a PDP-11/40 with 80Kbytes?
----------------------------------------------------------------------
                         LIMITED DISTRIBUTION
          For Research Use Only --- Not for Public Distribution
----------------------------------------------------------------------

Date: 27 Jul 1982 0708-EDT
From: GEORGE at Afsc-Hq
To: tcp-ip at BRL
Subject: Testing SAMNET TCP/IP implementation

In testing the SAMNET (PDP-11, RSX11-M) TCP/IP implementation against
its first service host, the ISI TOPS-20 system, some peculiar aspects
of retransmission are becoming evident.  These may be of interest to
other implementors.

First, a few notes about the SAMNET implementation -

  The TCP/IP/1822 layers are implemented in a multiprocess task with
  internal scheduling on a prioritized basis; lower layers have higher
  priority than higher layers, receive side has higher priority than
  send side.  All told there are about 10 processes, including a timer process
  which gets a chance to execute once a second.  As soon as a process
  gives up control the highest priority process with a need to execute
  gets control.  Of course, all this is executing under the RSX11-M
  operating system but the TCP/IP/1822 task is generally at a
  higher priority than the various user tasks (TELNET and FTP).

  At the TCP layer, SEND requests from higher level protocols (TELNET,
  FTP) are processed by (1) moving the data to a TCP buffer, (2) freeing
  the requestor to issue additional SENDs, and (3) forming TCP segments
  to pass to the IP layer.  Send-flow is blocked for a connection when
  the send-side of the connection owns too many TCP buffer resources for
  unacknowledged data octets and segments.
  Retransmission timeout is set using the technique illustrated in
  RFC793, p41, with ALPHA=15/16, BETA=2, UBOUND=255, and LBOUND=2.
  Only the oldest segment will be retransmitted.
  The ACK strategy is to delay an ACK until the next second-boundary
  unless data is flowing in the direction of the ACK.  Issuing
  immediate ACKs or timing more frequently places an excessive
  burden on processing and seemed impractical.

  The IP layer has no significance to the discussion at hand.

  At the local net (ARPANET) layer, messages are queued to a host
  table if another message is in transit to the destination host.
  (Note: the use of the "message-ID" field for link number seems to
  prevent having more than one message outstanding to a host if
  one wants to handle the 1822 IMP-host error messages).  Since the
  higher layers can produce messages faster than the local net layer
  can dispatch them, a queue of messages quickly builds waiting
  for the one active output to get a RFNM.

Now for the retransmission peculiarities -

(1) When the TCP level decides to retransmit a segment, the IP and
   local net layers have no way to be aware that the segment from
   TCP should be put at the front of the host table queue, so it
   goes to the end of the queue to wait its turn.

   A retransmitted segment usually is being passed through the IP and
   local net layers when an ACK arrives for the original segment.
   The TCP layer discards an ACKed entry from the connection retransmission
   queue and uncovers the next entry which perhaps has a retransmission
   timeout equivalent to the one just ACKed due to a bursty mode of
   segment generation.  Frequently, this next segment on the connection
   retransmission queue also will get retransmitted just before its ACK
   arrives at the TCP layer (this is particularly true when the receiving
   host sends ACKs for each message arriving).

   This situation seems to propagate into an interleaving of
   retransmitted segments and new segments as both the TCP segment
   generator and the retransmitter do their jobs.  The result is that
   the IP and local net layers process retransmitted segments even
   after an ACK has been processed for the original segment.

   A temporary resolution to this problem has been to not put so
   much reverence in the calculated response time (especially when
   it gets down to the range of 1-2 seconds).  When the retransmission
timeout was delayed to twice the calculated response time, the
retransmissions disappeared.

(2) For incoming messages to the SAMNET system, the ACK delay to the
   next second-boundary is causing the remote host to retransmit,
   sometimes 3-4 times.  Of course the outgoing ACK is somewhat
   impeded because it's generated by a lower priority process
   (high layer, send side) and must contend with the queue which
   might exist at the local net host table (in this situation
   there's been little contention at the queue because the
   data flow was principally incoming).

In both cases the retransmissions have been kicked off by a too-strict
adherence to the retransmission timeout when it is in the range of
1-2 seconds.  The highly modularized, layered approach taken by
the SAMNET implementation is producing results that are not entirely
satisfactory.  Some more sophisticated cross-talk between the TCP
layer and the IP/local net layer, and between the segment generator
and the retransmission processor, is being investigated in search
of more satisfactory performance.

Implementors need to be aware of the internal techniques of other implementations.
Some dissemination of strategies along the lines of Dave Clark's recent
RFCs will be of use, but we've a ways to go yet.

   Chuck Norris (george@afsc-hq)

------------------------------

Date: 17 Aug 1982 12:37:42 EDT (Tuesday)
From: Jack Haverty <haverty@Bbn-Unix>
To: iccb at Bbn-Unix
Subject: Gateway article in Data Communications
Redistributed-by: Bob Hinden <hinden@Bbn-Unix>
Redistributed-to: gateway-info at BBN-UNIX

The August 1982 issue [of Data Communications] contains an article about
the gateway system.  It made it through the editorial pass largely
unscathed (at least they didn't put TCP on top of X.25 or some similar
'clarification'...).

Jack

------------------------------

Date:    8-Sep-82 10:00PM-EDT (Wed)
From:    Nathaniel Mishkin <Mishkin@Yale>
Subject: Standard Gateway Code
To:      Tcp-Ip at BRL

I have a question with what must be an obvious answer: what is the
"Standard Gateway Code available from BBN" that I've seen referred
to on several occasions?  I get the impression it runs on -11s.  Is
this true?  What does it gateway between (among)?

                -- Nat

[ The MACRO-11 Gateway is implemented in MACRO-11 for the smaller
PDP-11s, and supports a variety of "standard" network peripherals, such
as the ACC LH/DH-11. ]

------------------------------

Date:    23-Aug-82 9:36AM-EDT (Mon)
From:    Nathaniel Mishkin <Mishkin@Yale>
Subject: TCP/IP - VDH Query
To:      Tcp-Ip at BRL

I have heard that there exists or someone is working on a VDH driver
for either the BBN 4.1 Unix/IP or the Berkeley 4.2 Unix/IP.  Can anyone
supply a pointer?

		-- Nat

------------------------------

Date:     4 Sep 82 12:03:29-EDT (Sat)
From:     Mark Weiser <mark.umcp-cs@Udel-Relay>
To:       unix-wizards at Sri-Csl
cc:       tcp-ip at Mit-Ai
Subject:  Very Distant Host support under 4.1bsd Unix.

Our 50 kbit line to the Arpanet arrives next month.  I thought I was
all prepared by obtaining the IP/TCP kernel from BBN (although I
haven't looked at installing it yet), but now I hear a rumor...

We want to run Very Distant Host protocol directly on our Vax-11/780
running Berkeley 4.1 Unix.  Is this possible?  Friends at the University
of Rochester tell me they have tried and it doesn't work, only local
host support is any good.  Help!  Do I really need a C30?

[ For now, you either need ECUs, or a C/30.  -Mike ]

------------------------------

Date:     8 Sep 82 4:44:12-EDT (Wed)
From:     Doug Kingston <dpk@BRL>
To:       Mark Weiser <mark.umcp-cs@Udel-Relay>
cc:       unix-wizards at Sri-Csl, tcp-ip at Mit-Ai
Subject:  Re:  Very Distant Host support under 4.1bsd Unix.

	I can't speak to the condition of the VDH (or HDH) code
in 4.1, but even if it worked, you would not want to use it;
the system load of doing all the VDH (HDH) protocol on top of
TCP/IP is prohibitively expensive.  A better solution to your
problem of being distant from the IMP is to buy a pair of ECU's
(error correction units) which are manufactured by the same people
who bring you the LH/DH-11 ArpaNet Interface (ACC, Associated Computer
Consultants).  These wonderful boxes are designed to be placed between
the IMP host port and your CPU interface.  The ECU at the IMP end looks
like a host, and the ECU at the HOST end looks like an IMP.  In between
you hook up the best phone line you can get (the one you would be
running VDH on) and the ECU's do the error correction in hardware to
provide an error-free link from HOST to IMP.  The ECU can look like
a DISTANT or LOCAL host on either end, and the ECU is good for all the
bandwidth you can give it.  It's also a lot cheaper than getting another
IMP.
					Don't let VDH get you down,
							-Doug-

------------------------------

Date: 8 Sep 1982 09:43:45-PDT
From: mo at Lbl-Unix (Mike O'Dell [system])
To: mark.umcp-cs at Udel-Relay
cc: tcp-ip at Mit-Ai
Subject: Re: Very Distant Host support under 4.1bsd Unix.

With all due respect for the people that did the VDH back in the
Dark Ages, the VDH is famous for not working very well.  They
have been the cause of premature retirement for more than one
good ARPAnet software support person.  There are persistent
rumors that Greg Noel down at NOSC may be working on a VDH
driver for 4.1a, but if he gives up, no one would blame him.  The best
thing to do, if your IMP must be remote from you, is to get a pair
of ECU-II's from ACC.  The Error Control Units connect to modems
between the pair, and to the host, it looks like a local IMP interface,
and to the IMP, it looks like a local host interface.  ECU's are not
perfect, and if you have a really rotten phone line between them,
you will have grief, but not nearly as much as with a VDH.  The other
alternative would be to do an HDH driver, which would actually be
doing the world a large service.  But I am somewhat curious about
your comment about "do I really need a C30?"  If you don't
have one, who does??  I have been assuming from your questions that
you will be remote from some other IMP.  Somewhere there must be an IMP
HOST PORT to which your machine is attached.

	-Mike

------------------------------

Date: Wednesday,  8 Sep 1982 10:29-PDT
From: obrien at Rand-Unix
To: Mark Weiser <mark.umcp-cs@Udel-Relay>
Cc: unix-wizards at Sri-Csl, tcp-ip at Mit-Ai
Subject: Re:  Very Distant Host support under 4.1bsd Unix.

	I have never actually attempted to run VDH, but everyone I've ever
talked to has been very negative about the experience.  It is apparently
extremely difficult to get the software exactly right for checksumming, etc.,
of the packets from the IMP, and the link is very, very slow.

	If your phone Co. is good you might try running a local host
interface using ACC ECU boxes, which shove 1822 over a phone line using
SDLC.  These let you run a local/distant host interface over miles and miles
of phone line, if your IMP actually has room in it for a local/distant
interface.  Our own experience in this department has not been sterling,
because we have General Telephone here.  The link to Rand-Relay is an ECU
link over a leased line to an IMP four or five miles away.  This link worked
just fine until the Rixon-Sangamo T209 modem on the other end blew up, and
it hasn't been right since.  Our new C-30 eliminates our need for this link.

	It seems to be a little-known fact that on a Honeywell TIP, the TIP
hardware takes up so much rack space that the fourth hookup to the IMP MUST
be a VDH.  On C-30's this restriction has been removed.  That was our
situation and is the reason we chose to run with ECU's to an IMP miles away,
rather than attempting to run VDH for 50 feet.  After Gen. Tel. I'm not sure
which alternative was worse.


------------------------------

Date: 10 Sep 1982 01:32:33-PDT
From: CCVAX.ron at Nosc-Cc
To: mark.umcp-cs at Udel-Relay, mo at Lbl-Unix
Subject: Re: Very Distant Host support under 4.1bsd Unix.
Cc: tcp-ip at Mit-Ai, unix-wizards at Sri-Csl

The "persistent rumors that Greg Noel down at NOSC may be working on a VDH
driver for 4.1a" are no longer valid.  Greg quit NPRDC a few weeks ago
and the VDH project is now dead.  The day after he quit his management
scrambled for the $$$ to buy an LHDH and a pair of ECUs.

--Ron

------------------------------

Date: 10 Sep 1982 1207-PDT
From: DEDWARDS at Usc-Isi
Subject: Re: Very Distant Host support under 4.1bsd Unix.
To:   mark.umcp-cs at Udel-Relay, unix-wizards at Sri-Csl
cc:   tcp-ip at Mit-Ai

Don't know about the VDH support, but you do NOT need a c/30.
You can use a pair of ECU's to go the long haul and they allow you
to look like a local host (ECU's are built by ACC).

Howard Weiss

------------------------------

Date: 23 Aug 1982 at 1020 Pacific Daylight Time
From: wss at Lll-Unix (Walter Scott - Consultant/SC)
Subject: Berkeley TCP/IP on 2.8bsd at SRI?
To: tcp-ip at BRL
Cc: gp kawin at lll-unix

A few months ago, I remember reading on this mailing list that
some folks at SRI were attempting to port the Berkeley TCP/IP
code to 2.8 bsd on an 11/70.  What's the story now?  Was it
successful?  Is it available?  Has anybody else tried to do
something similar?

I guess what I really want to know is whether anybody has TCP/IP
working under 2.8bsd.

	Thanks,
	Walter Scott

[ The SRI project to integrate the 4.1a code into 2.8 is well underway.
  I'm sure we will be hearing a status report soon.  -Mike ]

------------------------------

Date: 17 Jun 1982 1242-PDT
From: PADLIPSKY at Usc-Isi
Subject: A Possibly Old Question
To:   tcp-ip at BRL

It's probably come up before, but I haven't been saving
back Digests, so with apologies for redundancy can anybody
tell me what the state of availability of TCP/IP (and even
process-level protocols) is for PWB UNIXtm (Version 6,
I think).
Thanks
and cheers,
map

------------------------------

Date: 22 Jul 1982 1722-PDT
From: Johnson at Sumex-Aim
Subject: Cray TCP/IP
To:   tcp-ip at BRL
cc:   samuelson at Sandia

A TCP/IP implementation is not available for the Cray.  CRI (Cray Research
Inc.) has been gathering data on what protocols they will need to
support in the future.  I suggested that since the government is
a large customer of theirs, they should consider TCP/IP support.
They were not familiar with the protocol.  They are, however,
interested in Cray user response, so I suggest you contact:

	Derek Robb
	CRI
	1440 N. Land Road
	Mendota Heights, MN  55120
	800-328-0248

(In fact, if anything will move them toward support, demand from
current, or prospective, Cray owners will do it.)

		--Suzanne Johnson

------------------------------

Date:     24 Jul 82 23:47:21-EDT (Sat)
From:     J. C. Pistritto <jcp@brl-bmd>
To:       tcp-ip at Brl-Bmd
Subject:  TCP-IP for Intel MDS?

Does anyone know of a TCP-IP implementation being done for any of the
Intel MDS-240 series processors?  These are the 8086 development systems
that Intel markets.  An implementation using either the 3Com or Interlan
Ethernet boards would be of interest.  On the subject of which, does
anyone have experience/comments on using these on a PDP-11 series
processor, (not VAX)?
						-JCP-

------------------------------

Date:     5 Aug 82 19:19:08-EDT (Thu)
From:     Doug Kingston <dpk@BRL>
To:       tcp-ip at BRL
Subject:  TCP/IP for hard cases

	This is an inquiry for a third party...

	TCP/IP for the following machines or close enough to consider
	porting:
			Honeywell Level 6 running GECOS
			IBM 30xx (3033?) running MVS

					Sigh.
						-Doug-

------------------------------

Date: 7 Sep 1982 1202-PDT (Tuesday)
From: yale at Nosc-Sdl (Bob Yale)
Subject:  Re:  tcp mailing list
To: tcp-ip at BRL

Mike,

I think I am going to have a problem getting the TCP/IP protocols
up and running.  The problem is that I have a PDP 11/40 running V6
with only 80kb of memory.  Do you know of anyone who has brought
a V7 system up on an 11/40?

Also what is the size of the TCP/IP code?  I might be able to make it
fit but if not we will have to get a new CPU.  That will be a
real nightmare getting the paper work through for new ADP equipment.

Basically, my question is:  How much memory will I need to bring up
a V7 Unix with a TCP/IP kernel?  And does Bell Labs distribute a
V7  for 11/40's?

Thanks
Bob Yale@nosc-sdl

[ Unless you use an ENABLE/34 from Able to add more memory to your 11/40,
  it will be *very* *painful* to fit UNIX plus TCP/IP in an 11/40.
  You might consider running the 11/40 as a Mills FUZZBALL.  -Mike  ]

END OF TCP-IP DIGEST
********************

-----------[000002][next][prev][last][first]----------------------------------------------------
Date:      23 Sept 1982
From:      TCP-IP at BRL
To:        TCP-IP at BRL
Subject:   TCP-IP Digest, Vol 1 #23
TCP/IP Digest           Thursday, 23 Sept 1982     Volume 1 : Issue 23

Today's Topics:
                        Comments about ACC ECU's
                           Reliability of VDH
                        No VDH under 4.1 BSD UNIX
             Performance of TCP/IP on Pronet Ring (4.1a BSD)
                Error Recovery and Host Access on the DDN
                 Lead on TCP/IP Implementation for GCOS
                    TCP/IP on an 11/40 class machine
              TCP/IP for Ungermann-Bass NET/ONE?  For SELs?
----------------------------------------------------------------------
                         LIMITED DISTRIBUTION
          For Research Use Only --- Not for Public Distribution
----------------------------------------------------------------------

Date:  15 September 1982 09:48 edt
From:  Charles.Hornig at Mit-Multics
Subject:  ACC ECU's
Sender:  Hornig.Multics at Mit-Multics
To:  tcp-ip at BRL

I don't know about the experiences of others with ECU's, but mine have
been rather negative.  What you ought to know about them is:

(1) They do not exactly simulate a local IMP.  In particular, until you
set HOST UP, it will not set IMP UP.  This forced us to patch our
software to ignore the IMP state until we set HOST UP.

(2) When they work, they work well.  When they break, they are very hard
to fix.  The ACC people I met (this was a few years ago) were very good
but they did not have the necessary tools to debug the interfaces.

------------------------------

Date: 15 Sep 1982 at 1011-EDT
From: avrunin at Nalcon
Subject: VDH vs ECU-LHDH
To: tcp-ip at BRL

We (Navy labs) have been using VDH software and hardware for many years
now and believe that many of the problems we have are due to glitches in
the software, even though it is very stable now.  We are in the process of
changing to LHDH 11s from ACC at the sites where we have local IMPs and
then using the LHDH 11s with ECUs for the other sites.

The biggest problem we have had with the VDH software is that we are 
the only users of the software we have and therefore have to maintain it
ourselves.  We believe we will be better off being in the mainstream and
having more supportable software.

If anyone is serious about using VDH we will soon have several VDH11Cs that
will probably be made available to DoD through equipment reutilization
procedures.

Larry Avrunin

------------------------------

Date: 14 Sep 82 10:27:52-PDT (Tue)
From: harpo!floyd!cmcl2!philabs!sdcsvax!greg at Ucb-C70
Subject: Re: Very Distant Host support under 4.1bsd Unix.
References: sri-unix.3199

I'm sorry to report that I will NOT be doing a VDH driver for 4.1aBSD.
The reason is not the technical difficulty, but politics -- the facility
where I worked was transferred (over my protests) to the control of
technically incompetent people whose ethics I find questionable.  I had
no choice but to resign my post.  I still hope someone will do it, but
I'm afraid it will not be me.

MO is wrong on one point, though -- we found the VDH very reliable, once
the hardware peculiarities were understood.  Both the C and the E versions
are bitches to drive, but we never had hardware problems with the E and
only twice with the C (over a period of about eight years).  I wish the
CPU had been that reliable.....

The best bet for people who currently have a VDH is to get some
of the magic boxes from ACC as indicated by MO and Doug (and others).
I'm told that the measured bandwidth is about ten to fifteen percent
less than the most recent version of the VDH driver (I've improved it
a lot), but, of course, the host overhead is somewhat less.  You pays
your money and you takes your choice.

-- Greg Noel, now working for NCR     ...!ucbvax!sdcsvax!greg

------------------------------

Date: 15 Sep 1982 2204-PDT
From: MCCUNE at Usc-Eclc
Subject: Remote connection of VAX/UNIX to ARPAnet
To:   Mark.UMCP-CS at Udel-Relay
cc:   TCP-IP at BRL, UNIX-WIZARDS at Sri-Csl

True VDH running in the VAX is certainly not the way to go.

ECUs are somewhat temperamental and a link that uses them isn't
considered part of the ARPAnet backbone.

AIDS-UNIX is going to be a server host sometime in the next month or so
by connecting to the SUMEX C/30 IMP.  We're using BBN's HDH
protocol over a 9.6 kb leased phone line.  This setup requires an
RS232C interface board in the C/30, an ACC IF-11/HDH interface
board in the VAX, and a UNIX driver for this new interface (being
written by BBN).  I'm not sure how production versions of the
ACC and C/30 interfaces will compare with a pair of ECUs in price,
but reliability and throughput will probably be better.  DCA and
BBN would like you more, too.

	Brian McCune
	Advanced Information & Decision Systems
	Mt. View, CA

------------------------------

Date: 20 Sep 1982 10:23:58-PDT
From: mo at Lbl-Unix (Mike O'Dell [system])
To: TCP-IP at BRL
Subject: ring performance numbers

At the request of the Moderator, I am reporting the results of
some VERY preliminary experiments with the Pronet ring and the 4.1a
IP/TCP implementation.  At the outset, let me say that measuring
protocol/network performance is very hard, and trying to attribute
whatever you see to real mechanisms is even harder.  Therefore,
keep in mind these numbers are tentative and should be viewed
with healthy suspicion.

The experiment:

	Base software:
	Berkeley 4.1a IP/TCP implementation
	This version is descended from Rob Gurwitz's implementation,
	but they are now rather different.  I won't attempt to
	predict how the differences might affect performance.
	The version used in the test has full IP routing and
	class A,B, and C network address support.  Additionally,
	the interface driver for the ring hardware supports
	"trailer protocols" (more about this below).

	Network hardware:
	Proteon "Pronet" 10 Megabit Ring. This was developed by
	Jerome Saltzer and Ken Pogren of MIT (Ken is currently at
	BBN) and Howard Salwan and Alan Marshall of Proteon.
	It is a token-passing ring with a fully-distributed token
	regeneration algorithm.  It is essentially singly-buffered
	on the receive side, but this was not observed to be
	a problem, most likely because starting DMA into memory doesn't
	require an interrupt.

	Systems for test:
	Two VAX-11/780 machines.  Each had 4 megs of memory
	and was running off CDC 9766 disks on an SI 9400 Massbus
	Emulator controller.  The systems on the two machines were
	identical, except for trivial differences (the hostnames!).

	Description of the test:
	The test involved running the following producer and
	consumer processes:

		Producer:

			char buf[1024];
			int i;
			for (i = 0; i < 4096; i++)
				write(fd, buf, 1024);

		Consumer:

			char buf[1024];
			for (;;)
				read(fd, buf, 1024);

	The tests involved running the producer on one machine,
	and one consumer on the other, and then repeating the test
	with one of each on each machine (connected across the net!).
	This test caused 1024 byte TCP segments to be sent, along with
	the TCP and IP headers, making the packet size a bit
	less than 1100 bytes.

	What we saw:
	First, we saw no evidence of wire limiting.  The ring
	hardware provides a "refused" notification which is used
	in the interface driver to trigger automatic retransmission of
	packets refused due to the single-buffered input side.  We saw
	about 5% of packets refused; in no case did TCP have to resend segments.
	The aggregate throughput in the one-producer/one-consumer case
	was about 90-100 kiloBYTES/second end-to-end.  This was computed by
	dividing the transfer volume (4 megabytes) by the time
	in seconds.  In this test, the transmit machine was running
	95-100% cpu load, minus the error induced by "vmstat 5"
	watching it go.  The receive side was running about 60-70%
	busy with no trailer protocol, and about 50-65% with trailer
	protocols.  We don't yet know what causes the variability.
	With two connections running, the throughput was a bit higher,
	but we don't know if the difference is significant.
	The two-connection case might produce very different results
	if the reads and writes were big enough to cause "forward
	windowing". 

	Final comments:
	Again, I cannot overstress the difficulty of comparing
	network hardware (implementations vs. architectures)
	and protocols (ditto; implementations vs. architectures).
	It is EXTREMELY difficult to know what you are comparing,
	and even more difficult to reliably attribute the cause
	of any differences.  Additionally, these are VERY preliminary
	results and are submitted with some reluctance, but maybe
	some numbers are better than pure folklore.  We will be
	conducting more in-depth measurements of both the Pronet
	ring implementation and the Berkeley TCP implementation,
	and we hope to have a much better idea of what is really
	going on when it is all over.  We will also be repeating
	the experiment reported here in a more controlled fashion.

	Finally, I repeat these numbers are tentative and are
	only indicative of "order-of-magnitude" performance.
	How these numbers will change as a function of machine
	load (like doing a real "ftp") is impossible to say
	at the moment (at least for me!).

		Cautiously yours,
		-Mike O'Dell

PS - An explanation of "trailer protocols".

	Because the IP/TCP headers are variable length, on the receive
	side, at least one copy is required if the packet is sent
	in "normal form".  "Trailer" format takes the normal packet
	and breaks it at the point between the header and data,
	and sends the header after the data.  The advantage is that
	if the data segment is a multiple of the pagesize, or can
	be padded to such a multiple, by reverse-mapping the receiving
	pages, you can get the headers and data into pages
	on page boundaries without a copy.  This lets you do
	exchange buffering with the user's address space to
	further reduce the number of copies.  It should be clear
	it only wins on the receive side, because the transmit side
	has knowledge of the header size and can force page
	alignment.  The receive side cannot, however, be aware of the
	header size before the packet arrives, so using the
	trailer format allows the clever allocations.
	Note that "trailer form" has nothing to do with IP/TCP;
	it is purely a "local encapsulation" and is invisible
	(above the interface driver) to IP and its clients.

------------------------------

Date:     15 Sep 82 17:21:18-EDT (Wed)
From:     Michael Muuss <mike@brl-bmd>
cc:       tcp-ip at Brl-Bmd
Subject:  DDN Host Access, Error Correction

[ I enclose the following message because some of the readers may be
  unfamiliar with the error correction in the ArpaNet/DDN.  -Mike ]

	The backbone of the DDN is fully error corrected using
the BBN RTP (Reliable Transport Protocol) between the C/30 IMPs.
This is documented in some detail in BBN Report 1822.

	There are several host-to-IMP connection possibilities,
with varying degrees of error correction protocol superimposed:

1)  LOCAL HOST	This is a TTL-level bit-serial connection, good for
connections less than 25 feet.  The link is assumed reliable, and
no error correction is performed.

2)  DISTANT HOST  This is a balanced line driver version of the LOCAL
HOST, and has the same reliability assumption.  This is good to 2000
feet, with a somewhat slower data rate than the LOCAL HOST (about 1Mbit/sec).

3)  VERY DISTANT HOST (VDH)  This is roughly the same as the RTP that
the IMPs use to talk to each other.  The Host CPU overhead required
to run this protocol is extensive.  The link is IBM bi-sync.

4)  HDLC VERY DISTANT HOST (HDH)  This is a reimplementation of the VDH
concept, only using bit-stuffed HDLC links rather than the IBM bi-sync
that the old VDH (above) used.  HDH is strongly preferred over VDH.

5)  X.25 DISTANT HOST  This is the same idea as VDH and HDH, but using
the X.25 (Levels 1 through 3) to provide a reliable, flow-controlled,
error-corrected link between the IMP and the Host.  Note that the Host
must still implement TCP and IP modules, in addition to X.25.  The X.25
is merely being used to provide the interface to the IMP, because of the
availability of X.25 interface hardware and drivers.

[ The X.25 interface option will not be available until sometime in 1984,
  according to current predictions. ]

Of course, the IMPs provide reliable transport once they get a hold of
the data.

In addition, communications through TCP are further checksummed (16 bits)
in each packet, so an additional level of data assurance is provided.
This may be excessive for the fairly reliable C/30 DDN backbone, but
becomes necessary when less reliable (e.g., packet radio) communications
links may be traversed.

I know of no plans to support SIP (from AUTODIN II) in the current DDN.
Does anybody seriously think SIP is good for anything?

[ I have been informed by DCA that SIP *will* be supported within the DDN,
  but only as a community unto itself.  That is, the plans for interoperability
  between SIP hosts and TCP/IP hosts are still in the formative stages.
  It is likely to be difficult. ]

				Best,
				 -Mike

------------------------------

Date:  15 September 1982 09:41 edt
From:  Charles.Hornig at Mit-Multics
Subject:  TCP/IP on Honeywell Level-6
Sender:  Hornig.Multics at Mit-Multics
cc:  tcp-ip at BRL

I believe that a TCP which may be portable to GCOS is being developed by
Honeywell Federal Systems Division for WWMCCS.

------------------------------

Date: 16 Sep 1982 at 0732-EDT
From: hsw at Tycho (Howard Weiss)
Subject: TCP/IP on 11/40 class machine
To: yale at Nosc-Sdl
cc: tcp-ip at BRL

[ Please note that Howard refers to the older V6 BBN implementation of TCP/IP
  which operates primarily in user mode.  The Gurwitz VAX implementation
  ported to the PDP-11s by SRI clearly will not fit in an 11/40.  -Mike ]
 
You should be able to bring up TCP on your 11/40 without too much heartache.
I have an 11/34 and an 11/23 (same class machine as your 11/40) and had
a version of BBN's TCP sort of running a while back (I ran into problems
because I was using a very old v6 without the new BBN IPC stuff and
also using a different 1822 interface).  I never got the TCP to
actually talk to anyone at that time, but I had NO problems at all
getting it onto the machine.  Without an NCP, UNIX (at least a V6
flavored one) fits quite nicely onto a non-split I/D machine.  We
have been running our 11/34 with the NCP for years now - barely
fitting everything in without yet resorting to overlays.  We even
have a few spare bytes of kernel space left over!!  But, without
NCP, there are little problems in getting UNIX into the 11/40
address space.  If you try to install a BBN type TCP which is
slow and clunky because it lives outside the kernel, you win
big on the 11/40 type machine since no kernel space is used
up.  I am awaiting a tape from DCEC with their improved version of
the BBN TCP that I will be installing on my 11/23 and 11/34 and
only eventually on my 11/70.
 
Howard Weiss

------------------------------

Date: 22 Sep 1982 04:09:16-PDT
From: ssc-vax!foo at Lbl-Unix
To: lbl-unix!TCP-IP at BRL
Subject:  IP/TCP for Ungermann-Bass NET/ONE?  For SELs?

We are considering implementing IP/TCP on a local area network using
Ungermann-Bass NET/ONE.  There will be one VAX 11/780 (running VMS)
and at least 7 SELs (running whatever their OS is).

My problem is that I don't want to re-invent the wheel for IP/TCP,
hence I am looking for:
	1. IP/TCP implementation for VAX/VMS.  Code should be in "C" or
	   Fortran 77.
	2. IP/TCP implementation for SEL.  Code should be in Fortran 77.

Anyone out there willing to give, trade or sell us code?  I have already
talked to Digital Technology in Illinois who have a VAX/VMS implementation
for about $15K which includes higher level software for UNET from 3Com,
about $6K for just IP/TCP.

Any help will be appreciated.  Thanks in advance.

					Y. Pin Foo
					MS 8H-56
					Boeing Aerospace Co.
					Seattle, WA 98124.
					(206)773-3436

			Usenet address: ...!ssc-vax!foo
			Arpanet: ssc-vax!foo at Lbl-Unix

------------------------------

END OF TCP-IP DIGEST
********************

-----------[000003][next][prev][last][first]----------------------------------------------------
Date:      30 Sep 1982 1922-PDT
From:      Henry W. Miller <Miller>
To:        tcp-ip
Cc:        Miller
Subject:   TCP-IP Distribution List
	Welcome to the TCP-IP Distribution List!!!

	Jon Postel has asked the NIC to take over the
responsibility for the periodic update of the TCP-IP-STATUS
report.  Therefore, we would like to ask that you provide an
updated report on the state of your efforts to implement TCP-IP
on your system by October 15th.  In addition, we would like to
have the questionnaires sent out by the NIC a short time back
completed and returned.

	The latest update was released on June 8th, 1982, and
can be obtained via FTP by ANONYMOUS login.  The filepath is:
[SRI-NIC]<PROTOCOLS>TCP-IP-STATUS.TXT.

	 We are particularly interested in the addition and
expansion of TCP services.

	In addition to this function, it is hoped that this
distribution list can aid in the following areas:

*	To act as an on-line exchange among TCP developers and
	maintainers.

*	To announce new and expanded services in a timely manner.



	In the preparation of this mailing list, I am certain
that I have missed some people who should be added, and have
probably included some people who do not wish to be on this
list.  In either case, I apologize in advance.

	This distribution is an open membership list, and any
interested party with a legitimate need can be added.

	To be added to (or removed from) this list, please send
your request to me directly (MILLER@SRI-NIC).  Please do not
send your request to TCP-IP@SRI-NIC, as that address forwards
to the entire list!!!

	A few words of warning:  do not use this open forum
to discuss matters which could be construed to be for
limited distribution.  Please use private messages, if need be.

	Again, let me welcome you to this list.  It is hoped
that this forum will help the network reach transition by the
required date.  Should you have any questions, please send
electronic mail, or call me by phone.  (415) 859-5303.

	Cheers!!!

Henry W. Miller/Network Information Center
-------

END OF DOCUMENT