The 'Security Digest' Archives (TM)


ARCHIVE: TCP-IP Distribution List - Archives (1987)
DOCUMENT: TCP-IP Distribution List for October 1987 (294 messages, 154785 bytes)
SOURCE: http://securitydigest.org/exec/display?f=tcp-ip/archive/1987/10.txt&t=text/plain
NOTICE: securitydigest.org recognises the rights of all third-party works.

START OF DOCUMENT

-----------[000000][next][prev][last][first]----------------------------------------------------
Date:      Thu, 1-Oct-87 10:05:22 EDT
From:      howard@COS.COM (Howard C. Berkowitz)
To:        comp.protocols.tcp-ip
Subject:   Re: TCP/IP and DECnet

In article <8709290741.AA02129@sluggo.sun.com>, melohn@SUN.COM (Bill Melohn) writes:
> Not true. Downline loading, upline dumping, and the remote console
> utility are handled by MOP, the Maintenance Operations Protocol, yet
> another Digital propriatary protocol that is NOT included under the
> publically available DECnet suite.


MOP is fairly thoroughly documented in the DDCMP specification
manual, part of publicly available Digital Network Architecture
documentation.

Use of MOP by DECnet software, however, deals more with VMS
design than with the protocols.  Using the DDCMP manual, I was,
in a previous job, able to design a dump/reload handler using MOP
(which, for other reasons, we did not use).


-- 
-- howard(Howard C. Berkowitz) @cos.com
 {uunet,  decuac, sun!sundc, hadron, hqda-ai}!cos!howard
(703) 883-2812 [ofc] (703) 998-5017 [home]
DISCLAIMER:  I explicitly identify COS official positions.

-----------[000001][next][prev][last][first]----------------------------------------------------
Date:      Thu, 1-Oct-87 11:23:05 EDT
From:      backman@interlan.UUCP (Larry Backman)
To:        comp.protocols.tcp-ip
Subject:   Re: RCTE

In article <8709291511.AA18314@ucbvax.Berkeley.EDU> PADLIPSKY@A.ISI.EDU (Michael Padlipsky) writes:
>Never having been at all fond of reinventing wheels, I hastened to
>FTP the SUPDUP RFC and print it out at my terminal.  When I got to
>"Due to the highly interactive characteristics of both the SUPDUP
>protocol and the ITS system [which was the original Server for which
>the protocol was developed], all transactions are strictly character
>at a time and all echoing is remote" I aborted the printing.  Am I


	[]

	Me too.  SUPDUP has been in the back of my mind for the past year
	as a viable TELNET alternative.  However, examination of the
	spec reveals that it too does remote host echoing.  The product
	that we provide, TELNET through a TCP gateway from a Novell LAN to the
	world, has 4 hops to go through before a typed character reappears on the
	screen.  Each keystroke on a PC workstation goes across the Novell
	subnet to the gateway,  from the gateway to the remote host, and
	thence back from where it came.  We do all sorts of tricks in the
	PC to limit subnet traffic, buffering et al., but no matter what
	you do, the remote echo is a killer.

	I am looking for alternatives also.  Ideas? solutions?
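
A minimal sketch of the kind of client-side batching involved
(illustrative only, not the Interlan code): accumulate keystrokes in the
PC and flush them to the gateway on a line terminator, a full buffer, or
a short forwarding timer, so a burst of typing costs one packet rather
than one per keystroke.  The send_to_net() routine and the 200 ms timer
value are assumptions.

    #include <stdio.h>
    #include <time.h>

    #define FLUSH_MS 200                        /* assumed forwarding timer */

    static char    buf[256];
    static int     len;
    static clock_t last_flush;

    static void send_to_net(const char *p, int n)   /* placeholder for the */
    {                                               /* gateway transmit    */
        printf("packet: \"%.*s\" (%d bytes)\n", n, p, n);
    }

    static void flush_buf(void)
    {
        if (len > 0)
            send_to_net(buf, len);
        len = 0;
        last_flush = clock();
    }

    static void key_pressed(char c)
    {
        buf[len++] = c;                         /* echo locally, queue for net */
        if (c == '\n' || c == '\r' || len == (int)sizeof(buf) ||
            (clock() - last_flush) * 1000 / CLOCKS_PER_SEC >= FLUSH_MS)
            flush_buf();
    }

    int main(void)
    {
        const char *line = "dir *.c\n";         /* simulate some typing */
        while (*line)
            key_pressed(*line++);
        return 0;
    }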


					Larry Backman
					Micom - Interlan

-----------[000002][next][prev][last][first]----------------------------------------------------
Date:      Thu, 1-Oct-87 14:41:58 EDT
From:      mckee@MITRE.ARPA (H. Craig McKee)
To:        comp.protocols.tcp-ip
Subject:   RCTE

There has been much discussion of SUPDUP and what "we" might do to
minimize the need for remote echoing.  I think "we" are the ARPANET
community.  Can anyone offer assurance that the ISO community, in the
development of the Virtual Terminal Protocol, is equally interested in
minimizing the need for remote echoing?

Regards - Craig

-----------[000003][next][prev][last][first]----------------------------------------------------
Date:      Thu,  1 Oct 87 18:06 PDT
From:      Michael Stein                        <CSYSMAS@UCLA-CCN.ARPA>
To:        TCP-IP@sri-nic.arpa
Subject:   Re: TCP performance limitations
> I have been doing some work in this area to predict potential
> performance at 100 Mb/s FDDI rates. First, you can expect a
> TCP to execute about 1000-1200 instructions per packet,
> assuming all is well. Some of this is checksum. In fact, I
> recall some early statistics in which the checksum alg took up 40% of the
> CPU cycles for processing incoming segments of TCO. I am lumping
> IP level processing into TCP in the 1000-1200. This is absolutely
> a WAG - so anyone with some hard data on instruction count is
> most welcome to provide better info.

The instruction count sounds low to me, how about 10 times more?

(A 1000 byte packet sounds like it would take 500 adds just to
compute the TCP checksum, not to mention a 64K packet).

Speaking of checksums, it seems to me that the IP header checksum
could be replaced with a "packet" level CRC at the link level and
done by hardware.  Most (all?) HDLC type chips provide this
without any extra hardware (or effort).

Unfortunately for a TCP connection, most of the checksum overhead
is in the TCP checksum (which is an end-to-end check) and this
sounds harder to move off of the general purpose CPU.  The idea
would be to let your general purpose 14 MIP CPU do general
purpose work rather than adding up checksums.
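
For reference, the Internet checksum inner loop is one 16-bit add per
pair of bytes, which is where a figure like 500 adds for a 1000-byte
packet comes from.  A minimal, unoptimized version (real implementations
unroll the loop or sum 32 bits at a time):

    #include <stdio.h>
    #include <stddef.h>

    unsigned short in_cksum(const unsigned char *data, size_t len)
    {
        unsigned long sum = 0;

        while (len > 1) {                   /* one add per 16-bit word */
            sum += (unsigned long)((data[0] << 8) | data[1]);
            data += 2;
            len  -= 2;
        }
        if (len == 1)                       /* odd trailing byte */
            sum += (unsigned long)(data[0] << 8);
        while (sum >> 16)                   /* fold carries (ones complement) */
            sum = (sum & 0xffff) + (sum >> 16);
        return (unsigned short)~sum;
    }

    int main(void)
    {
        unsigned char pkt[1000] = { 1, 2, 3 };      /* 1000-byte example */
        printf("checksum = 0x%04x\n", in_cksum(pkt, sizeof pkt));
        return 0;
    }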

> With no constraints, and using 1000 byte packets, you would
> be sending 10,000 packets/second to achieve full throughput.
> That gives you 100 microseconds worth of insruction time.
> At 14 MIPs, that is 1400 instructions. So, if you did nothing
> else but TCP/IP, you might make it, with respect to
> instruction rate.

I have been thinking of how to design a T3 (45Mb) type speed
packet switch (just thinking) and there are some real problems
with doing IP packet header processing when you need to process a
packet every 6 us.  (Voice packets want to be about 100 bytes so
you need to be able to handle about 56K packets/sec).

Virtual circuits sure seem easier at the packet level at this speed
(smaller packet overhead too).  Of course, a virtual circuit could
carry embedded IP packets.

-----------[000004][next][prev][last][first]----------------------------------------------------
Date:      Thu 1 Oct 87 19:56:50-PDT
From:      Karl Auerbach <AUERBACH@CSL.SRI.COM>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   More on TCP performance
Thank you all for all the comments.

It seems that if one can get a net with high enough bandwidth,
low enough error rate, and short enough delay, TCP could, in
theory, consume a significant portion of the available bandwidth.

An example of the net that raised my initial question is the Los Alamos CP*
net which will run its links at 640Mbits/sec.  Total capacity of the
net will be on the order of 20G (yes that's a 'G', not an 'M') bits/sec.
Lengths are short, so delays will probably be less than a few thousand
bit times (i.e., much less than a millisecond).  The end nodes will be
Cray X-MP, Cray 2, and faster.

(At 640Mbits it only takes about 8 milliseconds to transmit a full
segment!)

What bothers me the most is that we've had HYPERchannels for a long
time and 80meg Proteon rings for a while, but the highest TCP
value I heard was about 17 megabits.  Why the discrepancy?  Is it something
intrinsic to the TCP protocol (and probably in ISO TPx as well) or
in the implementations or hardware?
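
One number worth keeping in mind while weighing the possibilities: TCP
can never have more than 65535 bytes outstanding (the window field is 16
bits wide), so throughput is bounded by window/RTT no matter how fast the
medium.  A back-of-envelope calculation with some illustrative round-trip
times:

    #include <stdio.h>

    int main(void)
    {
        double window_bits = 65535.0 * 8.0;     /* largest advertisable window */
        double rtt[] = { 0.001, 0.005, 0.030 }; /* illustrative RTTs, seconds  */
        int i;

        for (i = 0; i < 3; i++)
            printf("RTT %5.3f s -> at most %6.1f Mbit/s\n",
                   rtt[i], window_bits / rtt[i] / 1.0e6);
        return 0;
    }

At a 30 ms round trip the bound is already down around 17 Mbit/s, which
at least suggests that window size and turnaround time, not just raw
instruction counts, deserve a hard look.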

		--karl--
-------
-----------[000007][next][prev][last][first]----------------------------------------------------
Date:      02-Oct-87 01:05:04-UT
From:      mills@udel.edu
To:        tcp-ip@sri-nic.arpa
Subject:   NSFNET woe: causes and consequences
Folks,

Things have been very bad around the NSFNET since last Thursday. After several
16-hour days and much experimentation, I think I understand at least some of
the reasons. If I am correct, you are not going to like the consequences.

Last Thursday the primary NSFNET gateway psc-gw became increasingly flaky,
eventually to the point where it and its seventy-odd nets disappeared from EGP
updates. Backup gateways linkabit-gw and cu-arpa picked up the slack, but not
without considerable losses and delays due to congestion. When the new ARPANET
code was installed over the weekend, psc-gw and its PSN (14) both completely
expired, reportedly due to "resource shortage," the usual BBN euphemism for
insufficient storage or table overflow, especially for connection blocks which
manage ARPANET virtual circuits. Apparently, BBN backed out of the new code,
so the PSN is unchanged from Thursday.

Meanwhile, Maryland gateway terp, also connected to a PSN (20) running the new
ARPAware, began behaving badly, so much so that terp was simply turned off,
leaving another Maryland gateway to hump the load. At this time (Thursday
evening) the gateway is still off. Since both psc-gw and terp have similar
configurations, connectivity and PSN (X.25) interfaces, one would assume the
same varmint bit both of them.

Meanwhile, I was sitting off PSN 96 trying to figure out what was going on and
noticed linkabit-gw 10.0.0.111 and dcn-gw 10.2.0.96 could not reach psc-gw at
its ARPANET address 10.4.0.14. However, both of these buzzards could reach
other hosts with no problem. Furthermore, EGP updates received from the usual
corespeakers revealed psc-gw was working just fine. I concluded something
weird was spooking the ARPANET; however, I found that cu-arpa 10.3.0.96 and
louie 10.0.0.96 could work psc-gw at its ARPANET address. I thought maybe X.25
was the key, since all of the other PSN 96 machines use 1822, and cranked up
swamp-gw 10.9.0.96 using X.25, but found no joy with psc-gw either.

When Dave O'Leary of PSC called to tell me their ACC 5250 X.25 driver for the
MicroVAX was spewing out error comments to the effect that insufficient
virtual circuits were available, all the cards fell into place. The 5250
supports a maximum of 64 virtual circuits. Apparently the number of ARPANET
gateways and other (host) clients has escalated to the point that the
64-maximum was exceeded. Probably the PSN was groaning even before that, which
might have led to the earlier problems over the weekend. The reason some
gateways could work psc-gw anyway was that they had captured the virtual
circuits due to significant traffic loads and frequent connection attempts. My
tests were from lightly loaded host ports which couldn't break into the mayhem
which must be going on in the psc-gw 5250 board.

I have looked at the 5250 driver code, which is pretty simplistic in how it
manages the virtual-circuit inventory. It now appears to be of the highest
priority that a more mature approach be implemented in the driver, so that
virtual-circuit resources can be reclaimed on the basis of use, age, etc. In
principle, this is not very hard, but would have to be done quickly.
Meanwhile, I suspect a lot of X.25 client gateways (not just NSFNET) are or
soon will be very sick indeed. Note that reclamation means open circuits to
one destination may have to be closed abruptly, possibly losing data, and then
reopened to another destination. Under thrashing
conditions where the load is spread over lots of other gateways and virtual
circuits are flapping like crazy, the cherished ARPANET reputation for
reliable transport may be considerably tarnished.
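
In outline, the kind of reclamation I have in mind looks something like
the following (a sketch only, not the 5250 driver; vc_open()/vc_close()
stand in for the real call-setup and clear procedures):

    #include <stdio.h>
    #include <time.h>

    #define MAXVC 64                   /* the 5250 circuit limit */

    struct vc {
        int    in_use;
        long   dest;                   /* remote address, abstracted to a long */
        time_t last_used;
    };

    static struct vc vctab[MAXVC];

    static void vc_open(struct vc *v, long d) { v->in_use = 1; v->dest = d; }
    static void vc_close(struct vc *v)        { v->in_use = 0; }

    struct vc *vc_for_dest(long dest)
    {
        time_t now = time(NULL);
        int i, lru = 0;

        for (i = 0; i < MAXVC; i++)            /* circuit already open? */
            if (vctab[i].in_use && vctab[i].dest == dest) {
                vctab[i].last_used = now;
                return &vctab[i];
            }
        for (i = 0; i < MAXVC; i++)            /* free slot available? */
            if (!vctab[i].in_use) { lru = i; goto open; }
        for (i = 1; i < MAXVC; i++)            /* no: evict least recently used */
            if (vctab[i].last_used < vctab[lru].last_used)
                lru = i;
        vc_close(&vctab[lru]);                 /* abrupt close, as noted above */
    open:
        vc_open(&vctab[lru], dest);
        vctab[lru].last_used = now;
        return &vctab[lru];
    }

    int main(void)
    {
        vc_for_dest(14L);                      /* illustrative destinations */
        vc_for_dest(96L);
        printf("two circuits now open\n");
        return 0;
    }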

Those of us who have pondered the wisdom of underlaying X.25 virtual circuits
beneath a connectionless service have repeatedly said that this kind of
problem was certain to occur sooner or later. There are now about 200 gateways
and 300 networks out there. As the ARPANET evolves toward a gateway-gateway
(many-to-many) service, rather than a host-gateway (few-to-many) service, the
problem can only get much worse. I personally believe the ARPANET architects
and engineers, as well as the host and gateway vendors, must quickly come to
solid grips on this issue. Our most precious resource may not be packet
buffers, but connection blocks.

Dave
-------
-----------[000008][next][prev][last][first]----------------------------------------------------
Date:      Fri, 2-Oct-87 09:04:59 EDT
From:      howard@cos.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations

In article <2922@ames.arpa>, lamaster@pioneer.arpa (Hugh LaMaster) writes:
> In article <8709290008.AA00477@ucbvax.Berkeley.EDU> AUERBACH@CSL.SRI.COM (Karl Auerbach) writes:
> 
> >Someone asked me today what are the performance limits on a TCP connection.
> >The situation he posited was on in which there are no intervening
> :
> >resources in the hosts, and low noise.  It was further posited that
> 
> An interesting question. It depends on HOW low noise your connection is.
> Because of acknowledgement and retransmission requirements, the faster the
> link, the lower noise it has to be to maintain a high delivered fraction of
> the raw channel speed.  This is in addition to the question of the interaction
> of acknowledgement delay and window size, which some have mentioned, and which
> is also a big problem.  To realize the high bandwidth in practice requires
> host software smart enough to adaptively distinguish between a high bandwidth
> low noise channel, and something like Arpanet, and adjust its behavior
> appropriately to either situation.  Offhand, I am not sure how to do this.
> Anybody have any ideas?

     A mechanism for implementing adaption to noise could be
   drawn from the SNA technique for varying the window size
   on FID4 Transmission Groups.  Essentially, a given path
   starts out with a default window size, which can be changed
   by nodes along the way based on their buffer availability
   status.  For severe congestion, a node can reset the
   window size to 1, for minor congestion, a node can decrease
   the window size BY 1.  If a node feels it can handle more
   traffic, it can also set a bit indicating it would like
   to increase the window size by 1.

     While this mechanism is intended more for congestion control
   adaption rather than error adaption, I did use it in a
   protocol for a network management system which used both
   dedicated and dial lines for control channels.  I started
   with a value assumed for dial lines, and gradually increased
   the frame and window size based on modified error-free 
   interval length.  By "modified," I mean that I kept an error
   counter which was decremented and incremented differently
   for retransmissions and for successfully transmitted sequences --
   retransmissions decremented the error-free interval counter
   by a lesser value than did long sequences of successful
   transmission.  This modification protected the frame and
   window sizes from radical changes due to error bursts.
   In general, those sizes were changed occasionally by 1,
   at thresholds determined by simulation, to give the
   best mixture of parameters for the encountered error conditions.  
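
In outline, the adjustment rule looks something like this (one plausible
reading, offered for illustration; the constants and entry points are
assumptions, not the SNA or network-management code):

    #include <stdio.h>

    #define WIN_MIN 1
    #define WIN_MAX 7                 /* assumed ceiling */

    static int window    = 2;         /* assumed dial-line default */
    static int err_count = 0;

    void on_congestion(int severe)    /* severe: reset to 1; minor: step down */
    {
        if (severe)
            window = WIN_MIN;
        else if (window > WIN_MIN)
            window--;
    }

    void on_retransmission(void)      /* errors push the counter up quickly */
    {
        err_count += 4;
    }

    void on_good_sequence(void)       /* long error-free runs bleed it off   */
    {                                 /* slowly, so a short burst of errors  */
        if (err_count > 0)            /* cannot whipsaw the window size      */
            err_count--;
        else if (window < WIN_MAX)
            window++;
    }

    int main(void)
    {
        on_retransmission();
        on_good_sequence();
        on_congestion(0);
        printf("window = %d, err_count = %d\n", window, err_count);
        return 0;
    }
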
-- 
-- howard(Howard C. Berkowitz) @cos.com
 {uunet,  decuac, sun!sundc, hadron, hqda-ai}!cos!howard
(703) 883-2812 [ofc] (703) 998-5017 [home]
DISCLAIMER:  I explicitly identify COS official positions.

-----------[000009][next][prev][last][first]----------------------------------------------------
Date:      Fri, 2-Oct-87 09:40:14 EDT
From:      brennan@alliant.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: SUPDUP protocol

In article <12338315541.10.BILLW@MATHOM.CISCO.COM> BILLW@MATHOM.CISCO.COM (William Westfield) writes:
>Although the SUPDUP [...] it does not provide for any
>local echoing capability at all, nor does it provide for local "break
>characters" [...]
>TOPS20 has the concept of a break character bitmap, but not for an
>echoing bitmap, [...]
>DEC's CTERM protocol provides the means of transmitting such info over
>a network connection [...]
>Unix, one of the worlds most popular operating systems, doesn't have
>either concept.  The Annex boxes implement some local editing
>functionality, but this requires both a custom version of the editor,
>and custom software on the Annex, and is not a published standard.
>(nor does it help outside of the editor...)
>
>Bill Westfield
>cisco Systems

The local editing mentioned above is a special mode the Annex may enter
in either telnet, rlogin, or local mode. In fact, it does require a custom
version of an editor (gnu-emacs, currently). However, in its "native mode",
(Annex to Encore's Multimax), local editing, character batching, etc. are
all performed at the Annex. The initial developers of the Annex, Jonathan
Taylor (now of SUN Microsystems) and Rob Drelles (now of Stratus) designed
a "distributed tty driver" for the Annex and Encore's Unix OSes (both 4.2
and V); the bulk of the Unix tty driver(s) runs in the
Annex. In cooked mode, nary a character is returned to the host until a
"normal Unix forwarding character" is typed, i.e. something that would
cause characters to be moved from the raw to canonical queue. IOCTLs for
modem handling, line speed changing, etc. are all processed. Of course in
raw/cbreak mode, there is little that may be done, though some character
batching (under control of a forwarding timer) may still be performed.
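
In rough outline, the forwarding decision looks like this (schematic
only, not the Annex source; the forwarding-character set shown is an
assumption):

    #include <stdio.h>
    #include <string.h>

    #define EOT  004                  /* ^D */
    #define INTR 003                  /* ^C */
    #define QUIT 034                  /* ^\ */

    static int is_forwarding_char(int c)     /* would move raw -> canonical */
    {
        return c == '\n' || c == '\r' || c == EOT || c == INTR || c == QUIT;
    }

    int main(void)
    {
        const char *typed = "cat /etc/motd\n";
        char pending[256];
        int  n = 0;
        size_t i;

        for (i = 0; i < strlen(typed); i++) {
            pending[n++] = typed[i];          /* edit and echo locally */
            if (is_forwarding_char((unsigned char)typed[i])) {
                printf("forward %d bytes (\"%.*s\") to host\n", n, n, pending);
                n = 0;                        /* one packet per forwarding char */
            }
        }
        return 0;
    }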

Rrrrrrrich.

-----------[000010][next][prev][last][first]----------------------------------------------------
Date:      Fri, 2-Oct-87 10:08:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: More on TCP performance

Karl,

Even when you aren't hurt by delay, CPU speed can prevent full
bandwidth achievement - I would guess that for the high speed
nets, CPU resource has been the limiting factor.

Vint

-----------[000011][next][prev][last][first]----------------------------------------------------
Date:      Fri, 2-Oct-87 10:09:45 EDT
From:      bzs@BU-CS.BU.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   SUPDUP protocol


From: David C. Plummer <DCP@QUABBIN.SCRC.Symbolics.COM>
>The cynic in me says you won't see much real improvement in Unix or VMS
>or whatever unless and until their owners bite the bullet, commit to
>entering the 1980s (from the 1960s), and pour money into the development
>hole.  I would actually suggest they try to be visionaries and enter the
>1990s.

Although I can't speak to VMS one would think the current efforts in
windowing standards (eg. X and NeWS) for Unix indicate about as strong
a commitment to advancing interactive interfaces as one sees elsewhere
today. Unix has always been perhaps unique in this area in that all
fundamental developments such as this have been viewed in terms of the
widest possible view of machine architectures (currently ranging at
least from PC/AT to Cray-2's in sheer size, RISC, CISC, parallel
architectures etc on another dimension, variety I suppose.)

When one has narrowed their view to purely the current architectural
technology it's not surprising that some speed in introduction of
products is gained. I can only make allusion to the hare and the
tortoise to perhaps put this into some perspective. Consider, for
example, the status of (eg) ITS and Unix today, their age in fact is
not all that different.

I believe the current widespread introduction of remote window
standards such as X and NeWS render the above anything but
hypothetical.

In fact I think they are "SUPDUP". It's the discussion of dumb ASCII
terminals at all (and their optimization) that casts this conversation
into ancient terms. I believe we are simply in a similar transition
phase towards bitmapped (etc), locally intelligent interfaces that we
were several years past when co-workers would tell me "how can you
work on all that CRT stuff when everyone around here has this large
investment in keypunches and teletypes, you live in the clouds..."

Put simply, a Macintosh or a Sun workstation (&c) attached to a real
network (i.e. not emulating RS232) is about the dumbest "terminal" I
want to think about anymore.

	-Barry Shein, Boston University

-----------[000012][next][prev][last][first]----------------------------------------------------
Date:      Fri, 2-Oct-87 11:55:48 EDT
From:      narten@PURDUE.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations

Does anyone have pointers to work being done on the performance of
transport protocols (including TCP) when communications links are not
reliable?  E.g., how do various protocols behave in the presence of
lost datagrams?  I am asking this more from the theoretical point of
view than from the angle of tuning an existing implementation. For
instance, if 10% of the packets are lost, what happens to the
throughput of TCP? I realize that there are a lot of variables that go
into this, but it is still interesting to fix various parameters
while varying packet loss rate, or to observe how window size, RTT,
and packet lossage interact.
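
As a trivial starting point, a back-of-envelope model (not a real TCP)
at least bounds part of the damage: if each packet is lost independently
with probability p and every loss costs exactly one retransmission of
that packet, the expected number of transmissions per delivered packet
is 1/(1-p), before timeout and window effects are even counted.

    #include <stdio.h>

    int main(void)
    {
        double p;

        for (p = 0.0; p < 0.55; p += 0.1)
            printf("loss %4.0f%% -> %.2f transmissions/packet, "
                   "throughput factor %.2f\n",
                   p * 100.0, 1.0 / (1.0 - p), 1.0 - p);
        return 0;
    }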

Thomas

-----------[000013][next][prev][last][first]----------------------------------------------------
Date:      Fri, 2-Oct-87 12:02:54 EDT
From:      mckenzie@LABS-N.BBN.COM.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re:  RCTE

The ISO VT service/protocol divides the world into synchronous and asynchronous
terminals.  Synchronous terminals don't echo ever.  Async terminals have a
RCTE-like mechanism defined (but perhaps not required).

Alex McKenzie
 

-----------[000015][next][prev][last][first]----------------------------------------------------
Date:      Fri, 02 Oct 87 15:09:31 -0400
From:      Craig Partridge <craig@NNSC.NSF.NET>
To:        narten@purdue.edu
Cc:        tcp-ip@sri-nic.ARPA
Subject:   TCP and Loss

Thomas,

The work I was doing on RDP led me into the problems of how transport
protocols perform in the face of loss.  I haven't had much time to
look at the problem recently (I'm doing this in my spare time -- and
don't have much) but keep plugging away.

I'm interested in situations in which loss is inherent -- that
is, you will get loss, no matter what your TCP does.  Packet radios
suffering from noise or jamming are good examples.  Loss caused by
congestion is a different problem.

The paper Phil Karn and I gave at SIGCOMM touches on the problems. Karn's
algorithm is a method for keeping an accurate round-trip time estimate
even when loss rates reach 50%.
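
The essence of the algorithm, sketched (the smoothing constants and the
clamp below are the usual textbook choices, used here only for
illustration):

    #include <stdio.h>

    static double srtt = 1.0;         /* smoothed round-trip time, seconds */
    static double rto  = 2.0;         /* current retransmission timeout    */

    void on_ack(double measured_rtt, int was_retransmitted)
    {
        if (was_retransmitted)
            return;                   /* ambiguous sample: ignore it */
        srtt = 0.875 * srtt + 0.125 * measured_rtt;
        rto  = 2.0 * srtt;
    }

    void on_timeout(void)
    {
        rto *= 2.0;                   /* back off, and keep the backed-off   */
        if (rto > 64.0)               /* value until an unambiguous ack lets */
            rto = 64.0;               /* srtt be updated again               */
    }

    int main(void)
    {
        on_timeout();
        on_ack(0.5, 1);               /* retransmitted segment: no update */
        on_ack(0.5, 0);               /* fresh segment: update            */
        printf("srtt = %.3f s, rto = %.3f s\n", srtt, rto);
        return 0;
    }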

Currently I'm seeing if I can model what happens to acknowledgement and
transmission strategies if the loss rate really climbs.  So far I
know that there are situations in which at least some strategies to
reduce the number of acks you send fail *big* under loss -- you get
more total packets sent than if you had sent the acks you suppressed.
Beyond that I don't have anything to say yet -- I'm finding the mathematics
of it difficult.

I'd be interested to know of other people working in this area.

Craig
-----------[000016][next][prev][last][first]----------------------------------------------------
Date:      Fri, 2-Oct-87 14:19:47 EDT
From:      haas%gr@CS.UTAH.EDU (Walt Haas)
To:        comp.protocols.tcp-ip
Subject:   Re: RCTE

I've been following the discussion about TELNET echoing with some interest.
The problem has long since been solved in the big (public) network world.
A good example of how the solution works is represented by my X.25
implementation for the DEC-20 (RIP, sigh...).  There were two cases worth
distinguishing:

1) Minimum packet charge.  In this case the PAD which was connected to the
   user's terminal did echoing of characters, and forwarded a packet only
   when there were enough characters to fill one, OR the user entered a
   transmission character, OR the user didn't type anything for a while.
   In this case the TOPS-20 system was set for page mode, half duplex
   operation.  The PAD grabbed ^Q/^S to use for terminal flow control.

2) Screen editing.  In this case characters were echoed by the host.
   The PAD forwarded soon after each character was keyed in.  The TOPS-20
   system was set for full duplex, and passed ^Q/^S thru transparently to
   the application (usually EMACS or some such).

I wrote a little command which switched between the two modes by sending
an X.29 packet from host to PAD and, at the same time, switching terminal
modes inside TOPS-20.  With just a little more work this sequence could
have been built into EMACS.
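
The rough shape of the two profiles, in terms of the X.3 parameters most
relevant here (2 = echo, 3 = data-forwarding characters, 4 = idle timer).
The values are illustrative, and the actual X.29 set-parameter message
encoding and the matching TOPS-20 mode change are omitted:

    #include <stdio.h>

    struct pad_profile {
        int echo;        /* X.3 param 2: 1 = PAD echoes locally, 0 = host echoes */
        int forward;     /* X.3 param 3: which characters force a packet out     */
        int idle_timer;  /* X.3 param 4: forwarding delay, 0 = none              */
    };

    static const struct pad_profile line_mode   = { 1, 2, 0 };  /* forward on CR */
    static const struct pad_profile editor_mode = { 0, 0, 1 };  /* forward fast  */

    static void send_x29_set(const struct pad_profile *p)       /* placeholder */
    {
        printf("X.29 SET: echo=%d forward=%d idle=%d\n",
               p->echo, p->forward, p->idle_timer);
    }

    int main(void)
    {
        send_x29_set(&line_mode);     /* cheap page-mode operation         */
        send_x29_set(&editor_mode);   /* char-at-a-time for EMACS sessions */
        return 0;
    }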

So how did it work?  Great!  I had the pleasure of sitting in New York
running EMACS on UTAH-20 over Telenet, with good response.  Then I could
quickly switch back to mode 1 (the default) for normal TOPS-20 command
processing.

One of the reasons this is hard to do with TELNET is that the TELNET
standard is worded in such a way that you don't have to implement these
functions in order to say you have a standard TELNET implementation.
The CCITT standard for PADs, in contrast, requires that you actually
implement a lot of functionality before you can say you conform.

----------------*
Cheers  -- Walt     ARPA: haas@cs.utah.edu     uucp: ...utah-cs!haas

DISCLAIMER: If my boss knew I was using his computer to spread opinions
            like this around the net, he'd probably unplug my ter`_{~~~

-----------[000019][next][prev][last][first]----------------------------------------------------
Date:      Sat, 3-Oct-87 06:15:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: RCTE

The ISO community, to the extent it works in the public networking
domain, is equally interested in avoiding costly char at a time modes,
in my opinion.

Vint Cerf

-----------[000020][next][prev][last][first]----------------------------------------------------
Date:      Sat, 3-Oct-87 06:36:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations

As you move upward in the bandwidth range, you have decreasing amounts of
time to process each object passing through - the high speed switching fabrics
developed at Bell Labs and Bell Core have the characteristic that very little
per packet processing is being done and, as you surmise, a kind of virtual
circuit is set up; however, the switching fabric is able to share the
bandwidth of the transmission resources, despite the VC set-up, because
the VCs are just table entries and not reservations of actual capacity
within each trunk.

At the terminations of such a switching fabric, however, I think one
still will need some end/end checking. Moreover, if we contemplate
the kind of linking of networks we have today (vastly different internal
operation), we may still need the end/end checking that TCP does. Rather
than putting the TCP in the mainframe, anymore, though, it is fair to
consider the sort of DMA interface which permits the TCP and perhaps
other layers of protocol to be housed on an external board, equipped with
special purpose logic or at least dedicated processing, placing data or
taking data from the main processor's memory. Communications becomes a
matter of placing buffers in memory and possibly signalling their arrival
to the communications processor.
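
Schematically, the host side of such an interface might look like the
following (hypothetical, not any particular vendor's board): host and
protocol processor share a ring of buffer descriptors, and sending is
just filling a buffer, marking its descriptor and poking a doorbell
register.

    #include <stdio.h>
    #include <string.h>

    #define RING_SIZE 8
    #define BUF_SIZE  1024

    struct descriptor {
        volatile int  owned_by_board;       /* 1 = board may transmit buffer */
        int           length;
        unsigned char data[BUF_SIZE];
    };

    static struct descriptor ring[RING_SIZE];
    static int head;

    static void ring_doorbell(void) { /* would write a device register */ }

    int host_send(const void *payload, int len)
    {
        struct descriptor *d = &ring[head];

        if (d->owned_by_board || len > BUF_SIZE)
            return -1;                      /* ring full or buffer too small */
        memcpy(d->data, payload, (size_t)len);
        d->length = len;
        d->owned_by_board = 1;              /* hand the buffer to the board */
        head = (head + 1) % RING_SIZE;
        ring_doorbell();                    /* signal arrival of new work */
        return 0;
    }

    int main(void)
    {
        printf("queued: %d\n", host_send("hello", 5));
        return 0;
    }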

I suspect that this description is not far away from the kinds of
equipment already made and sold by companies like EXCELAN and ACC.
Forgive me if I failed to mention the dozens of other companies whose
products may work this way - I'm not up to speed on all the commercial
products now flowering in the TCP/IP fields.

At an IP switch point, you are still faced with at least full IP
level processing and handling 45 Mb/s and up at that point is
still a big processing load, unless we re-think the IP level and
try to find a way to fabricize it, as the Bell folks have with
the lower level packet switching. Hmm.

Vint

-----------[000021][next][prev][last][first]----------------------------------------------------
Date:      Sat, 03 Oct 87 11:45:11 EST
From:      Thomas Narten <narten@purdue.edu>
To:        Michael Stein <CSYSMAS@OAC.UCLA.EDU>
Cc:        TCP-IP@sri-nic.arpa
Subject:   Re: TCP performance limitations
>Speaking of checksums, it seems to me that the IP header checksum
>could be replaced with a "packet" level CRC at the link level and
>done by hardware.  Most (all?) HDLC type chips provide this
>without any extra hardware (or effort).

One should be careful not to underestimate the need for end-to-end
checksums between the various protocol layers talking to one another.
I am reminded once again of the times (note the plural) that our
Ethernet with 16 bit CRCs at the link level disintegrated. The
symptoms were the scrambling of random bits of data in the Ethernet
frame, apparently before the frame was sent out on the wire. Hence, no
checksum errors. Because any of the bits, including those in the
header could be trashed, all hosts on the cable were receiving
bogus packets. The protocol most affected by this was ARP, which
relies on the link level checksum for error detection. Needless to
say, when ARP gets confused nothing works.

Another example of the danger of relying on link level checksums is
given in Clark/Reed/Saltzer's "End-to-End Arguments in System Design".
There, a transient error in copying data within the gateway was not
detected because link level checksums were relied upon. The real
kicker in their example was the lack of end-to-end transport level
checksums (e.g. TCP checksum).

A few factors need to be carefully weighed.

1) Where are datagrams corrupted? Historically, it has been in the
transmission on the "wire". Perhaps now, corruption takes place
primarily within gateways and at the boundaries between the machine
and the communications device. Can we ignore them?

2) How serious are the effects of undetected datagram corruption? One
can argue that in IP's case, higher layer protocols will detect errors
and that will be sufficient. However, changing a few bits in the IP
header changes the semantics of the datagram, which might dramatically
affect the subnet. Consider the destination address changing from
directed packet to multicast.

3) How often do the errors occur? As we rely more and more on network
protocols, it may be the case that we need more, not less error
detection.

Granted, it may be necessary for performance reasons to do away with
some types of error detection. 

Another possible solution involves encapsulating IP datagrams within
another protocol for packet switching within a specific network. IP
checksumming would be done only at the gateways, and within the
network packets would be processed using only the optimized protocol.
By careful design of the protocol, one is better able to minimize the
effects of undetected errors on the subnet, and get the benefits of
fast (e.g. no checksums) packet switching in point-to-point networks.
This approach is used in Cypress and Blazenet.

Of course, this does not solve all problems either. For one thing, it
only works within the logical network. This is great for large
backbone networks multiplexing much traffic from multiple connections,
but might not help much for a single TCP connection, since throughput
on any one connection is still limited by gateways (the weakest links
in the chain).

Thomas
-----------[000022][next][prev][last][first]----------------------------------------------------
Date:      Sat, 3-Oct-87 23:49:45 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Internet Architecture Task Force workshop

Folks,

The Internet Architecture Task Force (INARC) studies technical issues in the
evolution of the Internet from its present architectural model to new models
appropriate for very large, very fast internets of the future. It is organized
as a recurring workshop where researchers, designers and implementors can
discuss novel ideas and experiences without limitation to the architecture and
engineering of the present Internet. The output of this effort represents
advance planning for a next-generation internet, as well as fresh insights
into the problems of the current one.

The INARC is planning a two-day retreat/workshop for 17-18 November at BBN to
discuss a fresh start on advanced internet concepts and issues. The agenda for
this meeting will be to explore architecture and engineering issues in the
design of a next-generation internet system. The format will consist of
invited presentations on selected topics followed by a general discussion on
related issues. Written contributions of suitable format and content will
be submitted for publication in the ACM Computer Communication Review.

In order to have the most stimulating discussion possible, the INARC is
expanding the list of invitees to include those researchers with agenda to
plow, axe to grind, sword to wield or any other useful instrument for that
matter. Contributors are invited to submit concise summaries of presentations
of from fifteen to forty minutes in electronic form to mills@udel.edu or in
hardcopy form to

Dr. David L. Mills
Electrical Engineering Department
University of Delaware
Newark, DE 19716
(302) 451-8247

Up to forty participants will be selected on the basis of quality, relevance
and interest. Following is a list of possible areas and issues of interest
to the community. Readers are invited to submit additions, deletions and
amendments.

1. How should the next-generation internet be structured, as a network of
   internets, an internet of internets or both or neither? Do we need a
   hierarchy of internets? Can/must the present Internet become a component of
   this hierarchy?

2. What routing paradigms will be appropriate for the new internet? Will the
   use of thinly populated routing agents be preferred over pervasive routing
   data distribution? Can innovative object-oriented source routing mechanisms
   help in reducing the impact of huge, rapidly changing data bases?

3. Can we get a handle on the issues involved in policy-based routing? Can a
   set of standard route restrictions (socioeconomic, technopolitic or
   bogonmetric) be developed at reasonable cost that fit an acceptable
   administrational framework (with help from the Autonomous Networks Task
   Force)? How can we rationalize these issues with network control and
   access-control issues?

4. How do we handle the expected profusion of routing data? Should it be
   hierarchical or flat? Should it be partitioned on the basis of use, service
   or administrative organization? Can it be made very dynamic, at least for
   some fraction of clients, to support mobile hosts? Can it be made very
   robust in the face of hackers, earthquakes and martians?

5. Should we make a new effort to erase intrinsic route-binding in the
   existing addressing mechanism of the Internet IP address and ISO NSAP
   address? Can we evolve extrinsic binding mechanisms that are fast enough,
   cheap enough and large enough to be useful on an internet basis?

6. Must constraints on the size and speed of the next-generation internet be
   imposed? What assumptions scale on the delay, bandwidth and cost of the
   network components (networks and gateways) and what assumptions do not?

7. What kind of techniques will be necessary to accelerate reliable transport
   service from present speeds in the low megabit range to speeds in the
   FDDI range (low hundreds of megabits)? Can present checksum, window and
   backward-correction (ARQ) schemes be evolved for this service, or should we
   shift emphasis to forward-correction (FEC) and streaming schemes?

8. What will the internet switch architecture be like? Where will the
   performance bottlenecks likely be? What constraints on physical, link
   and network-layer protocols will be advisable in order to support the
   fastest speeds? Is it possible to build a range of switches running
   from low-cost, low-performance to high-cost, high-performance?

9. What form should a comprehensive congestion-control mechanism take? Should
   it be based on explicit or implicit resource binding? Should it be global
   in scope? Should it operate on flows, volumes or some other traffic
   characteristic?

10. Do we understand the technical issues involved with service-oriented
   routing, such as schedule-to-deadline, multiple access/multiple
   destination, delay/throughput reservation and resource binding? How can
   these issues be coupled with effective congestion-control mechanisms?

11. What will be the relative importance of delay-based versus flow-based
   service specifications to the client population? How will this affect the
   architecture and design? Can the design be made flexible enough to provide
   a range of services at acceptable cost? If so, can the internet operation
   setpoint be varied, automatically or manually, to adapt to different
   regimes quickly and with acceptable thrashing?

12. What should the next-generation internet header look like? Should it have
   a variable-length format or fixed-length format? How should options,
   fragmentation and lifetime be structured? Should source routing or
   encapsulation be an intrinsic or derived feature of the architecture?

13. What advice can we give to other task forces on the impact of the
   next-generation internet in their areas of study? What research agenda,
   if any, should we propose to the various NSF, DARPA and other agencies?
   What advice can we give these agencies on the importance, level of effort
   and probability of success of the agenda to their current missions?

David L. Mills, Chairman INARC

-----------[000023][next][prev][last][first]----------------------------------------------------
Date:      Sun, 4-Oct-87 03:17:49 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   NSFNET connectivity

Folks,

The primary NSFNET gateway at the Pittsburgh Computer Center (ARPANET
address 10.4.0.14) is presently unreachable from a large number
of low-traffic sites. Apparently, this is due to a shortage of
virtual-circuit resources in the ACC 5250 X.25 interface and/or PSN
14. High-traffic sites on the ARPANET with NSFNET clients, such as
the core gateways and certain other gateways and hosts, capture
these resources, leaving low-traffic sites out in the cold. This
phenomenon should be familiar to gateway watchers with experience
in the EGP swamps.

With a bit of ingenuity, it is possible to work around this problem.
The trick for an ARPANET host/gateway is to source-route to some
other gateway likely to have already captured a virtual circuit to
PSC or to some other gateway that connects to the NSFNET Backbone.
This should be considered a method of last resort and then only until
the virtual-circuit problem is resolved at PSC. The obvious choice
for the source route is via one of the EGP corespeakers; however,
there may be other gateways that will work as well.

Again, note that the above suggestion should apply only if users
begin to riot or carry weapons into the machine room. If anybody
attributes the suggestion to me in public, I will disown it and
subject the poor soul to a torrent of foul-smelling gongrams and
writhing nastygrams.

Be advised the PSC has notified the Pittsburgh police to watch out
for convoys of irate gateway keepers. You may decide to march on
Santa Barbara (ACC) instead. All this in clean fun - those guys are
probably working through the night anyway.

Dave

-----------[000024][next][prev][last][first]----------------------------------------------------
Date:      Sun, 4-Oct-87 06:37:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance

Art,

indeed, delay is the ultimate arbiter - but your figures left out
the lower delay paths for smaller nets (local ones).  I think we'll
agree, though, that window-based schemes suffer when long delays
and high speeds are involved. The source must be tolerant of
very long relative delays before receiving acknowledgement, so
that pacing rather than buffer acks has to be the flow control
method, because buffer acks come at too long a delay to be an effective
flow control - the loop time is too long.

Vint

-----------[000025][next][prev][last][first]----------------------------------------------------
Date:      Sun, 4-Oct-87 09:43:07 EDT
From:      ron@TOPAZ.RUTGERS.EDU (Ron Natalie)
To:        comp.protocols.tcp-ip
Subject:   Re:  More on TCP performance

If you think that TCP is bad, you should try some NETEX bandwidth
studies on the Hyperchannel.  It's beyond me how they can make a
protocol custom to the hardware like that and do it so poorly.

Don't forget that it is frequently difficult to get more than 50%
of the available bandwidth (some pessimist's law).  My major problem
with the 10M Proteon (which I doubt was changed much in the 80M
version) is that there is no buffering on the card.  While servicing
the last packet you frequently miss having the card ready for the
SOM on the next packet, so that packet has to bounce around the ring
again.

Another problem is that 17 Mbits is about the DMA limit of most
of the cheap interfaces (that IS the speed of the UNIBUS).

-Ron

-----------[000026][next][prev][last][first]----------------------------------------------------
Date:      Sun, 4-Oct-87 13:10:59 EDT
From:      pogran@CCQ.BBN.COM (Ken Pogran)
To:        comp.protocols.tcp-ip
Subject:   Re:  NSFNET woe: causes and consequences

Dave,

The message you sent to the tcp-ip list the other day regarding
the NSFNET woes you observed caused us here at BBN to put on our
thinking caps.  We worked to understand how what you saw relates
to what we know about what's happening in the ARPANET these days.
I think we already understood what was behind a good bit of what
you observed, and your message gave us the impetus to investigate
a few more things as well.  This message describes the situation
as we understand it.

There are four separate underlying issues:

1.  The number of "reachable networks" in the Internet has just
    nudged upwards of 300 for the first time.  (The Internet used
    to be growing at a rate of about 10 networks/month; that rate
    has accelerated over the past few months.)

2.  For the week ending Thursday, 1 October, the ARPANET handled
    a record 202 million packets.  (Traffic over the past few
    months has been in the 180s -- itself a record over last
    spring.)

3.  We've begun the "beta test" on the ARPANET of the new PSN
    software release, PSN 7.0, and -- sure enough -- there have
    been a few problems.
    
And, finally,

4.  The limit, which you described in your message, of 64 virtual
    circuits in the ACC 5250 X.25 driver that is used by several
    X.25-connected gateways on the ARPANET.

The first two issues just demonstrate that things continue to get
busier and busier in the ARPANET and in the Internet.  We've put
out a new version of LSI-11 "core gateway" software that allows
for 400, rather than 300, reachable gateways to give the core
some breathing room again.  And I shudder to think what ARPANET
(and, hence, Internet) performance would be like if we tried to
handle over 200 million packets per week without the so-called
"Routing Patch" that was installed late in the summer that
considerably improved the performance of the ARPANET routing
algorithm.

I think the third issue, the beginning of the PSN 7.0 beta test
on the ARPANET, contributed to some of what you saw and helped to
obscure some of the other causes of what you observed.  As you
know, last weekend, we put PSN 7 into a portion of the ARPANET.
CMU was one of the nodes that got PSN 7.

PSN 7 contains a new "End-to-End" protocol for management of the
flow of data between source PSNs and destination PSNs.  It's the
first re-do of the End-to-End protocol in the ARPANET EVER.
We're expecting a lot of improvement in efficiency within the PSN
and, hence, some network performance improvement.  

To make a graceful, phased cutover to the New End-to-End
feasible, PSN 7.0 contains code for both the new and the old
End-to-End protocols.  So as we've introduced PSN 7.0, it's been
with the OLD end-to-end protocol.  Now unfortunately, having code
for two End-to-End protocols coresident takes up memory space
that would normally go to buffers, etc. for handling traffic.
So, yes -- during the 3-4 week phased cutover, the ARPANET PSN's
will be a little short on buffer space; there's not much that can
be done about that.  But once ALL nodes are cut over to the New
End-to-End protocol, we will install PSN 7.1, which will remove
the old End-to-End, reclaim that memory space, and -- in the case
of the ARPANET nodes in which C/300 processors have replaced the
C/30s -- be able to use DOUBLE the main memory.

Back to the problem at hand: You mentioned the report of
"resource shortage"s in the PSNs.  This happened with the CMU PSN
for reasons we still don't understand.  However, this WASN'T "the
usual BBN euphemism for ... connection blocks which manage
ARPANET virtual circuits" that you suggested in your message --
we've usually got plenty of those these days.  The resource
shortage the CMU PSN reported to the NOC had to do with the PSN's
X.25 interface.  Since several higher-priority problems showed up
with PSN 7, we decided the best thing to do was to return the CMU
node to PSN 6 and work on this one later.  We have some
preliminary ideas of what might have happened, and we'll be
investigating this week.

As for delays in the ARPANET: It turns out that the version of
PSN 7.0 that was deployed last weekend contained a bug in the
"Routing Patch" that worsened, instead of improved, the
performance of the routing algorithm.  We are frankly embarrassed
about that.  This problem was fixed Thursday night, 1 October --
about the time you sent your message.  We'd be very interested in
hearing from you how things looked from the NSFNet side THIS
weekend.

From your description it certainly sounds like the 64 VC limit in
the ACC 5250 is the proximate cause of the problem at CMU last
weekend.  We now count 83 gateways attached to the ARPANET.  A
gateway on the ARPANET that's handling a lot of diverse traffic
to other gateways as well as to other ARPANET hosts is very
likely to need more than 64 VCs.

We think we can provide a work-around for this problem over the
short term.  The PSN has an "idle timer" for each VC, and can
initiate a Close of the VC if it hasn't been used for a while.  We
can configure that timer to be pretty short and thus recycle the
gateway's VCs.  Of course, some overhead will be incurred to
re-establish a VC to send the next IP datagram to that
destination, but that's probably preferable to having things plug
up for lack of VCs.  Note that by having the PSN reclaim idle
VCs, we shouldn't see much of the "loss of data" that you alluded to in
your message.  We would be happy to work with administrators at
sites that have gateways with ACC 5250s who would like to try
this out.
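
Schematically, the sweep is just the following (illustrative only, not
PSN code): periodically clear any circuit that has carried no traffic
within the idle interval; a later datagram to that destination simply
reopens it.

    #include <stdio.h>
    #include <time.h>

    #define MAXVC      64
    #define IDLE_LIMIT 60              /* seconds; the real timer is settable */

    struct vc { int open; time_t last_traffic; };
    static struct vc vctab[MAXVC];

    void idle_sweep(void)
    {
        time_t now = time(NULL);
        int i, reclaimed = 0;

        for (i = 0; i < MAXVC; i++)
            if (vctab[i].open && now - vctab[i].last_traffic > IDLE_LIMIT) {
                vctab[i].open = 0;     /* clear the idle circuit */
                reclaimed++;
            }
        printf("reclaimed %d idle circuits\n", reclaimed);
    }

    int main(void)
    {
        idle_sweep();
        return 0;
    }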

In closing, let me say that we at BBN share your concerns about
the issues to be faced as the ARPANET evolves toward a
gateway-to-gateway service from its traditional host-to-host or
host-to-gateway service.  The way gateways are attached to the
network is one of a number of urgent architectural and
engineering issues that must be addressed.

Regards,
 Ken Pogran
 Manager, System Architecture
 BBN Communications Corporation


P.S. TO THE COMMUNITY:  As the PSN 7.0 upgrade proceeds in the
ARPANET, we'll probably encounter a few more problems.  As
described in the DDN Management Bulletin distributed earlier,
please send reports of problems to ARPAUPGRADE@BBN.COM.  BBN will
respond.

-----------[000027][next][prev][last][first]----------------------------------------------------
Date:      Sun, 4 Oct 87 13:10:59 EDT
From:      Ken Pogran <pogran@ccq.bbn.com>
To:        Mills@louie.udel.edu
Cc:        tcp-ip@sri-nic.arpa, pogran@ccq.bbn.com
Subject:   Re:  NSFNET woe: causes and consequences
Dave,

The message you sent to the tcp-ip list the other day regarding
the NSFNET woes you observed caused us here at BBN to put on our
thinking caps.  We worked to understand how what you saw relates
to what we know about what's happening in the ARPANET these days.
I think we already understood what was behind a good bit of what
you observed, and your message gave us the impetus to investigate
a few more things as well.  This message describes the situation
as we understand it.

There are four separate underlying issues:

1.  The number of "reachable networks" in the Internet has just
    nudged upwards of 300 for the first time.  (The Internet used
    to be growing at a rate of about 10 networks/month; that rate
    has accelerated over the past few months.)

2.  For the week ending Thursday, 1 October, the ARPANET handled
    a record 202 million packets.  (Traffic over the past few
    months has been in the 180s -- itself a record over last
    spring.)

3.  We've begun the "beta test" on the ARPANET of the new PSN
    software release, PSN 7.0, and -- sure enough -- there have
    been a few problems.
    
And, finally,

4.  The limit you described in your message of 64 virtual
    circuits in the ACC 5250 X.25 driver that is used by several
    X.25-connected gateways on the ARPANET.

The first two issues just demonstrate that things continue to get
busier and busier in the ARPANET and in the Internet.  We've put
out a new version of LSI-11 "core gateway" software that allows
for 400, rather than 300, reachable networks to give the core
some breathing room again.  And I shudder to think what ARPANET
(and, hence, Internet) performance would be like if we tried to
handle over 200 million packets per week without the so-called
"Routing Patch" that was installed late in the summer that
considerably improved the performance of the ARPANET routing
algorithm.

I think the third issue, the beginning of the PSN 7.0 beta test
on the ARPANET, contributed to some of what you saw and helped to
obscure some of the other causes of what you observed.  As you
know, last weekend, we put PSN 7 into a portion of the ARPANET.
CMU was one of the nodes that got PSN 7.

PSN 7 contains a new "End-to-End" protocol for management of the
flow of data between source PSNs and destination PSNs.  It's the
first re-do of the End-to-End protocol in the ARPANET EVER.
We're expecting a lot of improvement in efficiency within the PSN
and, hence, some network performance improvement.  

To make a graceful, phased cutover to the New End-to-End
feasible, PSN 7.0 contains code for both the new and the old
End-to-End protocols.  So as we've introduced PSN 7.0, it's been
with the OLD end-to-end protocol.  Now unfortunately, having code
for two End-to-End protocols coresident takes up memory space
that would normally go to buffers, etc. for handling traffic.
So, yes -- during the 3-4 week phased cutover, the ARPANET PSNs
will be a little short on buffer space; there's not much that can
be done about that.  But once ALL nodes are cut over to the New
End-to-End protocol, we will install PSN 7.1, which will remove
the old End-to-End, reclaim that memory space, and -- in the case
of the ARPANET nodes in which C/300 processors have replaced the
C/30s -- be able to use DOUBLE the main memory.

Back to the problem at hand: You mentioned the report of
"resource shortage"s in the PSNs.  This happened with the CMU PSN
for reasons we still don't understand.  However, this WASN'T "the
usual BBN euphemism for ... connection blocks which manage
ARPANET virtual circuits" that you suggested in your message --
we've usually got plenty of those these days.  The resource
shortage the CMU PSN reported to the NOC had to do with the PSN's
X.25 interface.  Since several higher-priority problems showed up
with PSN 7, we decided the best thing to do was to return the CMU
node to PSN 6 and work on this one later.  We have some
preliminary ideas of what might have happened, and we'll be
investigating this week.

As for delays in the ARPANET: It turns out that the version of
PSN 7.0 that was deployed last weekend contained a bug in the
"Routing Patch" that worsened, instead of improved, the
performance of the routing algorithm.  We are frankly embarrassed
about that.  This problem was fixed Thursday night, 1 October --
about the time you sent your message.  We'd be very interested in
hearing from you how things looked from the NSFNet side THIS
weekend.

From your description it certainly sounds like the 64 VC limit in
the ACC 5250 is the proximate cause of the problem at CMU last
weekend.  We now count 83 gateways attached to the ARPANET.  A
gateway on the ARPANET that's handling a lot of diverse traffic
to other gateways as well as to other ARPANET hosts is very
likely to need more than 64 VCs.

We think we can provide a work-around for this problem over the
short term.  The PSN has an "idle timer" for each VC, and can
initiate a Close of the VC if it hasn't been used for a while.  We
can configure that timer to be pretty short and thus recycle the
gateway's VCs.  Of course, some overhead will be incurred to
re-establish a VC to send the next IP datagram to that
destination, but that's probably preferable to having things plug
up for lack of VCs.  Note that by having the PSN reclaim idle
VCs, we shouldn't see much of the "loss of data" you alluded to in
your message.  We would be happy to work with administrators at
sites that have gateways with ACC 5250s who would like to try
this out.

In closing, let me say that we at BBN share your concerns about
the issues to be faced as the ARPANET evolves toward a
gateway-to-gateway service from its traditional host-to-host or
host-to-gateway service.  The way gateways are attached to the
network is one of a number of urgent architectural and
engineering issues that must be addressed.

Regards,
 Ken Pogran
 Manager, System Architecture
 BBN Communications Corporation


P.S. TO THE COMMUNITY:  As the PSN 7.0 upgrade proceeds in the
ARPANET, we'll probably encounter a few more problems.  As
described in the DDN Management Bulletin distributed earlier,
please send reports of problems to ARPAUPGRADE@BBN.COM.  BBN will
respond.
-----------[000028][next][prev][last][first]----------------------------------------------------
Date:      Sun, 4-Oct-87 14:48:39 EDT
From:      alan@mn-at1.UUCP (Alan Klietz)
To:        comp.protocols.tcp-ip
Subject:   Re: RCTE

In article <244@mitisft.Convergent.COM> andrew@mitisft.Convergent.COM (Andrew Knutsen) writes:
<
<	It seems to me there are is a middle ground in here, between
<char-at-a-time and line- (or screen-) at-a-time, that can be implemented
<purely on the server side using the normal telnet protocol (ECHO negotiation).
<We are considering implementing this for support of low speed TCP links
<(eg async modems), and I'm curious if I'm going to run into some "common
<knowledge problem"...
<
<        The basic idea is to have the kernel (virtual terminal driver) inform
<the telnet daemon when it *would be* doing immediate character echo, and
<not do it.  The daemon turns this information into echo negotiation, which
<the client (hopefully) heeds.  This results in speeded echo response in
<(for example) un*x "cooked" mode, plus a reduction in packet traffic.
<
<	Has anyone tried this? 

We modified the Cray-2 UNICOS kernel to signal a modified (kludged)
version of telnetd on a change in tty state.   For example, when
the user types "vi" on the Cray-2, an ioctl(TIOCSETA) call is sent
to the pty driver.  The information in the call is stored in a dummy
kernel tty structure and the telnetd process is signaled.  The telnetd
process wakes up and interrogates the terminal state by issuing an
ioctl(TIOCGETA) on the pseudo-tty.  It picks up the info and says
"humm, this user wants raw mode".  It then re-negotiates the ECHO
option with the client to switch to single character mode.

One problem with this approach is that the change of state is
asynchronous to the I/O.
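For readers who want a feel for the telnetd side of this, here is a
rough sketch; it assumes a termios-style pty interface (TIOCGETA
returning a struct termios, as on BSD-derived systems) and uses SIGUSR1
as a stand-in for whatever notification the modified kernel actually
delivers.  It is illustrative only, not the UNICOS code:

#include <signal.h>
#include <termios.h>
#include <sys/ioctl.h>

static volatile sig_atomic_t tty_changed;  /* set when the kernel pokes us  */
static int pty_fd;                         /* master side of the pseudo-tty */

static void on_tty_change(int sig) { (void)sig; tty_changed = 1; }

static void setup(void)
{
    signal(SIGUSR1, on_tty_change);   /* stand-in for the real notification */
}

/* Called from telnetd's main loop whenever tty_changed is set. */
static void check_tty_mode(void)
{
    struct termios tio;

    tty_changed = 0;
    if (ioctl(pty_fd, TIOCGETA, &tio) < 0)   /* interrogate the tty state */
        return;

    if ((tio.c_lflag & ICANON) == 0) {
        /* raw mode (e.g. vi): negotiate WILL ECHO, char-at-a-time */
    } else {
        /* cooked mode: negotiate WONT ECHO so the client echoes locally */
    }
}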

--
Alan Klietz
Minnesota Supercomputer Center (*)
1200 Washington Avenue South
Minneapolis, MN  55415    UUCP:  ..rutgers!meccts!mn-at1!alan
Ph: +1 612 626 1836              ..ihnp4!dicome!mn-at1!alan (beware ihnp4)
                          ARPA:  alan@uc.msc.umn.edu  (was umn-rei-uc.arpa)

(*) An affiliate of the University of Minnesota

-----------[000029][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 02:40:57 EDT
From:      gnu@hoptoad.uucp (John Gilmore)
To:        comp.protocols.tcp-ip
Subject:   100Mbit networking stuff -- Chesson's P-engine

I'm surprised that nobody has mentioned Greg Chesson's work on a protocol
engine that can process 100 Mbit FDDI packets in real time, implementing
reliable connections and datagrams, and internetwork routing, in hardware.
Greg works for Silicon Graphics but the protocols and hardware involved
are non-proprietary and I believe there is a multivendor group working
on implementing his design.

See his paper, "Protocol Engine Design" in the Usenix conference
proceedings from summer 1987.
-- 
{dasys1,ncoast,well,sun,ihnp4}!hoptoad!gnu			  gnu@toad.com

-----------[000030][next][prev][last][first]----------------------------------------------------
Date:      Mon, 05 Oct 87 08:59:42 -0400
From:      Mike Brescia <brescia@park-street>
To:        Craig Partridge <craig@NNSC.NSF.NET>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TCP and Loss (inherently lossy nets)
Can anyone from Packet Radio or SURAN projects speak to the retransmission
attempts done on single hops to improve reliability (and increase delay
average and variance)?  Is hop-by-hop retransmission better than end-to-end
retransmission or not?  Why?  Should TCP rely on hop-by-hop reliability and
never retransmit?  (Before you answer that, recall that the vast majority of
lost packets are dropped in gateways because of congestion.)

Mike

     <msg from craig@nnsc.nsf.net to narten@purdue.edu>

     I'm interested in situations in which loss is inherent -- that
     is, you will get loss, no matter what your TCP does.  Packet radios
     suffering from noise or jamming are good examples.  Loss caused by
     congestion is a different problem.
-----------[000031][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 09:15:40 EDT
From:      brescia@park-street (Mike Brescia)
To:        comp.protocols.tcp-ip
Subject:   Re: TCP and Loss (inherently lossy nets)

Can anyone from Packet Radio or SURAN projects speak to the retransmission
attempts done on single hops to improve reliability (and increase delay
average and variance)?  Is hop-by-hop retransmission better than end-to-end
retransmission or not?  Why?  Should TCP rely on hop-by-hop reliability and
never retransmit?  (Before you answer that, recall that the vast majority of
lost packets are dropped in gateways because of congestion.)

Mike

     <msg from craig@nnsc.nsf.net to narten@purdue.edu>

     I'm interested in situations in which loss is inherent -- that
     is, you will get loss, no matter what your TCP does.  Packet radios
     suffering from noise or jamming are good examples.  Loss caused by
     congestion is a different problem.

-----------[000032][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 09:20:00 EDT
From:      DCP@QUABBIN.SCRC.SYMBOLICS.COM (David C. Plummer)
To:        comp.protocols.tcp-ip
Subject:   Re: RCTE


    Date: 3 Oct 1987 06:15-EDT
    From: CERF@A.ISI.EDU

    The ISO community, to the extent it works in the public networking
    domain, is equally interested in avoiding costly char at a time modes,
    in my opinion.

Opening the door to a more unhindered future question: Assuming
"costly" includes money, when will public networking come up with a
deterministic usage fee so that researchers can budget their
communications costs instead of fretting?  I imagine most researchers
want to spend money on research and correspond with colleagues, knowing
from the outset how much each will cost; having to worry about variable
communications charges that they possibly don't understand or care to
understand is probably an undesired and recurring distraction.

-----------[000033][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 11:24:26 EDT
From:      MAB@CORNELLC.BITNET (Mark Bodenstein)
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations

>  ...           We recently unrolled the TCP checksum loop, and a ~35%
>speed improvement there produced a ~15% overall throughput increase on
>memory-to-memory TCP.
>  ...
>James B. VanBokkelen
>FTP Software Inc.
>

Could you provide more detail on how you unrolled this loop?

(The complication being that the length of the loop is determined by
the length of the data.  Some alternatives I can think of would be:

1. to keep checking to see if you're done

2. to unroll the loop for each possible data length, and choose and
   execute the appropriate unrolled loop

3. to pad the data with zeros to the maximum length to be processed

(One could also try various combinations of these three.)

1. seems inefficient, and thus defeats the purpose of unrolling the loop.
2. seems efficient, but perhaps excessive.
3. seems to penalize short segments (e.g. ACKs), of which there are
   many.

Or am I missing something?)

Thanks.

Mark Bodenstein      (mab@cornellc.bitnet@wiscvm.wisc.edu)
Cornell University

-----------[000034][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5 Oct 1987 11:24:26 EDT
From:      Mark Bodenstein <MAB%CORNELLC.BITNET@wiscvm.wisc.edu>
To:        "James B. VanBokkelen" <JBVB%ai.ai.mit.edu@crnlvax2.BITNET>
Cc:        tcp-ip@sri-nic.arpa
Subject:   Re: TCP performance limitations
>  ...           We recently unrolled the TCP checksum loop, and a ~35%
>speed improvement there produced a ~15% overall throughput increase on
>memory-to-memory TCP.
>  ...
>James B. VanBokkelen
>FTP Software Inc.
>

Could you provide more detail on how you unrolled this loop?

(The complication being that the length of the loop is determined by
the length of the data.  Some alternatives I can think of would be:

1. to keep checking to see if you're done

2. to unroll the loop for each possible data length, and choose and
   execute the appropriate unrolled loop

3. to pad the data with zeros to the maximum length to be processed

(One could also try various combinations of these three.)

1. seems inefficient, and thus defeats the purpose of unrolling the loop.
2. seems efficient, but perhaps excessive.
3. seems to penalize short segments (e.g. ACKs), of which there are
   many.

Or am I missing something?)

Thanks.

Mark Bodenstein      (mab@cornellc.bitnet@wiscvm.wisc.edu)
Cornell University
-----------[000035][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 11:28:08 EDT
From:      jas@MONK.PROTEON.COM (John A. Shriver)
To:        comp.protocols.tcp-ip
Subject:   More on TCP performance

Are there any published studies on NETEX performance?  It would be
good if the world could learn from the frustrations of the past.

One of the interesting limits that has not been mentioned in the
discussion is the programming interface.  While I've never written a
"sink" protocol family for 4.xBSD, I did write a "sink" driver
(if_bb.c, for "Bit Bucket").  I found that even UDP can't feed the
driver all that fast on a Sun.  The same probably applies to the
socket() interface itself, which could be tested with the "sink"
protocol family.
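
For anyone who hasn't seen one, such a driver is nearly trivial: its
output routine just counts the packet and throws it away, so everything
above the driver can be timed with no real hardware in the way.  A rough
sketch along 4.3BSD lines (names are illustrative, not Proteon's actual
if_bb.c):

#include <sys/param.h>
#include <sys/mbuf.h>
#include <sys/socket.h>
#include <net/if.h>

/* Output routine for a "bit bucket" pseudo-interface. */
int
bb_output(ifp, m, dst)
	struct ifnet *ifp;
	struct mbuf *m;
	struct sockaddr *dst;
{
	ifp->if_opackets++;	/* count it for netstat */
	m_freem(m);		/* ... and discard it   */
	return (0);
}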

Proteon did address the obvious problems of packet buffers in
ProNET-80.  All of the boards have at least 3 on the receive side.
Some have 16KB of memory that holds as many packets as fit.  The
VMEbus allows you to implement DMA at speeds over 100 megabits/second,
which allows the ProNET-80 VMEbus card to move data from ring to bus
RAM at 80 mbps.

-----------[000036][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 11:31:44 EDT
From:      PADLIPSKY@A.ISI.EDU (Michael Padlipsky)
To:        comp.protocols.tcp-ip
Subject:   Re: SUPDUP protocol

I seem to be missing something here, perhaps because I've yet to get
my hands on an X-Windows spec.  Superficially, if I'm talking to a
command language interpreter--or, worse, an editor--that "expects"
character-at-a-time interaction from/with me, doing so in a windowed
environment ought to lead to more transmissions rather than fewer,
since I could be char-a-a-t in several windows rather than just the
one I'm used to.  Don't want to sound like I'm still living in the
days when it was a survival trait to know how to make 026 drum cards
(though I must confess I do miss keypunches: unpunched cards were very
handy for keeping in breast pockets to make notes on), but unless the
window-oriented things contain some mechanisms for distinguishing
between what stays at the workstation and what goes to the Server
(or counterpart, or peer, or whatever it's fashionable to call the
other side these days) all we've got is jazzier interfaces to the
same old problem.  Would somebody please clarify?
   puzzled cheers, map

P.S.  Similar considerations apply to the subsequent msg about X.25
and TOPS-20: sure seemed as if case 2) (EMACS) was still doing
precisely what we're trying to avoid....  Which in a roundabout way
reminds me: can anybody speak to the rumor I recall hearing years ago
that RCTE wasn't actually a buggy protocol, it was just the
TIP's implementation that was at fault?  (Seem to recall picking
that one up from somebody who had had something to do with the
Multics implementation of RCTE, after I'd left Project MAC, as
it was then known.)
-------

-----------[000037][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 12:31:00 EDT
From:      DCP@QUABBIN.SCRC.SYMBOLICS.COM (David C. Plummer)
To:        comp.protocols.tcp-ip
Subject:   Re: SUPDUP protocol

Indeed, various people's vision of the future, which include high
degrees of real time interaction with keyboards and pointing devices,
would suggest that RCTE is trying to solve a shrinking problem.  There
still exist time-shared systems and a lot of personal computers not yet
powerful enough to be weaned from having to login-style connect to those
time-shared systems, and that's why I see RCTE as solving an existing
problem.  Eventually, RCTE should become part of the "good old days" and
exist only in stories to grandchildren about what it was like "back
then."  Perhaps SUPDUP was (and still is?) ahead of its time by assuming
that interaction is important and communication is cheap.

-----------[000038][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 12:42:02 EDT
From:      barmar@think.COM (Barry Margolin)
To:        comp.protocols.tcp-ip
Subject:   Re: SUPDUP protocol

This discussion of SUPDUP stemmed from a discussion of RCTE.  Several
people pointed out that SUPDUP doesn't actually solve the problem that
RCTE is intended to solve.  However, RMS long ago proposed a
SUPDUP-based solution, called the Local Editing Protocol (LEP), which
goes much further than RCTE.  LEP allows a host program to tell the
terminal emulator about many simple key bindings.  These include
self-insert, relative cursor motion, motion by words, and simple
deletion commands.  A large number of the operations of a video text
editor end up being performed in the workstation, and when the user
types a command it can't perform locally, all the buffered-up
operations are transmitted to the host, so that it can update its
buffer.

RMS also proposed a related protocol, called the Line Saving Protocol,
which allows the host to send lines outside the physical screen, or
the workstation can remember lines that are scrolled off and the host
can ask it to recall them.

I believe RMS actually implemented support for LEP in ITS EMACS, but I
don't think he ever wrote a client SUPDUP that implemented it.  Most
of the SUPDUP support in the world at the time was connected to MIT's
Chaosnet, and network speed was never a problem within that
environment (when supduping from a Lisp Machine to ITS they didn't
bother turning on insert/delete-line operations, because it slowed
down EMACS's redisplay computation, and redrawing the screen over the
net was faster!).

---
Barry Margolin
Thinking Machines Corp.

barmar@think.com
seismo!think!barmar

-----------[000039][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 13:06:52 EDT
From:      dolson@ADA20.ISI.EDU (Douglas M. Olson)
To:        comp.protocols.tcp-ip
Subject:   Re: TCP/IP and DECnet

> From: melohn@Sun.COM (Bill Melohn)
>                   ... MOP, the Maintenance Operations Protocol, yet
> another Digital propriatary protocol that is NOT included under the
> publically available DECnet suite. 

"MOP is a subset of DDCMP..."
(p 4-14 of the VAX/VMS Networking Manual)
"DDCMP was designed in 1974 specifically for the Digital Network 
Architecture."
(p 28 of pamphlet "Digital's Networks: An Architecture with a Future")

So we are in agreement that LAT-speaking DECservers speak MOP, but
we disagree on whether MOP is part of DNA or not.  Fine.

> From: melohn@Sun.COM (Bill Melohn)
> The point I was making is that DECnet is a publically specified
> interface for talking to DEC machines; however it does NOT include
> terminal servers, VAX clusters, and many other protocols which Digital
> considers propritary. I'm tired of explaining to customers why we
> can't support DEC terminal servers, "since they run DECnet" (which we
> can and do support under SunOS).

Point taken.

Doug
-------

-----------[000040][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 14:25:53 EDT
From:      thomson@uthub.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: closing half-open connections

In article <247@mitisft.Convergent.COM> andrew@mitisft.Convergent.COM (Andrew Knutsen) writes:

 >	While I dont have any suggestions for closing existing half-open
 >connections (although I think someone posted something awhile back), I
 >do have a scenario which I have seen cause this, which can be traced to
 >an ambiguity in the RFC...

...
 >4) Client closes connection.
 >	At this point, client has data buffered, and needs a window update.
 >	FIN hasnt been sent since data is pending.
 >
 >5) Client is now in LAST_ACK.  However, he ignores window updates, looking
 >	only for ACK of FIN he hasnt sent! The connection is effectively
 >	idle.
 >
 >	Now, the RFC says all data should be sent after a close (pgs 49 & 61),
 >and that when a segment arrives in LAST_ACK state only the ACK of FIN should
 >be checked for (pg 73).

The problem is really with the implementation, not the RFC.
A TCP is not supposed to enter LAST_ACK until it has sent the FIN.
From pg. 61, it should remain in CLOSE_WAIT state "... until all preceding SENDs
have been segmentized; then send a FIN segment, enter [ LAST_ACK ] state".
The actual document said "enter CLOSING state", obviously a typo.

Having said all that, it may well be that the easiest way to handle this
is to accept window updates while in LAST_ACK.
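
If it helps to see the rule in code form, here is a rough, generic
sketch of the close path; the structure and helper names are made up
for illustration and are not from any particular TCP:

#include <stddef.h>

enum tcp_state { CLOSE_WAIT, LAST_ACK /* other states elided */ };

struct tcb {
    enum tcp_state state;
    int    user_closed;     /* CLOSE issued by the user; FIN still owed */
    size_t send_queue_len;  /* bytes of user data not yet segmentized   */
};

static void send_fin(struct tcb *tp) { (void)tp; /* emit a FIN segment */ }

/* Called whenever we try to emit segments.  The point above: stay in
 * CLOSE_WAIT until every preceding SEND has been segmentized, THEN send
 * the FIN and enter LAST_ACK. */
static void tcp_output(struct tcb *tp)
{
    /* ... transmit whatever data the send window allows, decrementing
       send_queue_len as data is segmentized ... */
    if (tp->user_closed && tp->send_queue_len == 0 && tp->state == CLOSE_WAIT) {
        send_fin(tp);
        tp->state = LAST_ACK;  /* only now do we wait solely for the ACK of our FIN */
    }
}

/* User-level CLOSE while in CLOSE_WAIT: just note that a FIN is owed. */
static void tcp_usr_close(struct tcb *tp)
{
    tp->user_closed = 1;
    tcp_output(tp);
}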
-- 
		    Brian Thomson,	    CSRI Univ. of Toronto
		    utcsri!uthub!thomson, thomson@hub.toronto.edu

-----------[000041][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 15:40:34 EDT
From:      glauer@SURAN.BBN.COM (Gregory Lauer)
To:        comp.protocols.tcp-ip
Subject:   Re: TCP and Loss (inherently lossy nets)

Hop-by-hop retransmission is needed in networks with high loss and long routes
if we are to have a reasonable chance of getting anything through the network.
If the link loss probability is p, then the probability of getting a packet
through N hops without hop-by-hop retransmissions is (1-p)**N.  In a lossy
network (p = .50), with a path length of N=10 hops, the probability of getting
a packet through without hop-by-hop retransmissions is thus ~.001.  On the
other hand, end-to-end retransmissions are needed since a node can crash after
having acked a packet and before having forwarded it.

How many packets get sent in each case?

In the following we assume that p**N is approximately 0 and that 1/p<<N (it
lets us sum from 0 to infinity instead of from 0 to N).

Without hop-by-hop retransmission it takes on average 1/(1-p)**N end-to-end
retransmissions before the packet will reach the destination.  Each unsuccessful
end-to-end transmission (obviously) doesn't reach the destination, but goes (on
average) 1/p hops, thus the total traffic generated is 1/[p(1-p)**N] packets.
For p=.5 and N=10, this implies ~2000 packets are generated to get the packet
to the destination.  Of course, an end-to-end ack is also needed, which will
generate another 2000 packets (and, unless the retransmission timers are set
"appropriately", the source will continue to transmit while it is waiting for
the ack to get through).

With hop-by-hop retransmission it takes, on average, 1/(1-p) transmissions to
get a packet over a link.  If the hop-by-hop retransmission timers are set
appropriately (so that a packet is retransmitted only if it didn't get across
the link), then a packet is transmitted 1/(1-p)**2 times before the
acknowledgement is received.  Thus the number of transmissions required to get
the packet to the destination is N/(1-p)**2, which for p=.5 and N=10, is 40
packets.  Another 40 packets are required for the end-to-end ack (during which
time the source may continue to send if the end-to-end retransmission timer is
set inappropriately).  Note that in addition to transmissions of the original
packet, there are 1/(1-p) hop-by-hop acks transmitted on each link, adding
N/(1-p) packets to the overhead of getting a packet to a destination.  (For
p=.5, and N=10, this yields another 20 packets each way.) 

Thus ignoring "spurious" end-to-end retransmissions, we have the following
comparison:

with hop-by-hop acks:
   40 transmissions of the original packet
   20 hop-by-hop acks of the original packet
   40 transmissions of the end-to-end ack
   20 hop-by-hop acks of the end-to-end ack
  ---
  120 packets

without hop-by-hop acks:
   2000 transmissions of the original packet
   2000 transmissions of the end-to-end ack
   ----
   4000 packets

Finally, a note about the variance of the round trip time.  Ignoring the impact
of missing hop-by-hop acks, the variance in the number of transmissions
required to get a packet across a link is (I think) q=2[1/(1-p)+{p/(1-p)}**2],
and the variance in the time required to get it to the destination is Nq.  For
our example, this implies a standard deviation in the number of transmission
required to get a packet to the destination of ~24.5 transmissions.  For the
case with no hop-by-hop acks, the number of end-to-end transmissions required
has a variance given by the formula for q above with p replaced by
1-1/(1-p)**N.  For our example this yields a standard deviation of ~1447
transmissions.  Thus in this example, hop-by-hop acks not only reduce the
number of transmissions required but also reduce the variance in the number of
transmissions required (making it easier to have a good estimate of the
round-trip time, and thus reducing the number of unnecessary retransmissions).

For networks with lower losses, the overhead of the hop-by-hop acks will
outweigh the reduction in the number of transmissions required.  We can get an upper
bound on when hop-by-hop acks help as follows.  An upper bound on the number of
transmissions required without hop-by-hop acks is N/(1-p)**N (i.e. each failed
transmission takes the maximum number of hops plus 1).  A lower bound on the
number required with hop-by-hop acks is 2N/(1-p) (i.e. never retransmit a
packet once it has been received...even if you don't know it's been received).
Equating these yields the fact that hop-by-hop acks don't help if
p<1-(.5)**[1/(N-1)], which for N=10 corresponds to p<.074.
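
The arithmetic above is easy to reproduce; here is a small throwaway C
program (a sketch added for illustration, not part of the analysis
itself) that evaluates the expressions for a given p and N:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double p = 0.5;    /* per-link loss probability */
    int    N = 10;     /* hops between source and destination */

    /* Without hop-by-hop acks: 1/[p(1-p)**N] packets per delivered
       datagram, counted once for the data and once for the e2e ack. */
    double e2e_only = 1.0 / (p * pow(1.0 - p, N));

    /* With hop-by-hop acks: N/(1-p)**2 transmissions of the packet plus
       N/(1-p) link-level acks, again doubled for the end-to-end ack. */
    double hbh_data = N / pow(1.0 - p, 2);
    double hbh_acks = N / (1.0 - p);

    /* Loss rate below which hop-by-hop acks stop paying off (upper bound). */
    double crossover = 1.0 - pow(0.5, 1.0 / (N - 1));

    printf("without hop-by-hop acks: ~%.0f packets total\n", 2.0 * e2e_only);
    printf("with    hop-by-hop acks: ~%.0f packets total\n",
           2.0 * (hbh_data + hbh_acks));
    printf("hop-by-hop acks stop helping below p = %.3f\n", crossover);
    return 0;
}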

This type of analysis is perhaps appropriate for a packet radio network, where
there may be a significant probability that a radio just doesn't receive the
packet and one can (maybe) argue that the probability of loss is independent
from node to node and from transmission to transmission.  This type of analysis
is less useful in networks where packet loss is due to congestion (and
corresponding buffer shortages), where acks have a higher priority, where there
is likely to be a significant correlation between the probability that nodes are
congested and where packet (re)transmission is due mainly to timers going off
before a node has a chance to ack a packet that it correctly received.


Greg

-----------[000042][next][prev][last][first]----------------------------------------------------
Date:      05 Oct 87 15:40:34 EDT (Mon)
From:      Gregory Lauer <glauer@SURAN.BBN.COM>
To:        tcp-ip@sri-nic.ARPA, craig@SURAN.BBN.COM, brescia@SURAN.BBN.COM
Cc:        glauer@SURAN.BBN.COM
Subject:   Re: TCP and Loss (inherently lossy nets)
Hop-by-hop retransmission is needed in networks with high loss and long routes
if we are to have a reasonable chance of getting anything through the network.
If the link loss probability is p, then the probability of getting a packet
through N hops without hop-by-hop retransmissions is (1-p)**N.  In a lossy
network (p = .50), with a path length of N=10 hops, the probability of getting
a packet through without hop-by-hop retransmissions is thus ~.001.  On the
other hand, end-to-end retransmissions are needed since a node can crash after
having acked a packet and before having forwarded it.

How many packets get sent in each case?

In the following we assume that p**N is approximately 0 and that 1/p<<N (it
lets us sum from 0 to infinity instead of from 0 to N).

Without hop-by-hop retransmission it takes on average 1/(1-p)**N end-to-end
retransmissions before the packet will reach the destination.  Each unsuccessful
end-to-end transmission (obviously) doesn't reach the destination, but goes (on
average) 1/p hops, thus the total traffic generated is 1/[p(1-p)**N] packets.
For p=.5 and N=10, this implies ~2000 packets are generated to get the packet
to the destination.  Of course, an end-to-end ack is also needed, which will
generate another 2000 packets (and, unless the retransmission timers are set
"appropriately", the source will continue to transmit while it is waiting for
the ack to get through).

With hop-by-hop retransmission it takes, on average, 1/(1-p) transmissions to
get a packet over a link.  If the hop-by-hop retransmission timers are set
appropriately (so that a packet is retransmitted only if it didn't get across
the link), then a packet is transmitted 1/(1-p)**2 times before the
acknowledgement is received.  Thus the number of transmissions required to get
the packet to the destination is N/(1-p)**2, which for p=.5 and N=10, is 40
packets.  Another 40 packets are required for the end-to-end ack (during which
time the source may continue to send if the end-to-end retransmission timer is
set inappropriately).  Note that in addition to transmissions of the original
packet, there are 1/(1-p) hop-by-hop acks transmitted on each link, adding
N/(1-p) packets to the overhead of getting a packet to a destination.  (For
p=.5, and N=10, this yields another 20 packets each way.) 

Thus ignoring "spurious" end-to-end retransmissions, we have the following
comparison:

with hop-by-hop acks:
   40 transmissions of the original packet
   20 hop-by-hop acks of the original packet
   40 transmissions of the end-to-end ack
   20 hop-by-hop acks of the end-to-end ack
  ---
  120 packets

without hop-by-hop acks:
   2000 transmissions of the original packet
   2000 transmissions of the end-to-end ack
   ----
   4000 packets

Finally, a note about the variance of the round trip time.  Ignoring the impact
of missing hop-by-hop acks, the variance in the number of transmissions
required to get a packet across a link is (I think) q=2[1/(1-p)+{p/(1-p)}**2],
and the variance in the time required to get it to the destination is Nq.  For
our example, this implies a standard deviation in the number of transmission
required to get a packet to the destination of ~24.5 transmissions.  For the
case with no hop-by-hop acks, the number of end-to-end transmissions required
has a variance given by the formula for q above with p replaced by
1-1/(1-p)**N.  For our example this yields a standard deviation of ~1447
transmissions.  Thus in this example, hop-by-hop acks not only reduce the
number of transmissions required but also reduce the variance in the number of
transmissions required (making it easier to have a good estimate of the
round-trip time, and thus reducing the number of unnecessary retransmissions).

For networks with lower losses, the overhead of the hop-by-hop acks will
outweigh the reduction in the number of transmissions required.  We can get an upper
bound on when hop-by-hop acks help as follows.  An upper bound on the number of
transmissions required without hop-by-hop acks is N/(1-p)**N (i.e. each failed
transmission takes the maximum number of hops plus 1).  A lower bound on the
number required with hop-by-hop acks is 2N/(1-p) (i.e. never retransmit a
packet once it has been received...even if you don't know it's been received).
Equating these yields the fact that hop-by-hop acks don't help if
p<1-(.5)**[1/(N-1)], which for N=10 corresponds to p<.074.

This type of analysis is perhaps appropriate for a packet radio network, where
there may be a significant probability that a radio just doesn't receive the
packet and one can (maybe) argue that the probability of loss is independent
from node to node and from transmission to transmission.  This type of analysis
is less useful in networks where packet loss is due to congestion (and
corresponding buffer shortages), where acks have a higher priority, where there
is likely to be a significant correlation between the probability that nodes are
congested and where packet (re)transmission is due mainly to timers going off
before a node has a chance to ack a packet that it correctly received.


Greg
-----------[000043][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 16:28:00 EDT
From:      CLYNN@G.BBN.COM
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations

Mark,
	One could, given suitable hardware, etc., compute the checksum more
than 16 bits at a time - e.g., use 32-bit arithmetic (or some other higher
number).  There are at least two ways - adding 16 bits into a wider accumulator
which reduces/eliminates checking for carry/overflow, or adding 32 bits at a
time into the accumulator, reducing the number of memory fetches/adds/carry
checks by a factor of 2 (more if you have 64 bit arithmetic, etc.).
	One can also compute the checksum as the data is being placed
into the packet, instead of computing it after the packet has been "built".
	On retransmissions/repacketizations, one can "update" the checksum
(subtract out the old (header) values and add in the new) instead of
recomputing the sum of the data each time (the checksum is weak enough that
byte-pair position, order of summation, etc. doesn't matter).
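
To make the first and third suggestions concrete, here is a generic
sketch of the one's-complement checksum with a 32-bit accumulator, plus
the corresponding incremental update; this is illustrative C only (not
anyone's production code), and it ignores byte order and odd-length data:

#include <stdint.h>
#include <stddef.h>

/* Sum 16-bit words into a 32-bit accumulator; carries pile up in the
 * high half and are folded back in at the end ("end-around carry"),
 * so there is no per-addition carry check. */
uint16_t in_cksum(const uint16_t *words, size_t nwords)
{
    uint32_t sum = 0;

    while (nwords--)
        sum += *words++;

    sum = (sum >> 16) + (sum & 0xffff);   /* fold carries            */
    sum += (sum >> 16);                   /* fold carry from the fold */
    return (uint16_t)~sum;
}

/* "Update" a checksum when a 16-bit field changes from oldval to newval,
 * instead of resumming the whole packet. */
uint16_t in_cksum_update(uint16_t cksum, uint16_t oldval, uint16_t newval)
{
    uint32_t sum = (uint16_t)~cksum + (uint16_t)~oldval + newval;

    sum = (sum >> 16) + (sum & 0xffff);
    sum += (sum >> 16);
    return (uint16_t)~sum;
}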

-----------[000044][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 16:55:31 EDT
From:      lister@ncsa.uiuc.EDU (Tim K telnet use)
To:        comp.protocols.tcp-ip
Subject:   NCSA Telnet version 2.0 available



New software for free . . .

The National Center for Supercomputing Applications has
good news and more good news for Macintosh and PC TCP/IP users.

	New version 2.0 of NCSA Telnet for the PC
	New version 2.0 of NCSA Telnet for the Macintosh
	New hardware support in testing
	Mailing list for telnet related questions/bugs (telnet@ncsa.uiuc.edu)
	User/Developer forum at TCP/IP conference in December
	SOURCE CODE coming soon!

On November 1, NCSA will release full source code to version 2.1 of NCSA
Telnet for the PC and Macintosh.  Anyone may use, modify and redistribute
this code subject to the licensing (not-for-profit) statement included
with the source.  We will redistribute contributed source code on an
ongoing basis.

Version 2.0 of NCSA Telnet is available now -- availability information is
appended to this note.  We have many new features, including Tektronix 4014 
emulation.  The previous bug on PCs and XTs is gone.  The pre-printed manual 
has been completely re-written and is worth the $20.

We are sponsoring a mailing list for people interested in keeping up with the
NCSA Telnet distribution and the various groups taking part in future 
development.  The address is telnet@ncsa.uiuc.edu.  Send a message to 
telnet-request@ncsa.uiuc.edu to get on the list.

For the PC, we have a driver for the MICOM NI5210 (not the NI5010) board,
which has a list price of $395.00 and a driver for the Ungermann-Bass
(IBM) NIC Ethernet board.  These drivers won't be included until version 2.1
but may be available upon request.

For the Macintosh, we are testing a version which uses Apple's new EtherTalk
driver directly.  This is not in version 2.0, but will be in 2.1.

There will be a meeting at Advanced Computing Environment's TCP/IP conference
in December.  I will discuss user problems and the status of various
development projects.  This will be a good time to ask technical questions.

Thanks for your interest, some details follow the signature,

Tim Krauskopf
National Center for Supercomputing Applications (NCSA)
University of Illinois

timk@ncsa.uiuc.edu            (ARPA)
timk%newton@uxc.cso.uiuc.edu  (alternate)
14013@ncsavmsa                (BITNET)

--------------------------------------------------------------------
Fact Sheet
----------

National Center for Supercomputing Applications presents:

NCSA Telnet for the PC, version 2.0
NCSA Telnet for the Macintosh, version 2.0

These programs are copyrighted, but distributed in binary form with 
no license fee.  Source code will be available on November 1.

Features included in version 2.0 of NCSA Telnet:
-----------------------------------------------
DARPA standard telnet 
Built-in standard FTP server for file transfer
VT102 emulation in multiple, simultaneous sessions
Class A, B and C addressing with standard subnetting
Tektronix 4014 graphics emulation
Scrollback for each session
Each session in a different window (Macintosh)
Supports Croft gateway - KIP (Macintosh)
Capture text to a file (PC)
Full color support (PC)

How to obtain:
-------------
1) From a friend

The disk, documentation and files may be copied freely and distributed in
binary form, unmodified, with copyright notices intact.  This distribution
is free and no copies may be sold for profit.

2) Anonymous FTP from   uxc.cso.uiuc.edu   in the NCSA subdirectory.

The PC version is a tar file which contains binary files.  There is also a
compressed tar file with the same contents.  After the files are extracted
from the tar file, some binary transfer (e.g. kermit, NCSA Telnet) should
be used to download the files to the PC.  The documentation is in line
printer format.

The Macintosh version is in the NCSA/Mac subdirectory and consists of
several files encoded with BinHex 4.0 and/or Pack-It.  You may want to
consult the READ.ME file to determine which files to download.  Download
them with a binary transfer method (kermit, NCSA Telnet) and use BinHex
4.0 and/or Pack-It to extract the files.  The documentation is in
Microsoft Word 3.0 format.  

3) Diskette

On-disk copies, with a printed manual are available for $20 each, which
covers materials, handling and postage.  Orders can only be accepted if
accompanied by a check made out to the University of Illinois.  Send to:

NCSA Telnet orders (specify PC or Macintosh version)
152 Computing Applications Building
605 E. Springfield Ave.
Champaign, IL 61820

Hardware required:
-----------------
PC: IBM PC, XT, AT or compatible. 3COM 3C501 Etherlink board.
	IBM RT PC Baseband adapter support soon.
	Ungermann-Bass NIC board support soon.
	MICOM NI5210 Ethernet board support soon.

Mac: Macintosh 512K, Plus, SE or Macintosh II.  
	Kinetics, Inc. FastPath, EtherSC or Etherport SE.
	Kinetics gateway software or Stanford KIP (Croft) gateway software.
	Support soon for Apple EtherTalk board and software for the Macintosh II.

The best source of information about Kinetics is directly from the company.
Kinetics Inc.                     FastPath approx. $2500
Suite 110                         EtherSC approx. $1250
2500 Camino Diablo                EtherPort SE approx. $800
Walnut Creek, CA  94596
(415) 947-0998

Mailing List:
------------
Mail to telnet-request@ncsa.uiuc.edu to be added to the list of recipients.
To post messages to the list, mail to telnet@ncsa.uiuc.edu.
If your mailer cannot resolve ncsa.uiuc.edu, route mail through 
uxc.cso.uiuc.edu, also known as uiucuxc.arpa.

Other questions:
---------------
mail to telbug@ncsa.uiuc.edu (alternate: telbug%ncsa@uxc.cso.uiuc.edu)

-----------[000045][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 18:09:25 EDT
From:      bzs@BU-CS.BU.EDU (Barry Shein)
To:        comp.protocols.tcp-ip
Subject:   SUPDUP protocol


[Wednesday is the tenth anniversary for RFC736, TELNET SUPDUP Option]

I am certainly not casting aspersions at the SUPDUP protocol, in fact
it should be useful in any environment where the mode of interaction
models an intelligent ASCII terminal (that is, intelligent for an
ASCII terminal.) Whether SUPDUP is what I would sit down at my tabula
rasa and write today is yet another question, let's at least
distinguish between software lying on the shelf vs new efforts and
their costs. RFC's tend to represent and encourage both.

More importantly, if one generalizes to the point that all host-host
interactive interactions are made to appear similar in nature (gee,
it's only keystrokes and indices for their graphic representations
passing back and forth) then I believe the spirit of the thing is
lost.

That is, SUPDUP is a very specific protocol with very specific
definitions for interactions and a model of the world most closely
resembling a relatively fixed (generalized) ASCII terminal utilizing
telnet to speak to a remote host. It is very clever within its model,
but ten years have passed and some things have changed.

My point is that window protocols like X and NeWS almost certainly
-ARE- (plus or minus a little intention) SUPDUP for current times.
They perform nearly the same services and much, much more. My only
comment really was that if I were king of the universe (good start)
I would like to see people working on thinking about how these window
systems might be standardized and accepted and just leave SUPDUP more
or less alone as a standing standard (I have nothing against
interested parties sorting out changes that may be desired in SUPDUP
but I do think we as a community need to get on with other things.)

It goes something like this: If we don't lead, we surely will follow.

I would agree it might be early to standardize given the current
competition of proposed standards out there, but it's almost too late
for this community to begin talking about what they would like in a
standard (eg. subset support for ascii terminals has already been
rejected, it's not impossible to put into these windowing standards
but I don't believe either of them even entertains the possibility.
Should they? That's a question, and another good start.)

Nothing earth-shattering or shibboleth-violating here, mostly just
trying to open a discussion. If you find any *answers* in anything
I've said you've misunderstood me.

	-Barry Shein, Boston University

-----------[000046][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 18:10:56 EDT
From:      cam@columbia-pdn (Chris Markle acc_gnsc)
To:        comp.protocols.tcp-ip
Subject:   Mac's, Ethernet, & TCP/IP

Folks,

Does anyone out there know about software/hardware that implements "the"
TCP/IP protocols on a Mac for use over Ethernet? I will summarize for this
group if I get any responses.

Chris Markle - cam%columbia-pdn@acc-sb-unix.arpa - (301)290-8100

-----------[000047][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5-Oct-87 21:53:03 EDT
From:      bzs@BU-CS.BU.EDU (Barry Shein)
To:        comp.protocols.tcp-ip
Subject:   RCTE


From: David C. Plummer <DCP@QUABBIN.SCRC.Symbolics.COM>
>Opening the door to a more unhindered future question: Assuming
>"costly" includes money, when will public networking come up with a
>deterministic usage fee so that researchers can budget their
>communications costs instead of fretting?  I imagine most researchers
>want to spend money on research and correspond with colleagues, knowing
>from the outset how much each will cost; having to worry about variable
>communications charges that they possibly don't understand or care to
>understand is probably an undesired and recurring distraction.

I'd like to underscore this point; it's critical. This was the biggest
initial design goal that motivated the Cypress network project: fixed
and predictable costs from month to month (that is, costs not based on
data flow).

It's critical in more ways than one. When the cost is per-packet (or
whatever), one can waste money by using the network. When it's flat
rate, one can only waste money by not using the network. The
distinction is important.

One might argue that this would just encourage irresponsibility, but
in reality the former just encourages irresponsibility by those who
can bury their costs at the expense of those who cannot. I assume any
common denominator of price will be a mere bagatelle to many folks
anyhow. There should be other ways to control irresponsibility besides
mere chargeback (eg. limiting bandwidth into the network.) I suppose
the question is whether one sees the network as infrastructure or a
commodity.

It also, of course, encourages network access based purely upon
political clout within an organization (ie. the managers will limit it
to themselves, the rats always guard the cheese...) I suppose whether
or not this would be a negative factor is subject to discussion.

Predictability is critical within a University context (and, I
suspect, other business situations.) I can get statements from any
number of bean-counters around here (I collected these verbally during
the initial Cypress discussions) that they would far rather commit to
(eg) $500 a month than a varying cost of $300-$700 per month which
would *probably* average out to $500. Just keeping tabs on whether
some change in behavior has jumped that to $1000/mo involves staff
time better placed at the vendor's end (at which point they could
raise their flat fees which, I assume, would reflect average usages
rather than simply my singularities, they have more to work with to
respond to the situation other than simply imposing little rules.)

The question of course arises "what about a small organization that
truly believes they would benefit from per-quantum charges and feel
they are subsidizing the heavier users?" Well, for one thing other
adjustments could be made but more importantly one has to be able to
show that the economy of scale is working in general and, as I
believe, that the per-quantum costs would end up costing the smaller
user more (if rates are here, could bulk-rates be far behind? etc.)  I
suspect a sound financial argument could be made that the small user
is benefitting in perhaps less obvious ways (eg. large users would
tend to have multiple (separately charged for) connections and provide
a stable revenue base which is what it takes to re-tool
infrastructure, small users would probably tend to come and go, it's a
two-way street.)

	-Barry Shein, Boston University

-----------[000048][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 01:55:09 EDT
From:      SATZ@MATHOM.CISCO.COM (Greg Satz)
To:        comp.protocols.tcp-ip
Subject:   DDN Standard X.25 address questions

The latest DDN X.25 documentation that I have is dated December 1983. Is
there a newer version?

I also have a couple of questions regarding the DDN addressing
mechanisms when using X.25.

1) Has a DNIC been assigned to the DDN?

The DDN X.121 address consists of a seven-digit address, a flag
digit for physical/logical addressing, and a sub-address field consisting
of two digits.

2) I would appreciate an example of using the physical/logical
addressing modes. What are they used for?

3) Since sub-addresses are optional and are only used between consenting
DTE implementations, is it safe to ignore them completely? I guess I am
asking if anyone knows what these are used for as well.

It would be nice if BBN could provide documentation explaining their
X.121 addresses a little more clearly. The "how" is explained rather
nicely; however, the "what" and "why" were completely overlooked.
-------

-----------[000049][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 03:13:00 EDT
From:      SRA@XX.LCS.MIT.EDU (Rob Austein)
To:        comp.protocols.tcp-ip
Subject:   SUPDUP protocol, window systems, what issues are still current

Granted that window protocols are a subject deserving lots of skull
sweat and are good clean fun.  However, I don't think this eliminates
the continuing need for a "terminal" protocol (ie, something that
passes "keystrokes" in one direction and "characters" to be displayed
in a more or less boring and regular fashion in the other direction).

Two reasons for this:

 - Bandwidth, RTT, and cost constraints are still with us and
   presumably always will be.  Dr. Malthus always has the last laugh.

 - Most of the researchers in my building have some kind of bitmapped
   display running a window system.  Nevertheless, it seems that most
   of these people spend most of their time talking to two kinds of
   programs: command interpreters and text editors (both of these also
   come in various specialized flavors, such as lisp listeners and
   mail readers/composers).  I believe that this will be the case as
   long as the primary mode of data input is a keyboard.  While it is
   certainly possible and probably useful to hair things up with
   bitmapped graphics, people can get their work done using programs
   that operate within the "terminal" model.  There is no particular
   problem with having the terminal model support mice; mouse clicks
   can be represented by a "character" followed by coordinates in
   terms of character positions (an existing library for Lispm mice in
   ITS/Twenex EMACS via SUPDUP does this).

Taken together, I think this points out a continuing need for terminal
support.  In particular, I find it hard to believe that users faced
with a severe bandwidth or RTT problem will buy the argument that they
really want to be using a window system when they know that they could
be getting useful work done with a plain old terminal.  Certainly one
can add hair to the window protocol to deal with these cases, but
doing so is essentially resurrecting the terminal protocol inside the
window protocol; you may decide that this is what you want, but it's
by no means an open and shut issue.

I agree with John Wroclawski about the direction any new work on
terminal protocols should take, so I won't repeat it.

-----------[000050][next][prev][last][first]----------------------------------------------------
Date:      6 Oct 1987 06:25-PDT
From:      STJOHNS@SRI-NIC.ARPA
To:        SATZ@MATHOM.CISCO.COM
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: DDN Standard X.25 address questions
The 1983 version of the DDN X.25 standard is the most recent.

1)No  DNIC  has been assigned to any component of the DDN, and at
this writing, it doesn't look like one will be  assigned.   There
are only some 20 DNICs allocated to the US.

2) Using the current mappings, the physical mappings are directly
translatable into a PSN/Host Port pair.   Logical  addressing  is
implemented,  but  currently  not  used by anyone since for it to
work properly for the IP world, everyone on  the  subnet  has  to
implement  a  form of the Logical addressing.  Logical Addressing
allows several forms of address indirection: a) single host  with
multiple  ports,  b)  multiple hosts on different ports providing
the same service (eg Gateways), c) multiple  hosts  on  the  same
port,  time  sharing  the  port.   In  each  case, the address is
resolved at call setup time into a physical address.

3) On the C/30 network, using X.25 standard  with  IP  above  it,
sub-addresses  should  NOT  be specified and should be ignored if
they are.

As for the what and why, keep in mind,  we  have  to  maintain  a
valid  mapping  from  IP to whatever X.25 address we pick.  That
has a lot to do with "why".

Mike
-----------[000051][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 07:12:57 EDT
From:      HANK@BARILVM.BITNET (Hank Nussbacher)
To:        comp.protocols.tcp-ip
Subject:   MVS and X.25

Is there a way to connect an MVS system to Tcp/Ip via X.25?  What hardware
would be required?  Does the X.25 card from ACC solve the problem?  What
software can handle X.25 and Tcp/Ip in MVS?  Does the UCLA ACP or ACCES/MVS
solve the problem?

This is meant to be an alternate (cheap solution) rather than buying
hardware and software to connect an MVS system directly to the Ethernet.

Can it be done?

Thanks,
Hank

-----------[000052][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 09:13:24 EDT
From:      jon@Cs.Ucl.AC.UK (Jon Crowcroft)
To:        comp.protocols.tcp-ip
Subject:   TCP Performance Limitations


 Since TCP is a transport protocol, it has to reach the processes
lower protocols can't reach. This means (in most OS's (eg
Used Never In Xanadu)), at least one buffer copy, and one or two context
switches per packet. 

However good your windowing, and lossless the net, that bites 
bad. If the hardware could keep track of the actual user buffers (and scatter
gather them too) straight in and out of user processes, then you are down
to DMA speeds. The trick is to design clever enough hardware that the OS 
designers will trust not to trash the OS. 

You lose two ways because of current hardware - packet overrun (see many
writings by Dave Cheriton) causing lower throughput, and higher latency.

Another win would be hardware that 
provided the higher level protocols with a CRC [Say buffer descriptor
list contains pointer to where to put CRC in outgoing packets, and how to
do it for incoming packets].
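
As a purely illustrative sketch of the bracketed idea (no particular
board or chip implied), a transmit descriptor for such hardware might
carry something like:

#include <stdint.h>
#include <stddef.h>

/* One element of a scatter/gather list for a hypothetical interface
 * that can compute and insert checksums on the fly. */
struct sg_desc {
    void   *addr;             /* start of this fragment                     */
    size_t  len;              /* length of the fragment in bytes            */
};

struct xmit_desc {
    struct sg_desc frags[8];  /* gather list: protocol header + user buffers */
    int      nfrags;
    uint16_t cksum_start;     /* first byte covered by the checksum          */
    uint16_t cksum_offset;    /* where in the outgoing packet the hardware
                                 should deposit the checksum it computes     */
};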

It's instructive to look at fast gateways - two lance ethernet chips on a 
multibus can blow away the bus bandwidth - really fast boxes have no bus,
or have a binary tree bus a la buttergates...

Jon

-----------[000053][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 09:25:00 EDT
From:      STJOHNS@SRI-NIC.ARPA
To:        comp.protocols.tcp-ip
Subject:   Re: DDN Standard X.25 address questions

The 1983 version of the DDN X.25 standard is the most recent.

1)No  DNIC  has been assigned to any component of the DDN, and at
this writing, it doesn't look like one will be  assigned.   There
are only some 20 DNICs allocated to the US.

2) Using the current mappings, the physical mappings are directly
translatable into a PSN/Host Port pair.   Logical  addressing  is
implemented,  but  currently  not  used by anyone since for it to
work properly for the IP world, everyone on  the  subnet  has  to
implement  a  form of the Logical addressing.  Logical Addressing
allows several forms of address indirection: a) single host  with
multiple  ports,  b)  multiple hosts on different ports providing
the same service (eg Gateways), c) multiple  hosts  on  the  same
port,  time  sharing  the  port.   In  each  case, the address is
resolved at call setup time into a physical address.

3) On the C/30 network, using X.25 standard  with  IP  above  it,
sub-addresses  should  NOT  be specified and should be ignored if
they are.

As for the what and why, keep in mind,  we  have  to  maintain  a
valid  mapping  from  IP  to whatever X.25 address we pick.  That
has a lot to do with "why".

Mike

-----------[000054][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 09:35:00 EDT
From:      DCP@QUABBIN.SCRC.SYMBOLICS.COM (David C. Plummer)
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations


    Date:         Mon, 5 Oct 1987 11:24:26 EDT
    From:         Mark Bodenstein <MAB%CORNELLC.BITNET@wiscvm.wisc.edu>

    Could you provide more detail on how you unrolled this loop?

    (The complication being that the length of the loop is determined by
    the length of the data.  Some alternatives I can think of would be:

	...

There is a fourth way, which we (Symbolics) have used and which you did not
mention:

(a) Pick a compile-time unrolling factor, usually a power of 2, say 16 = 2^4.
(b) Divide the data length by the unrolling factor, obtaining a quotient
    and remainder.  When the unrolling factor is a power of two, the
    quotient is a shift and the remainder is a logical AND.
(c) Write an unrolled loop whose length is the unrolling factor.  Execute
    this loop <quotient> times.
(d) Write an un-unrolled loop (whose length is therefore 1).  Execute
    this loop <remainder> times.

-----------[000055][next][prev][last][first]----------------------------------------------------
Date:      Tue 6 Oct 87 10:22:40-EDT
From:      Michael Padlipsky <PADLIPSKY@A.ISI.EDU>
To:        bzs@BU-CS.BU.EDU
Cc:        DCP@SCRC-QUABBIN.ARPA, BILLW@MATHOM.CISCO.COM, tcp-ip@SRI-NIC.ARPA
Subject:   Re: SUPDUP protocol
Speaking of misunderstandings, please be aware that I'm NOT one of
SUPDUP's advocates.  Just trying to "call for the order of the day" by
asking for an explanation (which I'd still appreciate getting) of how
windowing sorts of things minimize number of transmissions.  If, however,
your point is that the need for progress outweighs the need to avoid
being charged for each character typed, so that windowing protocols
should become the focus of the discussion irrespective of their
properties in the cost dimension, I'm inclined to duly note it and
repeat my question to everybody else as to whether a genuinely
simple fix to RCTE (whether the protocol or the implementations)
wouldn't be worthwhile, in context.
   cheers, map
-------
-----------[000056][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 11:21:54 EDT
From:      mminnich@udel.EDU (Mike Minnich)
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations

In article <8710051601.AA14825@ucbvax.Berkeley.EDU>, MAB@CORNELLC.BITNET (Mark Bodenstein) writes:
> Could you provide more detail on how you unrolled this loop?
> 
> (The complication being that the length of the loop is determined by
> the length of the data.  Some alternatives I can think of would be:
> 
> 2. to unroll the loop for each possible data length, and chose and
>    execute the appropriate unrolled loop
> 

A simple technique that has worked well for me in the past is to unroll
the loop for the longest possible length and then compute a jump into
the unrolled loop based on the length of the data to be
checksummed/copied/etc.  Only one version of the unrolled loop is
needed in this case.

mike
-- 
Mike Minnich

-----------[000057][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 13:44:00 EDT
From:      franceus@TYCHO.ARPA (Paul Franceus)
To:        comp.protocols.tcp-ip
Subject:   CRC algorithms

Could someone please point me in the direction of a table-driven CRC algorithm?
We have currently implemented a CRC, but I would like to make use of a table
driven version.  If anyone out there has written one or knows where an 
algorithm can be found, I would appreciate it greatly.  BTW: this is for an
Ada implementation of X.25 that we are working on.

Thanks a lot,
   Paul F.
-------
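
Not an Ada answer, but for illustration here is a minimal table-driven
sketch in C of the FCS used at the X.25 frame level (the CCITT polynomial
x^16 + x^12 + x^5 + 1 in reflected form, preset to ones and complemented,
as in ISO 3309); the names are illustrative:

static unsigned short crctab[256];

/* build the 256-entry table for the reflected polynomial 0x8408 */
crc_init()
{
    unsigned short crc;
    int i, j;

    for (i = 0; i < 256; i++) {
        crc = i;
        for (j = 0; j < 8; j++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x8408 : crc >> 1;
        crctab[i] = crc;
    }
}

/* one table lookup per byte instead of eight shift-and-XOR steps */
unsigned short
crc_x25(p, len)
    unsigned char *p;
    int len;
{
    unsigned short crc = 0xffff;                /* preset to all ones */

    while (len-- > 0)
        crc = (crc >> 8) ^ crctab[(crc ^ *p++) & 0xff];
    return (crc ^ 0xffff);                      /* final complement */
}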

-----------[000059][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 15:23:13 EDT
From:      bzs@BU-CS.BU.EDU (Barry Shein)
To:        comp.protocols.tcp-ip
Subject:   SUPDUP protocol


From: Michael Padlipsky <PADLIPSKY@A.ISI.EDU>
>Speaking of misunderstandings, please be aware that I'm NOT one of
>SUPDUP's advocates.  Just trying to "call for the order of the day" by
>asking for an explanation (which I'd still appreciate getting) of how
>windowing sorts of things minimize number of transmissions.

Although at this point I would love to see the core window gnurds jump
in perhaps I could offer some examples.

In the first place, window systems (the hardware used to support them)
present new transmission opportunities and a need for solutions. A
straightforward example from X is the ability either to track the
pixel-by-pixel motion of a mouse or to request that the remote server
simply inform the client (with a single transmitted event) when
certain conditions occur, such as the mouse entering or leaving a
window. The rest of the tracking is done in the server.

[For those less grounded in such things let me point out that the
"server" is typically one large program in charge of the physical
screen, keyboard, mouse etc and the "clients" are the applications
programs which send requests to either the remote or local server.]

Similarly, keystrokes can be mapped into multiple character
transmissions on the server (by request of the client) and these
would typically be sent as one network transaction.

NeWS of course offers a whole other dimensionality in its ability to
send a program text (in postscript) to be executed locally by the
server's language interpreter. Such a text I assume could open a
window, display a form to be filled out, collect the user's entry and
zap it all back in one transmission.

[Let me stop right here and say I don't claim that any of these
features I describe are unique or even original with the systems I
mention, I am simply trying to stick to some examples I am familiar with.]

Thus modern, networked window systems (both of these use Internet
protocols for their transmissions) offer both more powerful problems
and more powerful solution models than previous protocols aimed
at keyboard/screen interactions.

>...If, however,
>your point is that the need for progress outweighs the need to avoid
>being charged for each character typed, so that windowing protocols
>should become the focus of the discussion irrespective of their
>properties in the cost dimension, I'm inclined to duly note it and
>repeat my question to everybody else as to whether a genuinely
>simple fix to RCTE (whether the protocol or the implementations)
>wouldn't be worthwhile, in context.

I think we can have both, all three; a fix to RCTE where it exists
currently (I don't have a version on the entire B.U. campus), progress
into the discussion of networked window systems, and cost reductions in
network transmissions under window systems -if these needs are expressed-!

That's the key point, I don't think such needs have ever been much
expressed, most of the window systems were developed within ethernet
environments where things like character-at-a-time overheads were
probably not very important.

The prospect (as people on this campus are asking for) of remote
access to facilities such as super-computers over long-haul networks
via windowed interfaces makes these issues more pressing. Data
visualization and this split interaction makes a lot of sense on
remote, high-end facilities with a graphically oriented workstation on
one's desk and a network connection.

I would dare to say that the transmissions we are already starting to
see generated by such interactions will make character-at-a-time
overhead seem like mere child's play. We're looking at the prospect of
a keystroke being echoed with a megabit or more of graphical data etc.

I suppose I could better allegorize my view as SUPDUP presenting a
finger in the dyke and others having run off to fetch some caulking to
put around the finger...it's a fine finger and the others will no
doubt come back with fine caulking.

	-Barry Shein, Boston University

-----------[000060][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 15:32:00 EDT
From:      DCP@QUABBIN.SCRC.SYMBOLICS.COM (David C. Plummer)
To:        comp.protocols.tcp-ip
Subject:   Re: SUPDUP protocol


    Date: Tue 6 Oct 87 10:22:40-EDT
    From: Michael Padlipsky <PADLIPSKY@A.ISI.EDU>

    Speaking of misunderstandings, please be aware that I'm NOT one of
    SUPDUP's advocates.  Just trying to "call for the order of the day" by
    asking for an explanation (which I'd still appreciate getting) of how
    windowing sorts of things minimize number of transmissions.  If, however,
    your point is that the need for progress outweighs the need to avoid
    being charged for each character typed, so that windowing protocols
    should become the focus of the discussion irrespective of their
    properties in the cost dimension, I'm inclined to duly note it and
    repeat my question to everybody else as to whether a genuinely
    simple fix to RCTE (whether the protocol or the implementations)
    wouldn't be worthwhile, in context.

One of my points is that the need for windowing and interactiveness is
great, and that having to worry about unrelated-to-that-work things like
number of packets and random monetary costs severely detracts from
progress in windowing and interaction.

Your question still stands, and I am not qualified to answer it.  I hope
people keep windowing and RCTE separate.  If you must think of them
together, try to think of RCTE being an optimization to windowing, not a
requirement (because of $$ constraints, etc).

-----------[000061][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 15:32:17 EDT
From:      MAB@CORNELLC.BITNET (Mark Bodenstein)
To:        comp.protocols.tcp-ip
Subject:   Re: Unrolling TCP Checksum Loop

I received quite a few responses to my question about unrolling the TCP
checksum loop.  Thanks to all who replied.

Probably the most elegant solution came from Al Marshall at Proteon.
I've taken the liberty of including it here:

--------------------
Mark,
The simple way to unroll the loops (which is what I am doing now for the Novell
drivers for ProNET-10) is to divide the input packet size (from the registers)
by the length of the unrolled loop.  The whole number is the number used for
iteration AFTER jumping into the appropriate point of the loop for the
remainder.  The "appropriate point of the loop" is the remainder times
the number of bytes in the instruction length for each "move".

There are other ways to accomplish the same thing such as doing the whole
number part first then doing a short single cycle loop for the remainder.

Hope this helps you.  The actual speed improvement is significant because of
the way the 86, 88, 286, and 386 do instruction pre-fetches: on each branch
instruction the pre-fetch queue is dumped and has to be refilled.  You could
see as much as 5 or 10 times the speed for things like this.
    -Al Marshall, Proteon
--------------------
Additional refinements came from CLYNN at BBN and from David C. Plummer
at Symbolics:

From CLYNN (paraphrased):

  - Use as much accumulator width as you've got, to process as many
    bytes at a time as possible.

  - For retransmissions, don't recompute the entire checksum, just
    subtract out the old header checksum and add in the new.

From David C. Plummer at Symbolics (paraphrased):

   - Make the unrolled loop length a power of 2.  Then the calculation
     of the quotient and remainder (see above) become a "shift" and an
     "and", respectively.
---------
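
The retransmission refinement above can be sketched in C as follows
(illustrative names, not code from the messages; old_word and new_word are
the 16-bit field before and after the change):

unsigned short
cksum_adjust(old_sum, old_word, new_word)
    unsigned short old_sum, old_word, new_word;
{
    unsigned long sum;

    /* new checksum = ~(~old_sum + ~old_word + new_word), in one's complement */
    sum = (unsigned long)(~old_sum & 0xffff)
        + (unsigned long)(~old_word & 0xffff)
        + (unsigned long)new_word;
    sum = (sum >> 16) + (sum & 0xffff);         /* fold the carries back in */
    sum = (sum >> 16) + (sum & 0xffff);
    return (~sum & 0xffff);
}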

Mark Bodenstein    (mab%cornellc.bitnet@wiscvm.wisc.edu)
Cornell University

-----------[000063][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 15:53:00 EDT
From:      sandrock@uxc.cso.uiuc.edu
To:        comp.protocols.tcp-ip
Subject:   Re: TCP/IP and DECnet



     I have an old DEC manual in front of me called: "Maintenance Oper-
   ation Protocol (MOP), Functional Specification Version 2.0, March 1978".
   It says: "To order additional copies of this document, contact the
   Software Distribution Center, Digital Equipment Corp., Maynard, MA
   01754". It also shows: "Order No. AA-D602A-TC".

     Hope this is of some help!

         Mark Sandrock, (sandrock@uiucuxc.UUCP)

-----------[000064][next][prev][last][first]----------------------------------------------------
Date:      Tue, 06 Oct 87 20:33:07 -0400
From:      Mike Brescia <brescia@PARK-STREET.BBN.COM>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TCP and Loss (inherently lossy nets)
I submit this note from Robert Cole, recently of University College London, in
the hopes that there might be someone (like Carl Sunshine) who may come up
with a citation for the paper mentioned.  This seems to be some hint of
argument counter to the reasoning that Greg Lauer described for lossy packet
radio nets.  (Robert did assent when I asked him if I could publish his note.)
    Mike

> From: Robert Cole <rhc>
> Message-Id: <11332.8710060755@rcole.lb.hp.co.uk>
> To: brescia@park-street (Mike Brescia)
> Date: Tue, 6 Oct 87 8:55:19 BST
> In-Reply-To: Message from "Mike Brescia" of Oct 05, 87 at 8:59 am
> Situation: Bristol Research Centre, Hewlett-Packard Laboratories.
> X-Mailer: Elm [version 1.5]
> 
> Mike,
> I remember a paper from a long while ago by (some permutation of)
> Sunshine, Grossman and Hindley which had (some permutation of)
> "Hop-by-hop and End-to-end" in the title. I remember it discussed a
> number of these issues and concluded that end-to-end was superior to
> hop-by-hop for flow control. I seem to remember that the arguments
> applied to a number of situations so perhaps they also apply to error
> management (which is not too different from flow control).
-----------[000066][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 19:08:57 EDT
From:      geof@imagen.UUCP (Geof Cooper)
To:        comp.protocols.tcp-ip
Subject:   TCP checksum unrolling

Here is one, undebugged, that illustrates the concept.  It
also uses the trick that (if you have it) you can use 32-bit
two's complement addition and add all the carries in at the
end (another trick that is sometimes faster is to generate
a 32-bit one's complement sum and then add the top and bottom
halves together to get the 16-bit sum).  Some C compilers
won't accept the weird syntax below; or maybe I should point
out, as you retch on the floor, that there is at least ONE
C compiler that DOES accept this syntax.

It is trivial to code it for all C compilers -- but
what you really want to do is code the exact intent of the
following into assembly language.  That makes it a lot faster
to add the two halves of a 32-bit word.

These tricks don't work for XNS checksums.  Our experience is
that this difference alone makes our XNS implementation a little
slower than our TCP implementation on a 68000.

- Geof

checksum(p, n)
    unsigned short *p;
    short n;
{
    short nloop;
    short nrem;
    unsigned long sum;

    sum = 0;
    if ( n > 0 ) {
        nloop = (n >> 3) + 1;
        nrem  = n & 7;

        switch ( nrem ) {       /* dispatch on the remainder, not the loop count */

            do {
                    sum += *p++;
                case 7:
                    sum += *p++;
                case 6:
                    sum += *p++;
                case 5:
                    sum += *p++;
                case 4:
                    sum += *p++;
                case 3:
                    sum += *p++;
                case 2:
                    sum += *p++;
                case 1:
                    sum += *p++;
                case 0: ;
            } while ( --nloop > 0 );
        }
    }

    sum = (sum >> 16) + (sum & 0xffff);
    sum = (sum >> 16) + (sum & 0xffff);

    return ( sum );
}

-----------[000067][next][prev][last][first]----------------------------------------------------
Date:      Tue, 6-Oct-87 20:17:06 EDT
From:      DEDOUREK@UNB.BITNET
To:        comp.protocols.tcp-ip
Subject:   seeking opinions on a small tcp-ip university network


We are new to the TCP/IP world.  We are planning to install a small
TCP/IP network within a University Engineering Building initially,
with growth to the whole campus expected long term.  The current
situation looks like this within the Engineering Building:
-- The campus is wired with a typical broadband LAN cable, currently
   supporting RS-232 terminal access to various mainframe and
   minicomputers (SYTEK 2000 product) and PC-Network interconnection
   for IBM PC's.
-- Building site A contains IBM 3090 mainframe running MVS operating
   system and IBM PC/RT running BSD 4.3.
-- Building site B contains SGI IRIS 1400 and SGI IRIS 2400 graphics
   workstations running UNIX and an IBM PC/RT running BSD 4.3.
-- Building site C contains a MicroVAX II running ULTRIX and a
   Sun standalone workstation running Unix
-- Building site D contains a VAX running VMS
The following interconnection schemes are under consideration:
-- TCP/IP Ethernet over the existing broadband cable
-- Install Ethernet baseband cable in the building; the current
   sites would be at the upper distance limit of "thin wire" Ethernet,
   we believe, making true Ethernet cable the likely choice.
-- Install an Ethernet at each building site (each is "room sized"),
   either regular or "thin wire" and gateway/bridge/? over the
   broadband cable
-- Install Ethernet at each building site and install baseband "backbone"
   cable to gateway/bridge/? the Ethernets.
Any opinions or comments would be appreciated.  Please reply via
electronic mail to the undersigned, and indicate if you think a
summary of responses would be of interest to the network.

John DeDourek
Professor of Computer Science

School of Computer Science
University of New Brunswick
P.O. Box 4400
Fredericton, New Brunswick, CANADA  E3B 5A3
(506) 453-4566

Electronic mail:
DEDOUREK@UNB.BITNET  -- BITNET/NETNORTH

-----------[000069][next][prev][last][first]----------------------------------------------------
Date:      Wed, 7-Oct-87 01:41:51 EDT
From:      JBVB@AI.AI.MIT.EDU ("James B. VanBokkelen")
To:        comp.protocols.tcp-ip
Subject:   How to unroll a loop ('D' if you know)

I got asked several times, so I'll reply to everyone.  The easy-to-understand
way is:
	...
	remainder = count % UNROLLING_FACTOR;
	count = (count/UNROLLING_FACTOR) + 1;	/* no fenceposts for me */
	switch (remainder) {
		case 0:
			count--;	/* optimize */
			goto top;
		case 1:
			goto rem_1;
		...			/* up to UNROLLING_FACTOR - 1 */
	}
top:	do_it;
	...				/* UNROLLING_FACTOR times in all */
rem_2:	do_it;
rem_1:	do_it;
	if (--count)		/* pre-decrement, else one extra full pass runs */
		goto top;
	...

The way it is usually done (because speed freaks are rarely willing to put
up with any real or supposed inefficiency in their compilers, and are coding
the critical parts in assembler) is by figuring out how large each occurrence
of "do_it" is in bytes, and multiplying that by the remainder, and either
adding that to the program counter (if you can), or using it as an index
in a JUMP instruction of some sort. If you have a CASE instruction, make sure
it is really more efficient than coding it out.  HLL users can take a look
at how their compilers do 'case' or 'computed goto' - maybe it isn't losing
much time.

The smaller the loop test's elapsed time is relative to the time necessary
to "do_it", the less unrolling buys you.

If I coded it wrong, feel free to punish me publicly, but it probably doesn't
belong on tcp-ip.  If you are horrified by gotos, please don't tell me...

jbvb

-----------[000070][next][prev][last][first]----------------------------------------------------
Date:      Wed,  7 Oct 87  8:49 PDT
From:      Michael Stein                        <CSYSMAS@UCLA-CCN.ARPA>
To:        tcp-ip@sri-nic.arpa
Subject:   domain servers implementation notes?
Is there any later documentation than RFC973 (Domain System
Changes and Observations)?  The *LARGE* warnings in RFC883 make
me worry about missing any changes in the spec.

(Implementation hints & tips also appreciated).

-----------[000071][next][prev][last][first]----------------------------------------------------
Date:      7 Oct 87 03:11:47 GMT
From:      amethyst!rsm@arizona.edu  (Robert Maier)
To:        tcp-ip@sri-nic.arpa
Subject:   Re: SUPDUP
In article <870928095131.6.DCP@KOYAANISQATSI.S4CC.Symbolics.COM> David
C. Plummer writes:

>There is and has been a display oriented terminal protocol for a
>long, long time. ...  I'm referring, of course, to the SUPDUP
>protocol, RFC 734 of 1978.  If you want graphics, you can do that
>too:  the SUPDUP Graphics extension, RFC 746 March 1978.

It's my impression that SUPDUP (as described in Richard Stallman's
1983 AI Lab Memo 644; I don't have the RFC's handy) doesn't support
alteration of display terminal characteristics after the connection
initialization takes place.  Wouldn't that rule out its use in a
modern windowing environment?  Windows can be resized.

It looks like a very nice job, just the same.  I have a generic
Unix implementation here; are there any implementations of the
graphics extension available for SunView or X Windows?

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Robert S. Maier   | Internet: rsm@amethyst.ma.arizona.edu
Dept. of Math.    | UUCP: ..{allegra,cmcl2,hao!noao}!arizona!amethyst!rsm
Univ. of Arizona  | Bitnet: maier@arizrvax
Tucson, AZ  85721 | Phone: +1 602 621 6893  /  +1 602 621 2617
-----------[000072][next][prev][last][first]----------------------------------------------------
Date:      Wed, 7-Oct-87 09:03:55 EDT
From:      mckenzie@LABS-N.BBN.COM (Alex McKenzie)
To:        comp.protocols.tcp-ip
Subject:   Change of address

Folks,

There is some evidence that following my latest move from one host to another
within BBN my mail is not all following me.  For those of you who may want to
send me mail in the future, or who are maintaining a mailing list with my name
on it, please note that my correct e-mail address is
 mckenzie@bbn.com
in spite of the fact that the local systems generally put some other incorrect
character string in the "From" field of messages I send.

For those of you who don't know me or ever want to write, my apologies for
bothering you with this broadcast.

Thank you,
Alex McKenzie
mckenzie@bbn.com
 

-----------[000075][next][prev][last][first]----------------------------------------------------
Date:      Wed, 7-Oct-87 12:44:00 EDT
From:      WANCHO@SIMTEL20.ARPA
To:        comp.protocols.tcp-ip
Subject:   DDN Backbone bandwidth vs. speed

As the host administrator for this machine, I often get asked why the
network is so slow.  Part of the answer is that this host is but a
2040 with 512KW, soon to be upgraded to a 2065 with 4MW.  That should
make a significant difference in that we will finally be able to run
the TCP service locked into the highest queue without swamping the
system.

But, for the rest of the answer, I point out that the DDN backbone is
still operating at 56Kbps with nodes which apparently cannot handle
higher rates.  That configuration may have been adequate when the net
consisted of about 300 to 500 hosts and the protocol was the more
efficient, but less flexible NCP (in my opinion).  Now, we have an
order of magnitude more hosts sending TCP traffic through the net, and
the links are still 56Kbps.  Oh, there may be more links, more
cross-country paths, and even satellite hops added on a weekly basis
to handle the traffic.  But, the basic *speed* is still 56Kbps,
although the bandwidth *may* be greater.

Meanwhile, campus LAN architects sneer at anything less than 10Mbps
to get any work done.

Is it really unreasonable to ask why the backbone hasn't been upgraded
to at least T1 service?  Are there any plans for such an upgrade?  If
not, then what?  Still more 56Kbps links?  Does that *really* solve
the problem?  What should I tell my users (one in particular) to
expect, and when?

--Frank

-----------[000076][next][prev][last][first]----------------------------------------------------
Date:      Wed, 7-Oct-87 12:58:04 EDT
From:      ihm@nrcvax.UUCP (Ian H. Merritt)
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations

>The instruction count sounds low to me, how about 10 times more?

This sounds more reasonable.

>
>(A 1000 byte packet sounds like it would take 500 adds just to
>compute the TCP checksum, not to mention a 64K packet).

Since the 1's complement arithmetic is so flexible, this could of
course be improved on 32 bit processors, for example, but still
represents significantly more than 1000 instructions...
	.
	.
	.

>Unfortunately for a TCP connection, most of the checksum overhead
>is in the TCP checksum (which is an end-to-end check) and this
>sounds harder to move off of the general purpose CPU.  The idea
>would be to let your general purpose 14 MIP[S] CPU do general
>purpose work rather than adding up checksums.
>
	.
	.
	.

>I have been thinking of how to design a T3 (45Mb) type speed
>packet switch (just thinking) and there are some real problems
>with doing IP packet header processing when you need to process a
>packet every 6 us.  (Voice packets want to be about 100 bytes so
>you need to be able to handle about 56K packets/sec).
>
>Virtual circuits sure seem easier at the packet level at this speed
>(smaller packet overhead too).  Of course, a virtual circuit could
>carry embedded IP packets.

This suggests a dedicated packet concentrator arrangement such that
long-haul very high speed (VHS (:->) ) circuits could be utilized by
grouping large numbers of small (i.e. ip/tcp-size) packets bound for
the same region together into mostly-reliable megapackets shipped over
the high-speed link to be unpacked and sent off to gateways at the
far end.

TCP's reliability features would be sufficient to permit this to work
with little or no megapacket-level error handling, but such
functionality could be included by using nonflexible (therefore
simple) algorithms implemented in VLSI that operate on the entire
megapacket or by cutting it up into arbitrary segments without any
content-specific knowledge.  The common-channel interoffice signalling
(CCIS) protocol used by the bell system incorporated a similar scheme
(albeit on a smaller (and slower) scale), allowing selective
retransmission of portions of a megapacket while verifying the
validity of the rest of the data.
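
Purely as an illustration of the idea (invented names and sizes, not CCIS
or any real format), a megapacket header might look like:

#define MP_MAXSEGS 64                       /* arbitrary */

struct megapacket_hdr {
    unsigned long   mp_seq;                 /* megapacket sequence number      */
    unsigned short  mp_nsegs;               /* fixed-size segments that follow */
    unsigned short  mp_seglen[MP_MAXSEGS];  /* bytes used in each segment      */
    unsigned short  mp_segsum[MP_MAXSEGS];  /* per-segment checksum, so only   */
                                            /* damaged segments are resent     */
};
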
					--i

-----------[000077][next][prev][last][first]----------------------------------------------------
Date:      Wed, 7-Oct-87 13:34:20 EDT
From:      merlin@hqda-ai.UUCP (David S. Hayes)
To:        comp.sources.wanted,comp.protocols.tcp-ip
Subject:   SUPDUP and BSD UNIX


     All this discussion of SUPDUP makes it look interesting.  I
have some machines here that can run SUPDUP, but my Unix engine
isn't one of them.  Does anyone know where I can find a (free, no
budget available here) set of SUPDUP code for a Sun BSD Unix box?
I'd like to get both a client and a server, but for free, I'll
take what I can get.  Thanks,

-- 
David S. Hayes, The Merlin of Avalon	PhoneNet:  (202) 694-6900
UUCP:  *!uunet!cos!hqda-ai!merlin	ARPA:  ai01@hios-pent.arpa

-----------[000078][next][prev][last][first]----------------------------------------------------
Date:      Wed, 7-Oct-87 14:58:00 EDT
From:      WESTCOTT@G.BBN.COM
To:        comp.protocols.tcp-ip
Subject:   Re: TCP and Loss (inherently lossy nets)

Mike,

Hop by hop retransmissions are necessary in packet radio networks 
because PRs must transmit over links with relatively low probability
of success.  Imagine a 5 hop path with a  probability of 80% success
per hop (~30% over net).  When that is coupled with congestion and other
problems along a longer Internet route, then it is extremely difficult
and costly (in terms of packet transmissions) to attempt to maintain
a reliable connection.
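
Some rough arithmetic behind that, purely illustrative (it ignores
acknowledgement traffic and assumes independent losses):

#include <stdio.h>

main()
{
    double p = 0.8;             /* per-hop success probability */
    int hops = 5, i;
    double reach = 1.0;         /* chance of getting as far as hop i */
    double per_attempt = 0.0;   /* expected transmissions per end-to-end try */

    for (i = 0; i < hops; i++) {
        per_attempt += reach;
        reach *= p;             /* 'reach' ends up as p to the 5th */
    }
    printf("hop-by-hop retransmission: %.2f transmissions per packet\n",
           hops / p);
    printf("end-to-end only:           %.2f transmissions per packet\n",
           per_attempt / reach);
}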

Presently, PR nets also use the timing information from their hop
acknowledgements to apply backpressure for congestion control; its
known as the pacing algorithm.  The reasoning is that rather than fill
all the buffers in a data path, limit the packets forwarded to what
the next PR can handle.  If congestion appears on the far side of the
net, traffic generation may be slowed by increasing the pacing delays
all the way back to the source.  Smart sources slow down, dumb sources 
get their packets dropped by the source's attached PR.

Hop by hop retransmission is limited to several attempts along the
routing path and a few attempts directed to "any PR who can route this
packet toward the destination".  The latter is known as alternate
routing and becomes most useful when you've lost connectivity with
your neighbor (perhaps it's driven under a bridge or behind a
building).  

Because of the uncertainty of successful retransmission, PR nets count
on a reliable end-to-end protocol as well.  One way to look at your
question is that PRnets, with per hop acknowledgements, approach the
non-lossy nets in probability of successful transmissions.  I agree
with your thought that due to "congestion => dropped packets"
behavior, simply using hop by hop acknowledgements seems fruitless.

Jil

-----------[000079][next][prev][last][first]----------------------------------------------------
Date:      Wed, 7-Oct-87 15:52:05 EDT
From:      gardner@UXC.CSO.UIUC.EDU (Michael G. Gardner)
To:        comp.protocols.tcp-ip
Subject:   pc telnet 2.0

What did you use to verify your tek display?  Programs that run on UXC and
generate 4014 codes for my DEC vt240 come out scrambled on the pc/at with
ega card.  
Also, where is chapter 6?
tnx
mgg
----------------------------------------------------------------------------
Assistant Director - Computer Services Office - University of Illinois
Michael G. Gardner	   217-244-0914
UUCP:    {ihnp4,pur-ee,convex}!uiucdcs!uiucuxc!gardner
ARPANET: gardner%uxc@a.cs.uiuc.edu  CSNET:  gardner%uxc@uiuc.csnet
ICBM:    40 07 N / 88 13 W          BITNET: gardner@uiucuxc
US Mail: Univ of Illinois, CSO, 1304 W Springfield Ave, Urbana, IL  61801

-----------[000080][next][prev][last][first]----------------------------------------------------
Date:      Wed, 7-Oct-87 17:45:52 EDT
From:      gardner@UXC.CSO.UIUC.EDU (Michael G. Gardner)
To:        comp.protocols.tcp-ip
Subject:   pc telnet 2.0

What did you use to verify your tek display?  Programs that run on UXC and
generate 4014 codes for my DEC vt240 come out scrambled on the pc/at with
ega card.   Is there some way to specify a default domain?  

Also, where is chapter 6?
tnx
mgg
----------------------------------------------------------------------------
Assistant Director - Computer Services Office - University of Illinois
Michael G. Gardner       217-244-0914
UUCP:    {ihnp4,pur-ee,convex}!uiucdcs!uiucuxc!gardner
ARPANET: gardner%uxc@a.cs.uiuc.edu  CSNET:  gardner%uxc@uiuc.csnet
ICBM:    40 07 N / 88 13 W          BITNET: gardner@uiucuxc
US Mail: Univ of Illinois, CSO, 1304 W Springfield Ave, Urbana, IL  61801

-----------[000081][next][prev][last][first]----------------------------------------------------
Date:      Wed, 7-Oct-87 17:45:54 EDT
From:      rouquett@castor.usc.edu (Nicolas F Rouquette)
To:        comp.protocols.tcp-ip
Subject:   TCP/IP on TI-Explorers

Has anybody implemented TCP/IP on TI-Explorers?
I do not know how to send data between the TI-Explorers using streams.
Any help concerning this will be greatly appreciated.
---------------------------------------------------------------------
Nicolas Rouquette
--------------------------------------------------------------------

-----------[000082][next][prev][last][first]----------------------------------------------------
Date:      Thu, 8-Oct-87 01:47:00 EDT
From:      PAP4@AI.AI.MIT.EDU ("Philip A. Prindeville")
To:        comp.protocols.tcp-ip
Subject:   Re: SUPDUP and BSD UNIX

There used to be supdup sources kept on Borax.LCS.MIT.Edu (which was
used for tourists, amongst other dubious functions) that was ftp'ble;
also bootp servers, the pc/ip sources, argus, and other goodies.
The machine is defunct now.  However, I'm sure the repository of
sources has been backed-up on tape, if not moved to another machine.

I'm sure some other people on this list might know what happened
to this software.  (Rob, Swa, are you out there?)

-Philip

-----------[000083][next][prev][last][first]----------------------------------------------------
Date:      Thu, 8-Oct-87 01:56:35 EDT
From:      PAP4@AI.AI.MIT.EDU ("Philip A. Prindeville")
To:        comp.protocols.tcp-ip
Subject:   Re: TCP and Loss (inherently lossy nets)

Where can I find a complete description of IP packet radio networks?
I'd like to know everything about them: routing algorithms; link
layer protocols; hardware involved; reliability and throughput, etc.

Also, has anyone given any thought to using digital mobile phones as
a subnet media?  Or are most mobile phone nets in the US still analog?

Thanks in advance,

-Philip

-----------[000084][next][prev][last][first]----------------------------------------------------
Date:      Thu, 8-Oct-87 02:48:29 EDT
From:      PAP4@AI.AI.MIT.EDU ("Philip A. Prindeville")
To:        comp.protocols.tcp-ip
Subject:   Re: SUPDUP protocol

> then."  Perhaps SUPDUP was (and still is?) ahead of its time by assuming
> that interaction is important and communication is cheap.

Well, someone suggested that SUPDUP is archaic, and that we should devote
our attention instead to developing applications based on windowing
systems such as X.

I have a hard time imagining BITBLTing across the Internet at 56 kbps (and
less).  Now if we had T1 or T3...  In any case, such bitmap transfers
would be slower than waiting for the remote host to do your editing for
you...  And gobble much more bandwidth.

-Philip

-----------[000085][next][prev][last][first]----------------------------------------------------
Date:      Thu, 8-Oct-87 11:09:56 EDT
From:      kline@UXC.CSO.UIUC.EDU (Charley Kline)
To:        comp.protocols.tcp-ip
Subject:   Re: More on TCP performance

NSC did a few studies on NETEX performance at the last NEXUS I was at.
The performance numbers varied about linearly with the crank of the
particular CPU's involved, indicating an inefficiently coded protocol.

If you're interested in the theories of Hyperchannel hardware performance
(which is crucial if you're trying to put new protocols on Hyperchannels),
a paper by Franta and Heath, "Performance of Hyperchannel Networks" is
excellent reading. I have hard copies if anyone is interested.

-----
Charley Kline
University of Illinois Computing Services
Internet: kline@uxc.cso.uiuc.edu
Bitnet:   kline@uiucvmd.bitnet
UUCP:     {ihnp4,seismo,pur-ee,convex}!uiucdcs!uiucuxc!kline

-----------[000086][next][prev][last][first]----------------------------------------------------
Date:      Thu, 8-Oct-87 12:00:04 EDT
From:      markl@ALLSPICE.LCS.MIT.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: SUPDUP and BSD UNIX

There is a supdup TAR file in /pub/supdup.tar available via anonymous
FTP from thyme.lcs.mit.edu.  We use this implementation fairly
regularly, and it seems to work well enough.  I believe it was written
by David Bridgham at LCS several years ago; I don't know who may have
hacked it since then.

markl

Internet: markl@ptt.lcs.mit.edu

Mark L. Lambert
MIT Laboratory for Computer Science
Distributed Systems Group

----------

-----------[000087][next][prev][last][first]----------------------------------------------------
Date:      Thu, 8-Oct-87 12:05:47 EDT
From:      ihm@nrcvax.UUCP (Ian H. Merritt)
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations

>There is a fourth way that we (Symbolics) have done which you did not
>mentioned:
>
>(a) Pick a compile-time unrolling factor, usually a power of 2, say 16 = 2^4.
>(b) Divide the data length by the unrolling factor, obtaining a quotient
>    and remainder.  When the unrolling factor is a power of two, the
>    quotient is a shift and the remainder is a logical AND.
>(c) Write a unrolled loop whose length is the unrolling factor.  Execute
>    this loop <quotient> times.
>(d) Write an un-unrolled loop (whose length is therefore 1).  Execute
>    this loop <remainder> times.

Or if you have memory to burn (which is fast becoming a common
condition), just unroll the loop for the maximum condition and branch
into it at the appropriate point to process the length of the actual
packet.
					--i

-----------[000088][next][prev][last][first]----------------------------------------------------
Date:      Thu, 8-Oct-87 14:27:14 EDT
From:      tedcrane@TCGOULD.TN.CORNELL.EDU (Ted Crane)
To:        comp.protocols.tcp-ip
Subject:   Re: TCP/IP and DECnet

In article <159@medivax.UUCP> chinson@medivax.UUCP (Chinson Yi) writes:
>we have some DECserver 100 accessing it.  We are trying to
>install a TCP/IP product on it to communicate with our Ultrix
>machine and I am wondering if the DECnet and TCP/IP will coexist
>on the same machine sharing a DEQNA.  We will be getting 

There've been a few replies to this.  One mentioned putting DECnet up on
the Ultrix machine.  This is, in some ways, a limited solution but, *for
what it does*, it works very well.  Of course, it isn't free...

-ted
-- 
- ted crane, alias (tc)
tedcrane@tcgould.tn.cornell.edu                       BITNET: tedcrane@CRNLTHRY
{decvax!ucbvax}!tcgould.tn.cornell.edu!tedcrane             DECnet: GOPHER::THC
                                                                 (607) 273-8768

-----------[000089][next][prev][last][first]----------------------------------------------------
Date:      Thu, 8-Oct-87 20:23:06 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  NSFNET woe: causes and consequences

Ken,

Thanks very much for your thoughtful and informative response. Like you,
I do believe the proximate cause of the psc-gw problems is running
short of virtual-circuit resources in the X25 interface; however, I am
a little worried about the workaround you suggest - shortening the
idle timer in the PSN itself. I have verified that packets do get
lost if traffic is flowing at the time of the VC clear due to the
interface itself, even in loopback. I think the eventual resolution
must be rebuilding the driver to reclaim VCs on the basis of time and
use, much the same way the PSNs must handle that for themselves.

-----------[000090][next][prev][last][first]----------------------------------------------------
Date:      Thu, 8-Oct-87 20:50:07 EDT
From:      barmar@think.COM (Barry Margolin)
To:        comp.protocols.tcp-ip
Subject:   Re: SUPDUP protocol

In article <266223.871008.PAP4@AI.AI.MIT.EDU> PAP4@AI.AI.MIT.EDU ("Philip A. Prindeville") writes:
>I have a hard time imagining BITBLTing across the Internet at 56 kbps (and
>less).  Now if we had T1 or T3...  In any case, such bitmap transfers
>would be slower than waiting for the remote host to do your editing for
>you...  And gobble much more bandwidth.

How often would you have to transfer huge bitmaps?  About the only
time would be when you dump a screen to a file or printer.  Most of
the time the units that window systems operate on are much higher
level, such as characters, lines, polygons, and windows.

Assuming that packet transmission cost is the same regardless of the
packet size, SUPDUP and windowing protocols can have about the same
network cost.  In SUPDUP, each keystroke results in a tiny packet (one
TCP octet) being sent from the user's machine to the remote machine,
and a similar packet being returned.  In X, it results in a keystroke
event packet, and an output packet being returned; X has its own
headers and stuff, so these packets are larger than the corresponding
SUPDUP packets, but they are still just one packet each way.  This
requires that applications use X efficiently; for example, it has the
ability to transmit an event when a key is pressed and when it is
released, but it can be told not to bother sending the KeyUp.
Similarly, mouse tracking is usually done in the local host, not in
the remote; an event is generated when a mouse button is pressed,
when boundaries are crossed, etc., unless the application really needs
to see all mouse motion.
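
As a concrete (if simplified) illustration, an X client selects only the
events it wants; with Xlib that is roughly (my function name, standard
Xlib calls):

#include <X11/Xlib.h>

/* Ask for key presses, button presses/releases, window crossings and
 * exposures, but not key releases or pointer motion, so ordinary typing
 * and mouse use cost one event each rather than a stream of motion events. */
void
select_cheap_events(dpy, w)
    Display *dpy;
    Window w;
{
    XSelectInput(dpy, w,
                 KeyPressMask | ButtonPressMask | ButtonReleaseMask |
                 EnterWindowMask | LeaveWindowMask | ExposureMask);
}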

---
Barry Margolin
Thinking Machines Corp.

barmar@think.com
seismo!think!barmar

-----------[000091][next][prev][last][first]----------------------------------------------------
Date:      Fri, 9-Oct-87 04:03:58 EDT
From:      JBVB@AI.AI.MIT.EDU ("James B. VanBokkelen")
To:        comp.protocols.tcp-ip
Subject:   Re: TCP/IP on TI-Explorers

TI offers an add-on software package that implements at least FTP, Telnet,
and user-program access to TCP/IP on the Explorer.  I have observed it to
have a few quirks, but it definitely gets you on the air (I've never used
it, just supported my own product for customers talking to Explorers).

I don't know part numbers or configuration requirements, but that's what your
TI salesman is for...

jbvb

-----------[000092][next][prev][last][first]----------------------------------------------------
Date:      Fri, 9-Oct-87 09:43:38 EDT
From:      PERRY@VAX.DARPA.MIL (Dennis G. Perry)
To:        comp.protocols.tcp-ip
Subject:   Re: DDN Backbone bandwidth vs. speed

Frank, all it takes is money.  Do you have some, or is DARPA/DCA supposed
to foot the bill?

dennis
-------

-----------[000093][next][prev][last][first]----------------------------------------------------
Date:      Fri, 9-Oct-87 22:20:14 EDT
From:      jbvb@ftp.UUCP (James Van Bokkelen)
To:        comp.protocols.tcp-ip
Subject:   HP3000 TCP/IP summary

There seem to be two TCP/IP implementations for the HP3000.  Neither is really
what the person I was asking this for wanted, at least at the moment...

One was done by BBN for a government contract.  It had user and server Telnet,
and user FTP.  It was done on a Series 3 under MPE IV, and later ported to a
Series 44 under MPE V/P.  It needs modifications to the O/S, so you have to
have a source license.  BBN says they aren't pushing it, but money could
persuade them to bring it up on newer machines/versions of the O/S.

Another is the Netxport II transport layer used by HP's Networking Services
product.  It is apparently a TCP/IP transport layer, developed for use by 
HP's private protocols, but supposedly accessible by user-written programs
for the standard ARPA services.  The Wollongong Group is said to be developing
FTP, Telnet and SMTP, presumably on top of the existing transport.  I was told
that this was scheduled for availability in April, 1988.

jbvb

-----------[000094][next][prev][last][first]----------------------------------------------------
Date:      Fri, 9-Oct-87 23:57:27 EDT
From:      jqj@drizzle.uoregon.EDU (JQ Johnson)
To:        comp.protocols.tcp-ip
Subject:   Re: supdup protocol

An important point about the evolution from telnet-style protocols to
X-style windowing protocols is that a parallel evolution is towards
remote file systems (e.g., though not i.e., SUN NFS).  The pair of
trends implies that there are now many interesting alternatives available
for standardized distributed computing.  Some examples:
1/ an interface to a remote command language interpreter that is
extremely smart about local editing (e.g. SUN cmdtool or the various
menu-based command extensions).  The menu-based extensions amount to PFCs,
but with a better user interface.  They allow the transmission of a whole
command or part of it in a single packet rather than c-a-a-t (at the user's
typing rate).
2/ special purpose RPCs for typical commands, often with arguments that
are built automatically by the software on the local workstation.  I never
run sysline style programs remotely over a telnet stream!
3/ transparent local editing.  At least in some cases, it makes much more
sense to download a whole file and edit it locally.  That was a user-interface
nightmare when downloading meant firing up FTP (but that's the way Symbolics
tcp/ip implements it).  A remote file system gives you much more flexibility
and syntactic sugar.  Note that if your charges are per-packet with a max.
packet size of 128 bytes, and you plan to type 1K keystrokes during the
editing of a single file, then even if the file is 127K bytes long it is
cheaper to download it!  And of course an intelligent system design allows
downloading of only the pieces of the file actually needed.

Granted such things don't work well if your network connection is 9600b.
They work reasonably at 56Kb, though, given careful tuning.  And they
are often a big win not just in terms of packet charges but in terms of
latency -- I'd much rather wait 5 more seconds for my (local) editor to
fire up on a remote file than wait 1 sec. for the echo of every keystroke!


-----------[000095][next][prev][last][first]----------------------------------------------------
Date:      Sat, 10-Oct-87 11:28:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: TCP and Loss (inherently lossy nets)

Mike,

On a lossy link, it is better to retransmit on the link to confine the
amount of retransmission to the link rather than pay the price throughout
the network. One uses end/end retransmission to recover from more cataclysmic
failures (loss of underlying X.25 VC, network partitioning, crash of a
packet switch, etc.).

On really noisy links, it is better to use forward error correction
because checksums will fail almost every time and retransmission then
is not a good recovery tool.

Vint

-----------[000096][next][prev][last][first]----------------------------------------------------
Date:      Sat, 10-Oct-87 11:34:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: RCTE


There hasn't been much pressure from the data communication users for
flat fee arrangements.  There are, of course, leased or dedicated
circuits and flat rate fees in local calling areas (voice). 

I suggest that you would find more concrete answers if you went to
one or more public carriers to ask about options for flat rates.

Vint

-----------[000097][next][prev][last][first]----------------------------------------------------
Date:      Sat, 10-Oct-87 12:28:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:      MVS and X.25

Hank,

It is my understanding that the ACC X.25 interface does give you MVS
TCP/IP via X.25.

Vint Cerf

-----------[000098][next][prev][last][first]----------------------------------------------------
Date:      Sat, 10-Oct-87 12:40:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   [John Robinson <jr@LF-SERVER-2.BBN.COM>: Re: TCP performanc...]


I found this note of considerable interest; fast packet switching (FPS)
is become an important focus of attention in several places - Bell Labs,
Bellcore, Univ Washington, and elsewhere. I believe it has the potential
to become the principal switching technology of the 90's and at the speeds
shwon in the laboratory (aggregate switching rates in the tens of gigabits
per second) could handle voice, data and possibly video (especially if
compressed).

Vint Cerf
	
Begin forwarded message
Received: from LF-SERVER-2.BBN.COM by A.ISI.EDU with TCP; Tue 6 Oct 87 17:13:19-EDT
Date: Tue, 06 Oct 87 16:31:43 -0400
From: John Robinson <jr@LF-SERVER-2.BBN.COM>
Reply-To: jr@BBN.COM
To: CERF@A.ISI.EDU
Subject: Re: TCP performance limitations
Return-Path: <jr@LF-SERVER-2.BBN.COM>

It is interesting that the Bell&al switching fabrics are, at a lower
level, hardware packet switches.  This comparison was in fact the seed
of the Butterfly idea in Crowther's head.  Reduce the routing decision
to something simple enough, and then speed the whole thing up by doing
it in hardware (1976).  I was intrigued when Luderer's (Bell Labs)
presentation on fast packet switching at ISS87 last March alluded to
the Butterfly as a commercial realization of FPS technology to build a
multiprocessor.  Computing and communications are tending to merge
here, as has often been said at other levels.  Conservative
projections of Butterfly technology we have done make 45mbs trunks and
FDDI LANs look entirely feasible for next-generation IP switching.

The phone application of packet switching isn't concerned about
end-end integrity too much, but they naively assume that plentiful
bandwidth and high speeds together mean that error detection and
recovery and, more importantly, resource management, won't ever be
necessary.  I think the current difficulties the Arpanet is having
under heavy load are a consequence of a similar attitude in the early
Arpanet days.  When peak utilizations are 20%, you don't have to try
very hard to control resource utilizations.  So the only question is,
will the supply of phone trunking bandwidth keep enough ahead of
demand come FPS.  Right now, I'd guess it will for at least long
enough for FPS to get accepted, due to rapid deployment of fiber and
deregulation.  This depends on possible shakeouts in the IECs, I
suppose.  This doesn't apply to the non-US world.

As for silicon doing IP or CLNS, I'd expect that this is entirely
within reason already.  I would probably opt for a more powerful
header checksum, in fact, but this choice may already be moot.  Too
bad that CRC is so easy in hardware and hard (compared to simple sum)
in software.  Since only the header is checksummed, however, the check
can overlap reception of the user data and need not add any processing
delay at all.

I am not calibrated on what tcp-ip considers worth airing; please
redistribute this note if you find it interesting enough.

/jr
jr@bbn.com or jr@bbn.uucp

          --------------------
End forwarded message
		

-----------[000099][next][prev][last][first]----------------------------------------------------
Date:      Sat, 10-Oct-87 12:54:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: TCP and Loss (inherently lossy nets)

For starters, read the November 1978 IEEE Proceedings to get a
good overview of packet radio nets.

Vint Cerf

-----------[000100][next][prev][last][first]----------------------------------------------------
Date:      Sat, 10-Oct-87 13:09:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations


Thanks for the observations - what you describe also bears an uncanny
similarity to the internals of the early TYMNET, which aggregated bytes
from terminals into frames that were sent reliably on a hop-by-hop
basis.  The frames were broken down at each hop and re-formed, based on
new traffic arriving at that node and the fact that the bytes in a
frame might be destined to flow out on different trunks and so had
to be re-marshalled into new frames.

Vint

-----------[000101][next][prev][last][first]----------------------------------------------------
Date:      Sat, 10-Oct-87 16:29:55 EDT
From:      ron@TOPAZ.RUTGERS.EDU (Ron Natalie)
To:        comp.protocols.tcp-ip
Subject:   Re:  DDN Backbone bandwidth vs. speed

I suppose the biggest reason why campus LAN experts work with higher
bandwidths is because we can.  But it's really more reasonable to expect
that you need higher rates on local links because there is more traffic.
A quick check of the BRL gateway shows that most of the traffic never
leaves BRL (yet the gateway was still the sixth busiest MILNET host).
Nobody really expects earth-shattering response from the MILNET anymore
(right or wrong).  Most of the traffic is mail, which all happens in the
background.

DCA was probably left behind for a while in network planning because of
the overwhelming success of the INTERNET.   First, the amount of traffic
for any host has gone way up.  Seven years ago, when BRL brought up its
first ARPANET host, there were maybe a dozen people in the lab who used
the ARPANET services.  Now nearly a thousand people rely on electronic mail
daily.  Second, since IP became available five years ago, MILNET node traffic
was no longer limited by the traffic generated by a single node.  You
could have one machine front for an entire installation.  I'm not sure
DCA fully comprehended that.  I remember them once telling me that they
liked BRL because we only had one host on the net.  Of course, that host (actually
two) fronts for dozens of Ethernets, Proteon Ring Nets, Hyperchannels, and
even a six IMP ARPANET-clone.  On this are scads of workstations, super-minis,
and two CRAY's.

It's clear that the whole thing is over capacity.  Between gateways and
more and more users relying on network service, the old traffic estimates
are way out of line.  I'm not sure what can really be done though.  Trunks
could always be added, which is probably the most expedient.  56K is not bad
when you have enough connectivity.  The IMPs certainly won't deal with T1, but
more sophisticated switches such as the Butterflies are probably a long way
from the MILNET.  The new end-to-end protocol and the mailbridge upgrades have
not yet been fielded, let alone drastically changing the network topology.

Oh well.  I've got to go hook up another T1 line.

-Ron

-----------[000102][next][prev][last][first]----------------------------------------------------
Date:      Sat, 10-Oct-87 22:08:28 EDT
From:      rick@SEISMO.CSS.GOV (Rick Adams)
To:        comp.protocols.tcp-ip
Subject:   Re: RCTE

I have been able to negotiate a flat rate per hour (no kchar charges)
with Tymnet. They require some large monthly minimums, but are
quite willing to talk about a flat hourly charge.

I believe Compuserve has a flat monthly rate.  You pay by the
number of simultaneous connections that you want permitted.  It's something
like $750 per connection with a 6 connection minimum.

Flat rates are available. You have to know to ask for them and
be prepared to haggle over the eventual rate.

--rick

-----------[000103][next][prev][last][first]----------------------------------------------------
Date:      Sun, 11 Oct 87 00:18
From:      John Laws (on UK.MOD.RSRE) <LAWS%rsre.mod.uk@NSS.Cs.Ucl.AC.UK>
To:        CERF <@NSS.Cs.Ucl.AC.UK:CERF@a.isi.edu>
Cc:        tcp-ip <@NSS.Cs.Ucl.AC.UK:tcp-ip@sri-nic.arpa>
Subject:   Re: [John Robinson <jr@LF-SERVER-2.BBN.COM>: Re: TCP performanc...]
Vint,
 
I guess this reply should be to the John Robinson portion of your
msg - but my mailer is not that smart.
 
Concerning CRC checksum computation. More than 17 years ago I read a
paper on this topic which used matrix algebra to construct look-up
tables. You took your choice - one large table and a simple one-line
equation, or several smaller tables and more complicated equations.
I could still remember enough maths then to follow the algebra
through and did a single-table version for the CRC-16 checksum. I
used it in EIN and passed it around the European centres (there were
no chips for this then, and most folks were having to do it in one-off
hardware of the day). My code implementation for a CTL Modular-One
(a UK m/c with some interesting innovations that were to appear in the
DEC PDP-11) ran at 10 micro-secs per byte and only a little longer in
Coral-66. My only export of this to the US was to Ed Cain about 2/3
years ago.
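
For readers who have not seen the table-driven scheme Laws describes,
here is a minimal sketch in C of the single-table, byte-at-a-time
variant, using the common reflected-table convention for CRC-16
(polynomial x^16 + x^15 + x^2 + 1).  The names and conventions are
illustrative and are not taken from the Boudreau and Steen paper cited
below:

    #include <stddef.h>
    #include <stdint.h>

    static uint16_t crc16_table[256];

    /* Build the 256-entry table once; 0xA001 is the bit-reversed form
     * of the CRC-16 polynomial x^16 + x^15 + x^2 + 1. */
    static void crc16_init(void)
    {
        for (unsigned i = 0; i < 256; i++) {
            uint16_t r = (uint16_t)i;
            for (int b = 0; b < 8; b++)
                r = (r & 1) ? (r >> 1) ^ 0xA001 : r >> 1;
            crc16_table[i] = r;
        }
    }

    /* One table lookup and a couple of shifts per byte -- the "one
     * large table and a simple one-line equation" option. */
    static uint16_t crc16(uint16_t crc, const uint8_t *p, size_t len)
    {
        while (len--)
            crc = (crc >> 8) ^ crc16_table[(crc ^ *p++) & 0xff];
        return crc;
    }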
 
What does surprise me is the steady rain of papers on this topic which
often propose less effective solutions (and also get the algorithm
wrong) and never reference this old paper. Just a mo......
 
....found it -
 
P E Boudreau and R F Steen
(IBM Corp, Research Triangle Park, N.C.)
Cyclic Redundancy checking by program.
AFIPS Conf Proc Vol.39 1971, pp 9-15
 
and the latest I just found this week and in my briefcase waiting
to be read
 
Georgia Griffiths and G Carlyle Stones
The tea-leaf reader algorithm: an efficient implementation of
CRC-16 and CRC-32.
Comms of the ACM July 1987 Vol.30 No.7 pp 617-620
 
 
John
-----------[000104][next][prev][last][first]----------------------------------------------------
Date:      Sun, 11-Oct-87 03:18:00 EDT
From:      LAWS@rsre.mod.UK (John Laws, on UK.MOD.RSRE)
To:        comp.protocols.tcp-ip
Subject:   Re: [John Robinson <jr@LF-SERVER-2.BBN.COM>: Re: TCP performanc...]

Vint,
 
I guess this reply should be to the John Robinson portion of your
msg - but my mailer is not that smart.
 
Concerning CRC checksum computation. More than 17 years ago I read a
paper on this topic which used matrix algebra to construct look-up
tables. You took your choice - one large table and a simple one-line
equation, or several smaller tables and more complicated equations.
I could still remember enough maths then to follow the algebra
through and did a single-table version for the CRC-16 checksum. I
used it in EIN and passed it around the European centres (there were
no chips for this then, and most folks were having to do it in one-off
hardware of the day). My code implementation for a CTL Modular-One
(a UK m/c with some interesting innovations that were to appear in the
DEC PDP-11) ran at 10 micro-secs per byte and only a little longer in
Coral-66. My only export of this to the US was to Ed Cain about 2/3
years ago.
 
What does surprise me is the steady rain of papers on this topic which
often propose less effective solutions (and also get the algorithm
wrong) and never reference this old paper. Just a mo......
 
....found it -
 
P E Boudreau and R F Steen
(IBM Corp, Research Triangle Park, N.C.)
Cyclic Redundancy checking by program.
AFIPS Conf Proc Vol.39 1971, pp 9-15
 
and the latest I just found this week and in my briefcase waiting
to be read
 
Georgia Griffiths and G Carlyle Stones
The tea-leaf reader algorithm: an efficient implementation of
CRC-16 and CRC-32.
Comms of the ACM July 1987 Vol.30 No.7 pp 617-620
 
 
John

-----------[000105][next][prev][last][first]----------------------------------------------------
Date:      Mon, 12-Oct-87 08:37:00 EDT
From:      cerf@ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Workstations, SUPDUP, Windows...

Please blame me and not Mr. Mankins if you object to injection of this
message into the TCP-IP mailing list. His comments suggest that we 
ought to reconsider the function of the workstation and PC in relation
to mainframes and interconnecting (inter)nets. Where is the proper dividing
line between workstation and mainframe? How can context be satisfactorily
maintained between these two over connections of varying capacity and
delay?

Vint
------
	
Begin forwarded message
Received: from BFLY-VAX.BBN.COM by A.ISI.EDU with TCP; Sun 11 Oct 87 22:02:24-EDT
Date: 11 Oct 87 21:05:17 EDT (Sun)
From: David Mankins <dm@bfly-vax.bbn.com>
To: CERF@a.isi.edu, info-futures@bu-cs.bu.edu
Subject: SUPDUP, window systems, slow-links, few packets, fast response 
Return-Path: <dm@bfly-vax.bbn.com>
Sender: dm@bfly-vax.bbn.com


[Mr. Cerf, I'll follow JR's lead and let you decide if this is worth
posting to the TCP/IP mailing list.]

I hesitate to prolong the SUPDUP discussion on the TCP/IP list, nor do I
want to light the flames of window-systems religious wars.  However, in
discussing window systems, people have seemed to assume that a network
window protocol has to work by shipping bulk packages of pixels and
mouse movements around (as X v.10 did).  As has been pointed out, this
is mostly impractical on the ARPANET, or across a serial line (the
technologies of the past).

But there is an alternative approach: that taken by Sun's
NeWS, for example.  NeWS ships a high-level description of a window
across the network (in the case of NeWS, the description is a
PostScript program).  I don't know how it deals with mouse-motions and
things like menus, but I understand that those are handled by the
computer on your desk, too.  There are reported to be implementations
of NeWS on personal computers communicating with a workstation over a
serial line (the technologies of the impoverished present).

Well, not only the technologies of the impoverished present.  At last
year's X forum at MIT, one person asked, ``Is my Cray going to have to
pause in calculating a Fourier transform because someone moved a mouse
across their desk?''.  There are some very rich people who were
concerned about the micro-management of pixels pushed onto the host by
the early X.  In all fairness, I should remark that X v.11 gives the
small, cheap computer with the display lots of information that lets it
save the big, expensive computer (and the network that joins them)
from this kind of micro-management.

In a way, the NeWS solution is like Stallman's remote-editing
protocol: put the responsiveness and user-interface in the $700
hardware on the user's desk, and the computing and file-storage with
the big machine across the country (or across the hall).  (Stallman's
remote editing protocol description is very good, by the way; I second
the recommendations it has received on the TCP-IP list.  Time still
may not have passed it by.)

Another approach to the remote-editing/slow link problem was explored
by a DEC intern at Project Athena.  He looked into combining
Stallman's remote-editing protocol with an adaptive encoding
data-compression scheme for editing across slow links.  I think the
computation required to do the data-compression cost more than the
transmission time saved.  Either that or the summer ended before he
finished his work -- I don't recall anything coming out of it.

Yet another approach to the slow-link/cheap but sophisticated hardware
problem I've seen is this thing from Apple called ``Macworkstation''
or ``Machostconnection'' or something like that.  This was a
serial-line remote procedure call protocol that permitted your
mainframe to invoke Macintosh routines to do menu and icon
hunt-and-click computing -- either allowing you to debug your Mac
programs in the rich debugging environment of your mainframe, or
conceivably allowing your mainframe program to have a Macintosh
user-interface.  I think there are some third-party products like this
now (took 'em long enough).

(dm)

          --------------------
End forwarded message
		

-----------[000106][next][prev][last][first]----------------------------------------------------
Date:      Mon, 12-Oct-87 11:20:59 EDT
From:      Stevens@A.ISI.EDU (Jim Stevens)
To:        comp.protocols.tcp-ip
Subject:   :  Request for info on Packet Radio Networks

A good introduction to packet radio networks is the January 1987
Proceedings of the IEEE.  That issue is a special issue on Packet
Radio Networks.
-------

-----------[000107][next][prev][last][first]----------------------------------------------------
Date:      Mon, 12-Oct-87 12:02:18 EDT
From:      kent@DECWRL.DEC.COM
To:        comp.protocols.tcp-ip
Subject:   Re: supdup protocol

Some work was done a number of years ago (I can't find a reference, but
it was at Arizona) to investigate how to use a micro to do editing
across a 1200 baud link. They had fairly good results with doing
pre-fetch and post-write, applying essentially a "virtual editor" model
at the backend, with a "virtual window" underneath it that did
demand-paging across the link. Editing commands across the link
operated on lines, not characters, though that wasn't necessarily the
interface presented to the user.

Anyone else remember this? Can you give more details? They certainly
used a protocol that was lighter weight than TCP, but the idea is an
old one. With 9600 baud links, I would think we could achieve acceptable results.
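
As an illustration only (the Arizona code itself is not reproduced
anywhere in this thread), the "virtual window" idea might be sketched
in C as a small line cache that demand-fetches a block of lines over
the slow link when it misses.  The link_fetch_lines() routine, the
window size, and the line length below are hypothetical:

    #include <stddef.h>

    #define WINDOW   16               /* lines fetched per miss */
    #define LINE_MAX 256

    /* Assumed to be provided by the link code: ask the remote
     * line-oriented editor for `count` lines starting at line `first`;
     * returns how many lines actually came back. */
    extern int link_fetch_lines(int first, int count,
                                char buf[][LINE_MAX]);

    static struct {
        int  first;                   /* file line number held in text[0] */
        int  count;                   /* lines currently cached           */
        char text[WINDOW][LINE_MAX];
    } win;

    /* Return line n of the remote file, going to the link only on a miss. */
    const char *get_line(int n)
    {
        if (n < win.first || n >= win.first + win.count) {
            win.first = (n > WINDOW / 2) ? n - WINDOW / 2 : 1;
            win.count = link_fetch_lines(win.first, WINDOW, win.text);
            if (n >= win.first + win.count)
                return NULL;          /* past the end of the file */
        }
        return win.text[n - win.first];
    }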

chris

-----------[000108][next][prev][last][first]----------------------------------------------------
Date:      Mon, 12-Oct-87 12:40:13 EDT
From:      harry@rainy.atmos.washington.edu (Harry Edmon)
To:        comp.os.vms,comp.protocols.tcp-ip
Subject:   Problem with CMU IP/TCP version 6.2

I am running CMU IP/TCP version 6.2 on a Microvax II in a VaxCluster.
When I start up IP/TCP, it exits with the error:

 XE read error, EC = 000000A00

Can anyone help me fix this?
-- 
Harry Edmon                   UUCP:   harry@rainy.atmos.washington.edu or
(206) 543-0547                        uw-beaver!geops!rainy!harry
Department of Atmospheric Sciences
University of Washington        BITNET: HARRY@UWARITA

-----------[000109][next][prev][last][first]----------------------------------------------------
Date:      Tue, 13-Oct-87 08:43:24 EDT
From:      pogran@CCQ.BBN.COM (Ken Pogran)
To:        comp.protocols.tcp-ip
Subject:   Re:  NSFNET woe: causes and consequences

Dave,

Please note that I suggested using the PSN idle timer to recycle
VCs as a WORKAROUND; it certainly isn't the proper long-term
RESOLUTION.  That is more likely to be the rebuilding of the
driver that you suggest.

You mention that packets "get lost if traffic is flowing at the
time of the VC CLEAR."  If we use an IDLE timer to generate the
clear, wouldn't we mostly avoid the problem -- because we clear
when it's idle -- i.e., no traffic flowing?

Regards,
 Ken

-----------[000110][next][prev][last][first]----------------------------------------------------
Date:      Tue, 13 Oct 87 08:43:24 EDT
From:      Ken Pogran <pogran@ccq.bbn.com>
To:        Mills@louie.udel.edu
Cc:        Ken Pogran <pogran@ccq.bbn.com>, tcp-ip@sri-nic.arpa
Subject:   Re:  NSFNET woe: causes and consequences
Dave,

Please note that I suggested using the PSN idle timer to recycle
VCs as a WORKAROUND; it certainly isn't the proper long-term
RESOLUTION.  That is more likely to be the rebuilding of the
driver that you suggest.

You mention that packets "get lost if traffic is flowing at the
time of the VC CLEAR."  If we use an IDLE timer to generate the
clear, wouldn't we mostly avoid the problem -- because we clear
when it's idle -- i.e., no traffic flowing?

Regards,
 Ken
-----------[000111][next][prev][last][first]----------------------------------------------------
Date:      Tue, 13-Oct-87 10:08:00 EDT
From:      DCP@QUABBIN.SCRC.SYMBOLICS.COM (David C. Plummer)
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations


    Date: 8 Oct 87 16:05:47 GMT
    From: csustan!csun!psivax!nrcvax!ihm@LLL-WINKEN.ARPA  (Ian H. Merritt)

    >There is a fourth way that we (Symbolics) have done which you did not
    >mention:
    >
    >(a) Pick a compile-time unrolling factor, usually a power of 2, say 16 = 2^4.
    >(b) Divide the data length by the unrolling factor, obtaining a quotient
    >    and remainder.  When the unrolling factor is a power of two, the
    >    quotient is a shift and the remainder is a logical AND.
    >(c) Write an unrolled loop whose length is the unrolling factor.  Execute
    >    this loop <quotient> times.
    >(d) Write an un-unrolled loop (whose length is therefore 1).  Execute
    >    this loop <remainder> times.

    Or if you have memory to burn (which is fast becoming a common
    condition), just unroll the loop for the maximum condition and branch
    into it at the appropriate point to process the length of the actual
    packet.

First of all, that's 65535 octets for TCP.  Second, I believe that was
one of the three techniques mentioned by the person to whom I was
replying.  Third, we (Symbolics) can't do that without playing some
really nasty games with the compiler.  You see, we're of the opinion
that assembly language is a thing of the past, and there aren't any good
Lisp constructs for the kind of computed GO necessary to pull this trick
off.  I can't think of any good tricks in FORTRAN, either.  I'm not
familiar enough with Pascal, Ada, or C to know if those higher-level
languages allow such things.  Fourth, depending on your CPU architecture,
there may be ideal unrolling constants which would keep the unrolled loop
inside an instruction prefetch buffer; complete unrolling would actually
be a degradation.
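
In a language like C, which does make this easy to write directly, the
quotient/remainder scheme of steps (a)-(d) above might look like the
following sketch for a 16-bit one's-complement sum; the unrolling
factor and the names are illustrative:

    #include <stddef.h>
    #include <stdint.h>

    #define UNROLL 16                  /* compile-time unrolling factor */

    /* Partial one's-complement sum over `nwords` 16-bit words, using an
     * unrolled inner loop run `quotient` times plus a one-word loop run
     * `remainder` times.  The caller folds the carries and complements. */
    uint32_t cksum_partial(const uint16_t *p, size_t nwords)
    {
        uint32_t sum = 0;
        size_t q = nwords / UNROLL;    /* a shift when UNROLL is a power of 2 */
        size_t r = nwords % UNROLL;    /* a mask when UNROLL is a power of 2  */

        while (q--) {
            sum += p[0];  sum += p[1];  sum += p[2];  sum += p[3];
            sum += p[4];  sum += p[5];  sum += p[6];  sum += p[7];
            sum += p[8];  sum += p[9];  sum += p[10]; sum += p[11];
            sum += p[12]; sum += p[13]; sum += p[14]; sum += p[15];
            p += UNROLL;
        }
        while (r--)
            sum += *p++;

        return sum;
    }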

-----------[000112][next][prev][last][first]----------------------------------------------------
Date:      Tue, 13 Oct 87 13:12:37 -0400
From:      dan@WILMA.BBN.COM
To:        "Christopher A. Kent" <kent@SONORA.DEC.COM>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: supdup protocol and local editing
> Some work was done a number of years ago (...  at Arizona) to
> investigate how to use a micro to do editing across a 1200 baud link.
> They ...  [applied] a "virtual editor" model at the backend ...
> Editing commands across the link operated on lines, not characters,
> though that wasn't necessarily the interface presented to the user.

I don't know the work at Arizona you refer to, but I did something
very much like this back when I had an ECD MicroMind micro in my
office in 1979.  The MicroMind had a "terminal program" which was also
a full-screen editor.  To edit a file, I sent over the lines I wanted
to change, edited them for awhile, then sent them back.  The
"protocol" across the line "operated on lines, not characters," and
was about as lightweight as you could get: it was the Unix "ed" line
editor command set!  To edit a file, I would invoke ed on it, print a
range of lines, edit them locally, then send a change command to put
them back in the ed buffer.  Keyboard macros in the MicroMind made it
all quick and easy.  Like other people, I was very willing to put up
with a few seconds' wait at the beginning and end of my editing to get
an instantaneous response to each keystroke (and other advantages,
such as a real meta key).  This was done at 2400 baud, because Unix
couldn't receive characters any faster without flow control (which
the MicroMind didn't have).

Sending over integral numbers of lines was just right, since that's
what your local editor wants to deal with anyway.  It also means you can
easily handle having two different representations for "lines"
(records vs. streams, crlf vs. lf, etc.)  on the two machines.  Also,
if the backend "editor" can mark the beginning and end of each region
sent to the local micro in a way which does not change as lines are
added or deleted outside each region (which ed had), then you can
trivially have independent windows on the same file at the same time
with virtually no local "intelligence".

	Dan Franklin
-----------[000113][next][prev][last][first]----------------------------------------------------
Date:      Tue, 13-Oct-87 11:34:43 EDT
From:      jh@tut.fi (Juha Hein{nen)
To:        comp.protocols.tcp-ip
Subject:   slip for vms?

I'm looking for SLIP (serial line IP) for VMS.  Wollongong doesn't
seem to have one.  Does the CMU/Tek TCP/IP package have one, or does
someone know of any other source?

-- 
	Juha Heinanen
	Tampere Univ. of Technology
	Finland
	jh@tut.fi (Internet), tut!jh (UUCP)

-----------[000114][next][prev][last][first]----------------------------------------------------
Date:      Tue, 13-Oct-87 12:59:05 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  NSFNET woe: causes and consequences

Ken,

My experience is that it is not unlikely that a packet happens to be in transit
from one end when the other end decides to close. While this would not
occur too often, it would occur often enough to dominate losses due to
other causes.

Dave

-----------[000115][next][prev][last][first]----------------------------------------------------
Date:      13 Oct 1987 17:53-PDT
From:      STJOHNS@SRI-NIC.ARPA
To:        AFDDN.TCP-IP@GUNTER-ADAM.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Can of worms revisited
Darrel,  billing  by  packet is based on the X.25 packet, not the
subnet tinygram.  Mike
-----------[000116][next][prev][last][first]----------------------------------------------------
Date:      Tue, 13-Oct-87 16:11:45 EDT
From:      barmar@think.COM (Barry Margolin)
To:        comp.protocols.tcp-ip
Subject:   Re: supdup protocol

In article <8710121554.AA07611@armagnac.DEC.COM> "Christopher A. Kent" <kent@sonora.dec.com> writes:
>Some work was done a number of years ago (I can't find a reference, but
>it was at Arizona) to investigate how to use a micro to do editing
>across a 1200 baud link.

I don't think this is the exact paper you are talking about, but it is
similar:

Judd, J. Stephen, Corinne J. Letilley, "Memory and Communication
Tradeoffs During Screen-Editor Sessions", Univ of Saskatchewan, August
16, 1984.

Abstract:

   Screen Editor sessions typically make heavy use of the
communication channel between processor and display screen.  This is
because relatively simple and quick operations like window movements
can cause the transfer of 1000 or more characters.  To get a
quantitative measure of communication requirements, we need to
determine how people use such systems.  We accumulated a
representative sample of user activity by tracing the movements of the
cursor during 1500 editing sessions.  Some information about these
sessions is presented.

   To make effective use of an interactive screen editor, the two-way
channel between the computer and the screen terminal must have a
fairly high baud rate.  By simulating the observed sessions at various
baud rates, we measured the amount of time lost during such a session
if the baud rate is low.  Then we estimated the increase in
performance afforded by keeping a buffer of text lines local to the
terminal.  Resultant graphs are suitable for comparing the performance
of terminals with various memory sizes and baud rates.

   We propose a terminal that takes an active role in the management of
text during editing sessions and we estimate its impact on CPU demands
in the host.  This work has implications for the design of terminal
hardware and screen-editor software.

---
Barry Margolin
Thinking Machines Corp.

barmar@think.com
seismo!think!barmar

-----------[000117][next][prev][last][first]----------------------------------------------------
Date:      Tue, 13-Oct-87 17:26:15 EDT
From:      lars@ACC-SB-UNIX.ARPA (Lars Poulsen)
To:        comp.protocols.tcp-ip
Subject:   PSN 7.0 and NSFnet Gateways

With the deployment of PSN 7.0 now under  way,  a  few  X.25
sites  have  reported  performance  problems,  and  more may
follow as the network changes to the new end-to-end  module.
Since  most  of  the  X.25  sites  are  connected  using ACC
products, any such problems are of concern to ACC.

It is our understanding that the problems seen so far are of
two kinds:

(1)  Throughput drops for some very few hosts with very high
     traffic  load.  This  has  been  attributed to a buffer
     shortage  in  the  CMU-14  node  and  an   error   (now
     corrected)  in  the "routing patch" code.  This problem
     will disappear when the network goes to PSN 7.1.

(2)  A shortage of virtual circuits between X.25  hosts  and
     the PSN.  This seems to only affect a few gateway hosts
     with many EGP peers.  BBN has suggested more aggressive
     reclaiming  of  idle virtual circuits; this can be done
     either in the PSN or the host code. However, it may not
     help if the gateway's routing daemon polls its peers at
     fixed intervals: all the virtual circuits will then  be
     needed at the same time.

The goal of this conversion is to provide higher throughput,
and  this  will be to everyone's advantage. Individual sites
that suffer the opposite effect may want  to  contact  ACC's
customer  service  to  enquire  about  the availability of a
product update which addresses  the  throughput  issue  from
another angle by using larger packet sizes and larger packet
windows.  Like all of our product updates, this is  free  to
customers  under  service  contract,  and  available  for  a
nominal fee to others.

Lars Poulsen, ACC Customer Service
SERVICE@ACC-SB-UNIX.ARPA

-----------[000118][next][prev][last][first]----------------------------------------------------
Date:      Tue, 13-Oct-87 19:18:50 EDT
From:      roger@celtics.UUCP (Roger B.A. Klorese)
To:        comp.protocols.tcp-ip,comp.unix.wizards,comp.dcom.lans
Subject:   Ethernet - Hyperchannel Gateway

Does anyone know of a product providing an Ethernet-to-Hyperchannel
gateway?  I'm looking for a "black box" to sit on an ethernet and
pass TCP-IP and its friends in both directions.
-- 
 ///==\\   (Your message here...)
///        Roger B.A. Klorese, CELERITY (Northeast Area)
\\\        40 Speen St., Framingham, MA 01701  +1 617 872-1552
 \\\==//   celtics!roger@necntc.nec.com - necntc!celtics!roger

-----------[000119][next][prev][last][first]----------------------------------------------------
Date:      Tue, 13-Oct-87 20:53:00 EDT
From:      STJOHNS@SRI-NIC.ARPA
To:        comp.protocols.tcp-ip
Subject:   Re: Can of worms revisited

Darrel,  billing  by  packet is based on the X.25 packet, not the
subnet tinygram.  Mike

-----------[000120][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 03:40:23 EDT
From:      dan@WILMA.BBN.COM
To:        comp.protocols.tcp-ip
Subject:   Re: supdup protocol and local editing

> Some work was done a number of years ago (...  at Arizona) to
> investigate how to use a micro to do editing across a 1200 baud link.
> They ...  [applied] a "virtual editor" model at the backend ...
> Editing commands across the link operated on lines, not characters,
> though that wasn't necessarily the interface presented to the user.

I don't know the work at Arizona you refer to, but I did something
very much like this back when I had an ECD MicroMind micro in my
office in 1979.  The MicroMind had a "terminal program" which was also
a full-screen editor.  To edit a file, I sent over the lines I wanted
to change, edited them for awhile, then sent them back.  The
"protocol" across the line "operated on lines, not characters," and
was about as lightweight as you could get: it was the Unix "ed" line
editor command set!  To edit a file, I would invoke ed on it, print a
range of lines, edit them locally, then send a change command to put
them back in the ed buffer.  Keyboard macros in the MicroMind made it
all quick and easy.  Like other people, I was very willing to put up
with a few seconds' wait at the beginning and end of my editing to get
an instantaneous response to each keystroke (and other advantages,
such as a real meta key).  This was done at 2400 baud, because Unix
couldn't receive characters any faster without flow control (which
the MicroMind didn't have).

Sending over integral numbers of lines was just right, since that's
what your local editor wants to deal with anyway.  It also means you can
easily handle having two different representations for "lines"
(records vs. streams, crlf vs. lf, etc.)  on the two machines.  Also,
if the backend "editor" can mark the beginning and end of each region
sent to the local micro in a way which does not change as lines are
added or deleted outside each region (which ed had), then you can
trivially have independent windows on the same file at the same time
with virtually no local "intelligence".

	Dan Franklin

-----------[000121][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 04:57:49 EDT
From:      kent@DECWRL.DEC.COM
To:        comp.protocols.tcp-ip
Subject:   Re: supdup protocol and local editing

Ah. I was off-base -- the work was done at Rutgers, not Arizona. The
full reference is DCS-TR-110, "Software Design Issues in the Architecture and
Implementation of Distributed Text Editors", Robert N. Goldberg, Dept.
of Computer Science, Hill Center for the Mathematical Sciences, Busch
Campus, Rutgers University, New Brunswick, NJ 08903.

I don't have a copy, so I can't tell you more, but I'll repeat what I
recall of the basic premise: the display/user interface portion ran on
a PC and accessed the file through a line editor interface, across a
1200 baud line. The PC viewed the whole file in a manner reminiscent of
virtual memory. The author developed a concept he called "optimal
pre-fetch", based on keystroke analysis of various editors, to allow
the PC half to minimize the time the user spent waiting for lines to be
fetched across the link.

Seems like a good place to start for someone looking to
implement this sort of system. I believe that a simple editor makes a
better interface for this sort of work than a full-blown remote file
system, but it means that you're implementing a fairly special-purpose
interface. In this day of faster communication, it may no longer be
worth the effort.

chris

-----------[000122][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 09:28:41 EDT
From:      sas1@sphinx.uchicago.edu (Stuart Schmukler)
To:        comp.protocols.tcp-ip
Subject:   Re: supdup protocol

In article <8710121554.AA07611@armagnac.DEC.COM> "Christopher A. Kent" <kent@sonora.dec.com> writes:
>Some work was done a number of years ago (I can't find a reference, but
>it was at Arizona) to investigate how to use a micro to do editing
>across a 1200 baud link. 

I think that I have the references you are talking about; they were written
by Christopher W. Fraser of The University of Arizona and others. They are:

C.W. Fraser,"A Generalized Text Editor", Communications of the ACM, March
1980, Volume 23, Number 3.

C.W. Fraser, "A Compact, Portable CRT-based Text editor", Software-Practice
and Experience, Vol. 9, 121-125 (1979).

Cary A. Coutant and C.W. Fraser, "A Device for Display Terminals",
Software-Practice and Experience, Vol. 10, 183-187 (1980).

And a report from their department:

TR 79-7a, C.W. Fraser, "The Display Editor S"

They concluded that the links available at the time (UUCP 1200 baud dialups)
were too slow and error-prone for effective use.  I got a copy of the software
from them on a Ratfor distribution tape.  They may still have it available
for copying from their archives.  [I hope so, because that tape has vanished
over the years.]

SaS

-----------[000123][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 10:51:01 EDT
From:      markl@ALLSPICE.LCS.MIT.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: Workstations, SUPDUP, Windows...

I've been working on mainframe-processing-vs-workstation-processing
issues for a couple of years now, in the form of the Pcmail
distributed mail system.  Everyone receives their mail at a central
point and reads and modifies it over the network at a workstation.
The question is, how much mail processing is done on the mainframe and
how much at the workstation?  This becomes especially interesting if
your workstations have wildly differing capabilities and some are
unable to perform sophisticated operations like searches or sorts on
their own.  A related problem is how the workstation manages to
communicate efficiently with the mainframe over a 1200 BPS network
connection.  Dave Clark and I spent a fair amount of time designing a
set of operations that minimised packet traffic over slow links, and
placed a minimal computing burden on the mainframe, while not placing
too much of a computing burden on resource-poor workstations.  

markl

Internet: markl@ptt.lcs.mit.edu

Mark L. Lambert
MIT Laboratory for Computer Science
Distributed Systems Group

----------

-----------[000124][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 11:59:57 EDT
From:      lars@ACC-SB-UNIX.ARPA (Lars Poulsen)
To:        comp.protocols.tcp-ip
Subject:   Re:  X.25 problems

> Date: 14 Oct 1987 05:13-EDT
> Subject: X.25 problems
> From: CERF@A.ISI.EDU
> To: service@ACC-SB-UNIX.ARPA
> 
> I don't understand why the introduction of release 7.0 should
> exacerbate X.25 VC shortages - the limitation is in the ACC
> software, isn't it (maximum VCs set at 64?) and this would be
> a bottleneck regardless of the IMP release (7 or its predecessor)?
> Why would these problems surface only with release 7?
> 
> Vint

Vint,
	The limitation is actually in firmware rather than in software.
We run the entire packet level on a 68000 on our X.25 board. And yes,
the limit is 64.
	Our device driver closes virtual circuits after (currently 10)
minutes of idle time. Since the timer value is in a source code #define
and we provide source code, any system manager can tighten this, and free
up VC's after two minutes, if they desire.
	The timer is set fairly long; we at one time closed circuits after
one idle minute, only to find that we would be thrashing VC's: Under certain
network conditions, the packet round trip time could go up to 80 seconds.
Under pathological conditions (buffer shortage in the PSN) we have even seen
30 seconds round trip time for an ICMP echo addressed to the host itself.
This can only be explained by the X.25 equivalent of 1822 "blocking".
We are REALLY looking forward to the new End-to-end module curing this
problem.
	The lack of virtual circuits usually becomes a problem when
the network becomes pathologically slow. We speculate that this is
because transfers that normally complete in a couple of minutes
may take up to a half hour under these conditions, and thus there is
much more overlap.
	The transition release PSN7.0 has more code in it than either
PSN6.0 or PSN7.1 ; this means fewer buffers. This tends to provoke
the situation described above.
	Finally, I should mention that I have seen that hosts that use
many virtual circuits tend to have a few of these with bursts of real
traffic, such as you would expect for "normal" TCP use (SMTP, FTP,
TELNET) and a large number of VCs with very short bursts (< 5 packets)
with large intervals (one burst every 15 minutes or so). Invariably,
these VCs are to GATEWAYS, which is why I speculated that this might
be EGP traffic (I have never really read up on routing protocols).
I am told that each gateway peers with no more than 3 core EGP-mumblers.
I am now speculating that maybe some gateway daemons like to ping each
gateway that they hear about to make sure it is reachable, but this
is only a guess.
	I hope this helps you understand why we are concerned about
the transition.

	/ Lars Poulsen
	  ACC Customer Service
	  Service@ACC-SB-UNIX.ARPA

-----------[000125][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 12:25:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  X.25 problems

Lars,

thanks for the explanation - what you say means that the
problem could arise in earlier releases, but is exacerbated by
the shortage of buffers. Memory again! In this day and age, one
wishes that memory problems would be a thing of the past. Do you
know how much memory complement is carried by each C30E IMP on
the ARPANET and/or MILNET?

Vint

-----------[000126][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 13:45:43 EDT
From:      pogran@CCQ.BBN.COM (Ken Pogran)
To:        comp.protocols.tcp-ip
Subject:   Re:  X.25 problems

Vint,

C/30Es in the ARPANET and MILNET have 256KW (1/2 Megabyte) of
memory.  C/300s, just beginning to be introduced at particularly
"busy" nodes, have twice that.  It's certainly a far cry from the
"old days" of Honeywell 516s and 316s; then again, there's a lot
more functionality in PSNs these days, and each PSN typically
serves a larger number of host interfaces than in the past.

By the way, I second Lars Poulsen's comment about "REALLY looking
forward to the new End-to-End module" alleviating some of the
X.25 performance problems that have been seen.  In PSN 7.0,
interoperability between X.25-connected and 1822-connected hosts
is "built in" rather than "grafted on," and we should see a good
bit of improvement.  Nothing that can, in and of itself, make it
seem like the network has infinitely more transmission resources,
but ...

Finally, everyone should understand that all of the changes and
improvements, to both the network and its hosts, are being
introduced into an environment of ever-increasing traffic and
numbers of gateways.  So, when changes are made and they settle
down after initial problems are corrected, etc., we must
remember in making "before" and "after" performance
comparisons that the load being imposed upon the network is
higher "after" than it was "before"!

Ken Pogran

-----------[000127][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14 Oct 87 13:45:43 EDT
From:      Ken Pogran <pogran@ccq.bbn.com>
To:        CERF@a.isi.edu
Cc:        lars@acc-sb-unix.arpa, service@acc-sb-unix.arpa, arpaupgrade@bbn.com, gary@acc-sb-unix.arpa, tcp-ip@sri-nic.arpa, pogran@ccq.bbn.com
Subject:   Re:  X.25 problems
Vint,

C/30Es in the ARPANET and MILNET have 256KW (1/2 Megabyte) of
memory.  C/300s, just beginning to be introduced at particularly
"busy" nodes, have twice that.  It's certainly a far cry from the
"old days" of Honeywell 516s and 316s; then again, there's a lot
more functionality in PSNs these days, and each PSN typically
serves a larger number of host interfaces than in the past.

By the way, I second Lars Poulsen's comment about "REALLY looking
forward to the new End-to-End module" alleviating some of the
X.25 performance problems that have been seen.  In PSN 7.0,
interoperability between X.25-connected and 1822-connected hosts
is "built in" rather than "grafted on," and we should see a good
bit of improvement.  Nothing that can, in and of itself, make it
seem like the network has infinitely more transmission resources,
but ...

Finally, everyone should understand that all of the changes and
improvements, to both the network and its hosts, are being
introduced into an environment of ever-increasing traffic and
numbers of gateways.  So, when changes are made and they settle
down after initial problems are corrected, etc., we must
remember in making "before" and "after" performance
comparisons that the load being imposed upon the network is
higher "after" than it was "before"!

Ken Pogran
-----------[000128][next][prev][last][first]----------------------------------------------------
Date:      14 Oct 1987 18:49-PDT
From:      STJOHNS@SRI-NIC.ARPA
To:        BILLW@MATHOM.CISCO.COM
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Authentication
Bill,  I've  been trying to get out an RFC detailing the protocol
we use between the TACs and the TACACS boxes, but I haven't had a
chance  to edit it and format it properly.  I can send you a copy
(paper) if you want to take  a  look  at  it.   Sorry,  it  isn't
wrapped in anything resembling encryption.  Mike
-----------[000129][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 15:56:28 EDT
From:      shor@sphinx.uchicago.edu (Melinda Shore)
To:        comp.protocols.tcp-ip,comp.unix.wizards,comp.dcom.lans
Subject:   Re: Ethernet - Hyperchannel Gateway

In article <1822@celtics.UUCP> roger@celtics.UUCP (Roger B.A. Klorese) writes:
>Does anyone know of a product providing an Ethernet-to-Hyperchannel
>gateway?  I'm looking for a "black box" to sit on an ethernet and
>pass TCP-IP and its friends in both directions.

No offense, but ha, ha, ha.  We're in the same position, since we
need to run TCP/IP on our Cray and we get to the machine through the
Hyperchannel, and the whole thing has been pretty aggravating.  Don't
bother talking with NSC, they don't even have their IP-able driver
in alpha-test yet.

It turns out that most of the people in the world who do this use a
Sun.  John Lekashman at NASA-Ames has modified 4.3 if_hy.c so that it
actually works on a macro-Vax (Unibus), and I've almost finished
hacking that up to work with a PI12 on a microVax.  John's driver is
available for anonymous ftp from orville.arpa.  Contact me if you want
the microVax version.

As far as I know, nobody has come up with any kind of standalone
bridge.
-- 
Melinda Shore                                   ..!hao!oddjob!sphinx!shor
Pittsburgh Supercomputing Center                     shore@morgul.psc.edu

-----------[000130][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 15:58:42 EDT
From:      bae@LLL-TIS.ARPA (Hwa Jin Bae)
To:        comp.protocols.tcp-ip
Subject:   Re: Fw: request for peer review

It sounds like a neat idea (Ollie ?).  I especially like the idea of dynamic 
node address resolution.  BIND is just too primitive, not to mention its
cryptic implementation.  I wonder how your Landmark Routing will turn out
in conjunction with some of the latest high-level name binding techniques used
in RPC-based systems (the most recent one I know of being NCS from Apollo).
The NCS Location Broker may be able to do a better job if you finish the
implementation of your stuff, I think....

May I have the honor of reading one of the chapters of your article?
I am very interested.
Best wishes.

Hwajin

-----------[000131][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14 Oct 87 18:44:36 -0400
From:      dan@WILMA.BBN.COM
To:        Barry Shein <bzs@BU-CS.BU.EDU>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Predictable network access prices (was: RCTE)
> ... one has to be able to
> show that the economy of scale is working in general and, as I
> believe, that the per-quantum costs would end up costing the smaller
> user more ...

Not to mention that the machinery for counting, and accounting for,
packets can itself cost serious money.  Nicholas Johnson observed some
years ago that half the cost of a long-distance telephone call was in
billing you for that call.  (Nicholas Johnson was an FCC Commissioner
and so presumably in a position to know.)

	Dan
-----------[000132][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14 Oct 87 16:39:00 edt
From:      padwa%harvsc3@harvard.harvard.edu (Danny Padwa)
To:        "tcp-ip@sri-nic.arpa"%hucsc@harvard.harvard.edu, padwa@harvard.harvard.edu
Subject:   supdup protocol and local editing
I recently worked on a distributed screen editor at the National Magnetic
Fusion Energy Computer Center in Livermore, California. Their terminal
emulator package has a screen editor that distributes the work between the
IBM-PC (terminal) and the CRAY (host). This allows responsiveness from the
PC and the avoidance of long uploads to/from the CRAY. If you are interested in
such a package, you may want to get in touch with them.
		Danny Padwa
		PADWA@HARVSC3.HARVARD.EDU

-----------[000133][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 16:59:35 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  X.25 problems

Lars,

Non-core gateway daemons, the only ones likely to use ACC interfaces, do
NOT ping gateways other than the three corespeakers and then only with
EGP, which has intrinsic provisions to limit the polling frequency.
The only other polling-type protocol likely to appear over ARPANET paths
is the Network Time Protocol (NTP) spoken by a few gateways, but not
PSC to my knowledge. Therefore, I must conclude that the vast number
of ARPANET host pairs with one end at PSC is due to
normal traffic spasms. Note that the so-called extra-hop problem due
to incomplete knowledge at the corespeakers can create a non-reciprocal
situation where two circuits, not one, are required between certain
host pairs.

What I am not hearing in your explanation of how the ACC interface handles
VC allocation is what happens when all VCs are fully allocated. I have
heard from PSC staff that the driver complains in messages to the operator
when an attempt is made to open another VC past the 64-circuit max. I
would assume the polite driver would keep an activity table with entries for
each active VC and clear the oldest/least-used to make room for the next
one. I would assume this would also happen if an incoming call request
appeared from the network and all VCs were committed. Further, I would
assume both the PSN and ACC would do this kind of thing, no matter what
timeouts were chosen.
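
The policy assumed here is essentially least-recently-used reclamation
of the virtual-circuit table.  A minimal sketch of that idea follows;
the table layout and the vc_clear() routine are hypothetical and are
not ACC's driver code:

    #include <time.h>

    #define MAX_VC 64                  /* the 64-circuit limit mentioned above */

    struct vc {
        int    in_use;
        int    remote;                 /* far-end X.25 address, simplified */
        time_t last_activity;
    };

    static struct vc vc_tab[MAX_VC];

    extern void vc_clear(struct vc *); /* assumed: sends an X.25 CLEAR */

    /* Find a circuit for a new call; if the table is full, clear the
     * oldest/least-used circuit rather than refusing the call. */
    struct vc *vc_allocate(int remote)
    {
        struct vc *victim = &vc_tab[0];

        for (int i = 0; i < MAX_VC; i++) {
            if (!vc_tab[i].in_use) {
                victim = &vc_tab[i];
                goto claim;
            }
            if (vc_tab[i].last_activity < victim->last_activity)
                victim = &vc_tab[i];
        }
        vc_clear(victim);              /* reclaim the least recently used VC */
    claim:
        victim->in_use = 1;
        victim->remote = remote;
        victim->last_activity = time(NULL);
        return victim;
    }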

Dave

-----------[000134][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 17:09:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  X.25 problems

Ken,

if the improvements are not keeping up with the load they are the wrong
improvements!

Vint

-----------[000135][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 19:35:53 EDT
From:      wade@violet.berkeley.edu (Wade Stebbings)
To:        comp.protocols.tcp-ip,comp.unix.wizards,comp.dcom.lans
Subject:   Re: Ethernet - Hyperchannel Gateway

> Does anyone know of a product providing an Ethernet-to-Hyperchannel
> gateway?  I'm looking for a "black box" to sit on an ethernet and
> pass TCP-IP and its friends in both directions.

We dedicate a Vax 750 for this purpose, but I hear that Network
Systems is going to offer an ethernet adapter soon.  Unfortunately,
this is all I know about it.  Check with your local NSC rep.

	Wade Stebbings
	CFC -- UC Berkeley
	wade@violet.berkeley.edu

-----------[000136][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 21:18:14 EDT
From:      BILLW@MATHOM.CISCO.COM (William Westfield)
To:        comp.protocols.tcp-ip
Subject:   Authentication

Is there a spec for a general purpose authentication service?
What I want is somewhere I can send a UDP datagram containing things
like my host name, my user name, my password, and perhaps other info
(all somewhat encrypted, hopefully), and get back a response that says
yes or no.  (Note that this is different from the TCP-level authentication
server described in RFC 931...)
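
Purely as an illustration of the kind of exchange being asked for (no
such standard existed; every field name and size below is made up),
the request and reply datagrams might be laid out something like this:

    #include <stdint.h>

    /* Hypothetical UDP payload sent to the authentication server. */
    struct auth_request {
        char host[32];        /* requesting host name                 */
        char user[32];        /* user name                            */
        char password[32];    /* "somewhat encrypted", one would hope */
    };

    /* Hypothetical reply: a single yes/no octet. */
    struct auth_reply {
        uint8_t ok;           /* 1 = yes, 0 = no */
    };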

Thanks
Bill Westfield
cisco Systems.

-----------[000137][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 21:49:00 EDT
From:      STJOHNS@SRI-NIC.ARPA
To:        comp.protocols.tcp-ip
Subject:   Re: Authentication

Bill,  I've  been trying to get out an RFC detailing the protocol
we use between the TACs and the TACACS boxes, but I haven't had a
chance  to edit it and format it properly.  I can send you a copy
(paper) if you want to take  a  look  at  it.   Sorry,  it  isn't
wrapped in anything resembling encryption.  Mike

-----------[000138][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14-Oct-87 22:04:08 EDT
From:      postel@VENERA.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   A Standard for the Transmission of IP Datagrams over IEEE 802 Networks


Your comments please

--jon.

Network Working Group                                          J. Postel
Request for Comments:  DRAFT                                 J. Reynolds
                                                                     ISI
Obsoletes: RFC-948                                             mmmm 1987



 A Standard for the Transmission of IP Datagrams over IEEE 802 Networks


Status of this Memo

   This RFC specifies a standard method of encapsulating the Internet
   Protocol (IP) [1] datagrams and Address Resolution Protocol (ARP) [2]
   requests and replies on IEEE 802 Networks.  This RFC specifies a
   protocol standard for the Internet community.  Distribution of this
   memo is unlimited.

Acknowledgment

   This memo would not exist without the very significant contributions
   of Drew Perkins of Carnegie Mellon University and Jacob Rekhter of
   the T.J. Watson Research Center, IBM Corporation.

Introduction

   The goal of this specification is to have implementations for
   transmitting IP datagrams and ARP requests and replies be compatible
   and interwork.  To achieve this it may be necessary in a few cases to
   limit the use that IP datagrams make of the capabilities of a
   particular IEEE 802 network.

   This memo describes the use of IP and ARP on three types of networks.
   It is not necessary that the use of IP and ARP be consistent across
   all three types of networks, only that it be consistent within each
   type.

   The IEEE 802 specifications define a family of standards for Local
   Area Networks (LANs) that deal with the Physical and Data Link Layers
   as defined by the ISO Open System Interconnection Reference Model
   (ISO/OSI).  Several Physical Layer standards (802.3, 802.4, and
   802.5) [3,4,5] and one Data Link Layer Standard (802.2) [6] have been
   defined.  The IEEE Physical Layer standards specify the ISO/OSI
   Physical Layer and the Media Access Control Sublayer of the ISO/OSI
   Data Link Layer.  The 802.2 Data Link Layer standard specifies the
   Logical Link Control Sublayer of the ISO/OSI Data Link Layer.

   All communication is performed using 802.2 type 1 communication.  The
   802.2 type 2 communication is not used.

   The 802.x networks may have 16-bit or 48-bit physical addresses.

   It is the goal of this memo to specify enough about the use of IP and
   ARP on each type of network such that:

      (1) all equipment using IP or ARP on 802.3 networks will
      interoperate,

      (2) all equipment using IP or ARP on 802.4 networks will
      interoperate,

      (3) all equipment using IP or ARP on 802.5 networks will
      interoperate.

   Of course, the goal of IP is interoperability between computers
   attached to different networks, when those networks are
   interconnected via an IP gateway [8].

Description

   IEEE 802 networks may be used as IP networks of any class (A, B, or
   C).  These systems use two Link Service Access Point (LSAP) fields of
   the LLC header in much the same way the ARPANET uses the "link"
   field.  Further, there is an extension of the LLC header called the
   Sub-Network Access Protocol (SNAP).

   IP datagrams are sent on 802 networks encapsulated within the 802.2
   LLC and SNAP data link layers, and the 802.3, 802.4, or 802.5
   physical networks layers.  The SNAP is used with an Organization Code
   indicating that the following 16 bits specify the EtherType code (as
   listed in Assigned Numbers [7]).

   Note that the 802.3 standard specifies a transmission rate of from 1
   to 20 megabit/second, the 802.4 standard specifies 1, 5, and 10
   megabit/second, and the 802.5 standard specifies 1 and 4
   megabit/second.  The typical transmission rates used are 10
   megabit/second (802.3) or 4 megabit/second (802.5).  However, this
   specification for the transmission of IP Datagrams does not depend on
   the transmission rate.

Header Format

   ...--------+--------+--------+
              MAC Header        |                        802.{3/4/5} MAC
   ...--------+--------+--------+

   +--------+--------+--------+
   | DSAP=K1| SSAP=K1| Control|                                802.2 LLC
   +--------+--------+--------+

   +--------+--------+---------+--------+--------+
   |Protocol Id or Org Code =K2|    EtherType    |            802.2 SNAP
   +--------+--------+---------+--------+--------+

   The total length of the LLC  Header and the SNAP header is 8-octets,
   making the 802.2 protocol overhead come out on a nice boundary.

   The K1 value is 170 (decimal).

   The K2 value is 0 (zero).

   The control value is 3 (for Unnumbered Information).
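
   As a concrete rendering of the figure and constants above, a sketch
   in C of the 8-octet LLC+SNAP header that precedes an IP datagram is
   shown below; the struct and function names are illustrative, and the
   EtherType values are those listed in Assigned Numbers (IP = 2048,
   ARP = 2054).

      #include <stdint.h>
      #include <string.h>

      /* 802.2 LLC + SNAP header: DSAP = SSAP = K1 = 170 (0xAA),
       * control = 3 (UI), 3-octet Organization Code = K2 = 0,
       * then the 16-bit EtherType. */
      struct llc_snap {
          uint8_t dsap;             /* 0xAA */
          uint8_t ssap;             /* 0xAA */
          uint8_t control;          /* 0x03, Unnumbered Information */
          uint8_t org_code[3];      /* 0, 0, 0 */
          uint8_t ethertype[2];     /* 0x08 0x00 = IP, 0x08 0x06 = ARP */
      };

      /* Fill in the header for an IP datagram. */
      void llc_snap_ip(struct llc_snap *h)
      {
          h->dsap = 0xAA;
          h->ssap = 0xAA;
          h->control = 0x03;
          memset(h->org_code, 0, sizeof h->org_code);
          h->ethertype[0] = 0x08;   /* EtherType 2048 = 0x0800 for IP, */
          h->ethertype[1] = 0x00;   /* transmitted in network order    */
      }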

Address Mappings

   The mapping of 32-bit Internet addresses to 16-bit or 48-bit 802
   addresses must be done via the dynamic discovery procedure of the
   Address Resolution Protocol (ARP) [2].

   Internet addresses are assigned arbitrarily on Internet networks.
   Each host's implementation must know its own Internet address and
   respond to Address Resolution requests appropriately.  It must also
   use ARP to translate Internet addresses to 802 addresses when needed.

   The ARP Details

      The ARP protocol has several fields that parameterize its use in
      any specific context [2].  These fields are:

         hrd     16 - bits       The Hardware Type Code
         pro     16 - bits       The Protocol Type Code
         hln      8 - bits       Bytes in each hardware address
         pln      8 - bits       Bytes in each protocol address
         op      16 - bits       Operation Code

      The hardware type code assigned for the 802 networks (of all
      kinds) is 6 (see [7] page 16).

      The protocol type code for IP is 2048 (see [7] page 14).

      The hardware address length is 2 (for 16-bit 802 addresses), or 6
      (for 48-bit 802 addresses).

      The protocol address length (for IP) is 4.

      The operation code is 1 for request and 2 for reply.
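
      Written out as a C sketch, the fixed part of an ARP packet
      carrying these parameter values (48-bit hardware addresses
      assumed; the struct and field names below are illustrative,
      following the hrd/pro/hln/pln/op names above):

         #include <stdint.h>

         /* ARP packet for IP over 802 networks with 48-bit hardware
          * addresses; multi-octet fields are in network byte order. */
         struct arp_802 {
             uint8_t hrd[2];       /* hardware type 6 (IEEE 802)    */
             uint8_t pro[2];       /* protocol type 2048 (IP)       */
             uint8_t hln;          /* 6 octets per hardware address */
             uint8_t pln;          /* 4 octets per protocol address */
             uint8_t op[2];        /* 1 = request, 2 = reply        */
             uint8_t sha[6];       /* sender hardware address       */
             uint8_t spa[4];       /* sender protocol (IP) address  */
             uint8_t tha[6];       /* target hardware address       */
             uint8_t tpa[4];       /* target protocol (IP) address  */
         };

         void arp_request_init(struct arp_802 *a)
         {
             a->hrd[0] = 0;    a->hrd[1] = 6;     /* IEEE 802 networks */
             a->pro[0] = 0x08; a->pro[1] = 0x00;  /* 2048 = 0x0800, IP */
             a->hln = 6;
             a->pln = 4;
             a->op[0] = 0;     a->op[1] = 1;      /* request */
         }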

Broadcast Address

   The broadcast Internet address (the address on that network with a
   host part of all binary ones) should be mapped to the broadcast 802
   address (of all binary ones) (see [8] page 14).

Trailer Formats

   Some versions of Unix 4.x bsd use a different encapsulation method in
   order to get better network performance with the VAX virtual memory
   architecture.  Consenting systems on the same 802 network may use
   this format between themselves.  Details of the trailer encapsulation
   method may be found in [9].  However, all hosts must be able to
   communicate using the standard (non-trailer) method.

Byte Order

   As described in Appendix B of the Internet Protocol specification
   [1], the IP datagram is transmitted over 802 networks as a series of
   8-bit bytes.  This byte transmission order has been called "big-
   endian" [11].

Maximum Transmission Unit

   The Maximum Transmission Unit (MTU) differs on the different types of
   802 networks.  In the following there are comments on the MTU for
   each type of 802 network.  However, on any particular network all
   hosts must use the same MTU.  In the following, the terms "maximum
   packet size" and "maximum transmission unit" are equivalent.

Frame Format and MAC Level Issues

   For all hardware types

      IP datagrams and ARP requests and replies are transmitted in
      standard 802.2 LLC Type 1 Unnumbered Information format, control
      code 3, with the DSAP and the SSAP fields of the 802.2 header set
      to 170, the assigned global SAP value for SNAP [6].  The 24-bit
      Organization Code in the SNAP is zero, and the remaining 16 bits
      are the EtherType from Assigned Numbers [7] (IP = 2048, ARP =
      2054).

      IEEE 802.x packets may have a minimum size restriction.  When
      necessary, the data field should be padded (with octets of zero)
      to meet the 802.x minimum frame size requirements.  This padding
      is not part of the IP datagram and is not included in the total
      length field of the IP header.

      For compatibility (and common sense) the minimum packet size used
      with IP datagrams is 28 octets, which is 20 (minimum IP header) +
      8 (LLC+SNAP header) = 28 octets (not including the MAC header).

      The minimum packet size used with ARP is 24 octets, which is 20
      (ARP with 2 octet hardware addresses and 4 octet protocol
      addresses) + 8 (LLC+SNAP header) = 24 octets (not including the
      MAC header).

      In typical situations, the packet size used with ARP is 32 octets,
      which is 28 (ARP with 6 octet hardware addresses and 4 octet
      protocol addresses) + 8 (LLC+SNAP header) = 32 octets (not
      including the MAC header).
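
      As a rough sketch (not part of this specification), the amount of
      zero padding needed to reach the 802.3 minimum can be computed as
      follows; the figures assumed here are those given below for 10
      megabit/second 802.3 (64 octet minimum frame, 18 octets of MAC
      header and trailer, hence a 46 octet minimum data field):

      #include <stdio.h>

      #define MIN_8023_DATA 46   /* 64 - 18 octets of MAC overhead */
      #define LLC_SNAP_LEN   8

      /* Zero octets of pad needed for an IP datagram of ip_len    */
      /* octets; the pad is not counted in the IP total length.    */
      static unsigned pad_needed(unsigned ip_len)
      {
          unsigned data = LLC_SNAP_LEN + ip_len;
          return data >= MIN_8023_DATA ? 0 : MIN_8023_DATA - data;
      }

      int main(void)
      {
          /* The minimum 28 octet case needs 46 - 28 = 18 octets.  */
          printf("pad for 20 octet IP datagram: %u\n", pad_needed(20));
          return 0;
      }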

      IEEE 802.x packets may have a maximum size restriction.
      Implementations are encouraged to support full-length packets.

      For compatibility purposes, the maximum packet size used with IP
      datagrams or ARP requests and replies must be consistent on a
      particular network.  Each type of 802 network has a different
      specification for the maximum packet size.

      Gateway implementations must be prepared to accept full-length
      packets and fragment them when necessary.

      Host implementations should be prepared to accept full-length
      packets; however, hosts must not send datagrams longer than 576
      octets unless they have explicit knowledge that the destination is
      prepared to accept them.  A host may communicate its size
      preference in TCP based applications via the TCP Maximum Segment
      Size option [10].

      Datagrams on 802.x networks may be longer than the general
      Internet default maximum packet size of 576 octets.  Hosts
      connected to an 802.x network should keep this in mind when
      sending datagrams to hosts not on the same 802.x network.  It may
      be appropriate to send smaller datagrams to avoid unnecessary
      fragmentation at intermediate gateways.  Please see [10] for
      further information.

   For 802.3

      IEEE 802.3 networks have a minimum packet size that depends on the
      transmission rate.  For 10 megabit/second 802.3 networks the
      minimum packet size is 64 octets.

      IEEE 802.3 networks have a maximum packet size which depends on
      the transmission rate.  For 10 megabit/second 802.3 networks the
      maximum packet size is 1518 octets.

      The MAC header is 6 octets of source address, 6 octets of
      destination address, 2 octets of length, and (at the end of the
      packet) 4 octets of CRC, for a total of 18 octets.

      Note that 1518 - 18 (MAC header) - 8 (LLC+SNAP header) = 1492 for
      the IP datagram (including the IP header).

      One popular combination of 802.3 parameters is the "Ethernet"
      style in which networks use 48-bit physical addresses and 10
      megabit/second transmission rate.

   For 802.4

      IEEE 802.4 networks have no minimum packet size.

      IEEE 802.4 networks have no maximum packet size.

      The MAC header is 6 octets of source address, 6 octets of
      destination address, 2 octets of length, and (at the end of the
      packet) 4 octets of CRC, for a total of 18 octets.

      For compatibility, the maximum packet size used with IP datagrams
      or ARP requests and replies is 1492 octets for the IP datagram
      (including the IP header) plus 8 octets for the LLC+SNAP header,
      for a total of 1500 octets (not including the MAC header).

      In one combination of 802.4 parameters, 48-bit physical addresses
      and 10 megabit/second transmission rate are used.

   For 802.5

      IEEE 802.5 networks have no minimum packet size.

      IEEE 802.5 networks have no maximum packet size.

      The MAC header is 6 octets of source address, 6 octets of
      destination address, 2 octets of length, plus another 18 octets of
      what ???, and (at the end of the packet) 4 octets of CRC, for a
      total of 36 octets.

      In one combination of 802.5 parameters, 48-bit physical addresses
      and 4 megabit/second transmission rate are used.

      There is a convention that IBM style 802.5 networks will not use
      packets larger than 8232 octets.  With a MAC header of 36 octets
      and the LLC+SNAP header of 8 octets, the IP datagram (including IP
      header) may not exceed 8188 octets.

      Note that a MAC level bridge linking two rings may limit the size
      of packets forwarded to 552 octets.  With a MAC header of 36 octets
      and the LLC+SNAP header of 8 octets, the IP datagram (including the
      IP header) may be limited to 508 octets.  This is less than the
      default IP MTU of 576 octets, and may cause significant performance
      problems due to excessive datagram fragmentation.

         One implementation will not support IP datagram communication
         across a MAC level bridge unless the bridge will allow an IP
         MTU of at least 1020 octets.

      The dynamic address discovery procedure is to do an ARP request.
      The IBM style 802.5 networks support two different types of
      broadcasts, local ring broadcasts and all rings broadcasts.  To
      limit the number of all rings broadcasts to a minimum, it is
      desirable (though not required) that an ARP request first be sent
      as a local ring broadcast, without a Routing Information Field
      (RIF).  If the local ring broadcast is not supported or if the
      local ring broadcast is unsuccessful after some reasonable time
      has elapsed, then send the ARP request as an all rings broadcast
      with an empty RIF.
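
      Purely as an illustration (not part of this specification), that
      two-step procedure might look as follows; send_arp_broadcast and
      reply_arrived are hypothetical stand-ins, not a real API:

      #include <stdio.h>

      enum rif_kind { NO_RIF, EMPTY_RIF };

      /* Stand-ins for a real driver interface.                    */
      static void send_arp_broadcast(enum rif_kind rif)
      {
          printf("ARP request sent as %s broadcast\n",
                 rif == NO_RIF ? "local ring (no RIF)"
                               : "all rings (empty RIF)");
      }

      static int reply_arrived(int seconds)
      {
          (void)seconds;
          return 0;          /* pretend nothing came back in time  */
      }

      /* Try the local ring first to hold down all rings           */
      /* broadcasts, then fall back to an all rings broadcast.     */
      int main(void)
      {
          send_arp_broadcast(NO_RIF);
          if (!reply_arrived(2))
              send_arp_broadcast(EMPTY_RIF);
          return 0;
      }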

      When an ARP request or reply is received, all implementations are
      required to understand at least local ring broadcasts (no RIF) and
      all ring broadcasts from the same ring (empty RIF).  If the
      implementation supports IBM style source routing, then a non-empty
      RIF is stored for future transmissions to the host originating the
      ARP request or reply.  If this source routing is not supported
      then all packets with non-empty RIFs should be gracefully ignored.

      It is possible that when sending an ARP request via an all rings
      broadcast that multiple copies of the request will arrive at the
      destination as a result of the request being forwarded by several
      MAC layer bridges.  However, these "copies" will have taken
      different routes so the contents of the RIF will differ.  An
      implementation of ARP in this context must determine which of
      these "copies" to use and to ignore the others.  There are three
      obvious strategies: (1) take the first and ignore the rest (that
      is, once you have an entry in the ARP cache don't change it), (2)
      take the last, (that is, always up date the ARP cache with the
      latest ARP message), or (3) take the one with the shortest path,
      (that is, replace the ARP cache information with the latest ARP
      message data if it is a shorter route).  Since there is no problem
      of incompatibility for interworking of different implementations
      if different strategies are chosen, the choice is up to each
      implementor.  The recipient of the ARP request must send an ARP
      reply as a point to point message using the RIF information.
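
      As a minimal sketch (not part of this specification), strategy (3)
      amounts to the following cache update rule; the entry layout and
      names here are hypothetical:

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* One cache entry: 802 address plus the RIF that reached us. */
      struct arp_entry {
          uint8_t  mac[6];
          uint8_t  rif[18];
          unsigned rif_len;    /* 0 means no source route needed    */
          int      valid;
      };

      /* Keep whichever copy of the ARP message took the shorter    */
      /* route, i.e. arrived with the shorter RIF.                  */
      static void update_entry(struct arp_entry *e, const uint8_t *mac,
                               const uint8_t *rif, unsigned rif_len)
      {
          if (e->valid && rif_len >= e->rif_len)
              return;
          memcpy(e->mac, mac, 6);
          memcpy(e->rif, rif, rif_len);
          e->rif_len = rif_len;
          e->valid = 1;
      }

      int main(void)
      {
          struct arp_entry e = {0};
          uint8_t mac[6] = {0}, longer[6] = {0}, shorter[4] = {0};
          update_entry(&e, mac, longer, sizeof longer);
          update_entry(&e, mac, shorter, sizeof shorter);
          printf("cached RIF length: %u\n", e.rif_len);  /* 4 */
          return 0;
      }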

      Note that a MAC level bridge linking two rings may limit the size
      of packets forwarded to 552 octets.  With a MAC header of 36 octets
      and the LLC+SNAP header of 8 octets, the ARP request or reply may
      be limited to 508 octets.

      The RIF information should be kept distinct from the ARP table.
      That is, there is, in principle, the ARP table to map from IP
      addresses to 802 48-bit addresses, and the RIF table to map from
      those to 802.5 source routes, if necessary.  In practical
      implementations it may be convenient to store the ARP and RIF
      information together.

         Storing the information together may speed up access to the
         information when it is used.  On the other hand, in a
         generalized implementation for all types of 802 networks a
         significant amount of memory might be wasted in an ARP cache if
         space for the RIF information were always reserved.

      IP broadcasts (datagrams with an IP broadcast address) must be sent
      as 802.5 all ring broadcasts.

      Since current interface hardware allows only one group address,
      and since the functional addresses are not globally unique, IP and
      ARP do not use either of these features.  Further, in the IBM
      style 802.5 networks there are only 31 functional addresses
      available for user definition.

      IP precedence should not be mapped to 802.5 priority.  All IP and
      ARP packets should be sent at the default 802.5 priority.  The
      default priority is 3.

      An 802.5 "address not recognized" report should be mapped to an
      ICMP destination unreachable message.

      MAC Management Support

         While not necessary for supporting IP and ARP, IEEE 802.5
         devices should be able to respond to EXCHANGE ID (XID) and TEST
         LINK (TEST) frames.

         When either an XID or a TEST frame is received a response must
         be returned.

         When responding to an XID or a TEST frame the sense of the
         poll/final bit must be preserved.  That is, a frame received
         with the poll/final bit reset must have the response returned
         with the poll/final bit reset and vice versa.

         The XID command or response has an LLC control field value of
         245 (decimal) if poll is off or 253 (decimal) if poll is on.
         (See Appendix on Numbers.)

         The TEST command or response has an LLC control field value of
         199 (decimal) if poll is off or 207 (decimal) if poll is on.
         (See Appendix on Numbers.)

          A command frame is identified by the high order bit of the
          SSAP address being reset (zero).  Response frames have the
          high order bit of the SSAP address set to one.

         TEST command frames are merely echoed exactly as received,
         after swapping the Destination Address/Source Address and
         DSAP/SSAP and setting the response bit.

          XID command frames received should return the 802.2 XID
          Information field in the response as 0.128.129, indicating
          connectionless (type 1) service, with the addresses swapped
          and the response bit set.

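          As an illustrative sketch only (not part of this
          specification), echoing a TEST command might look as
          follows.  Which bit of the SSAP is the command/response
          bit depends on the bit ordering convention (see the
          Appendix on Numbers), so it is passed in as resp_mask
          rather than hard-coded; the frame layout is hypothetical:

          #include <stdint.h>
          #include <string.h>

          struct llc_frame {
              uint8_t dst[6], src[6];
              uint8_t dsap, ssap, control;
              uint8_t info[64];
              unsigned info_len;
          };

          /* Echo a TEST command: swap the MAC addresses and the  */
          /* DSAP/SSAP, set the response bit in the SSAP, and     */
          /* return the control field (and so the poll/final      */
          /* bit) and the information field unchanged.            */
          static void make_test_response(const struct llc_frame *c,
                                         struct llc_frame *r,
                                         uint8_t resp_mask)
          {
              memcpy(r->dst, c->src, 6);
              memcpy(r->src, c->dst, 6);
              r->dsap = c->ssap;
              r->ssap = c->dsap | resp_mask;
              r->control = c->control;
              memcpy(r->info, c->info, c->info_len);
              r->info_len = c->info_len;
          }

          int main(void)
          {
              struct llc_frame cmd = {0}, rsp;
              cmd.dsap = 0xAA;
              cmd.ssap = 0xAA;
              /* low order bit used here purely for illustration  */
              make_test_response(&cmd, &rsp, 0x01);
              return (rsp.ssap & 0x01) ? 0 : 1;
          }
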
Interoperation with Ethernet

   It is possible to use the Ethernet link level protocol [12] on the
   same physical cable with the IEEE 802.3 link level protocol.  A
   computer interfaced to a physical cable used in this way could
   potentially read both Ethernet and 802.3 packets from the network.
   If a computer does read both types of packets, it must keep track of
   which link protocol was used with each other computer on the network
   and use the proper link protocol when sending packets.
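
   One illustrative (hypothetical, not part of this specification) way
   such a host might keep track is a small table recording which framing
   was last heard from each peer, consulted when transmitting:

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      enum framing { FRAME_ETHERNET, FRAME_8023_SNAP };

      struct peer {
          uint8_t      mac[6];
          enum framing framing;
          int          in_use;
      };

      #define MAX_PEERS 64
      static struct peer peers[MAX_PEERS];

      /* Record the framing seen in a packet received from mac.   */
      static void note_peer(const uint8_t *mac, enum framing f)
      {
          int i, slot = -1;
          for (i = 0; i < MAX_PEERS; i++) {
              if (peers[i].in_use &&
                  memcmp(peers[i].mac, mac, 6) == 0) {
                  peers[i].framing = f;
                  return;
              }
              if (!peers[i].in_use && slot < 0)
                  slot = i;
          }
          if (slot >= 0) {
              memcpy(peers[slot].mac, mac, 6);
              peers[slot].framing = f;
              peers[slot].in_use = 1;
          }
      }

      /* Framing to use when sending to mac; default to Ethernet  */
      /* if the peer has never been heard from.                    */
      static enum framing framing_for(const uint8_t *mac)
      {
          int i;
          for (i = 0; i < MAX_PEERS; i++)
              if (peers[i].in_use &&
                  memcmp(peers[i].mac, mac, 6) == 0)
                  return peers[i].framing;
          return FRAME_ETHERNET;
      }

      int main(void)
      {
          uint8_t a[6] = { 2, 0, 0, 0, 0, 1 };
          note_peer(a, FRAME_8023_SNAP);
          printf("%s\n", framing_for(a) == FRAME_8023_SNAP
                             ? "802.3" : "Ethernet");
          return 0;
      }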

   One should note that in such an environment, link level broadcast
   packets will not reach all the computers attached to the network, but
   only those using the link level protocol used for the broadcast.

   Since it must be assumed that most computers will read and send using
   only one type of link protocol, it is recommended that if such an
   environment (a network with both link protocols) is necessary, an IP
   gateway be used as if there were two distinct networks.

   Note that the MTU for the Ethernet allows a 1500 octet IP datagram,
   while the MTU for the 802.3 network allows only a 1492 octet IP
   datagram.


Appendix on Numbers

   The IEEE likes to specify numbers in bit transmission order, or bit-
   wise little-endian order.  The Internet protocols are documented in
   byte-wise big-endian order.  This may cause some confusion about the
   proper values to use for numbers.  Here are the conversions for some
   numbers of interest.

   Number          IEEE    IEEE            Internet        Internet
                   HEX     Binary          Binary          Decimal

   UI Op Code      0xC0    11000000        00000011          3
   SAP for SNAP    0x55    01010101        10101010        170
   XID             0xAF    10101111        11110101        245
   XID             0xBF    10111111        11111101        253
   TEST            0xE3    11100011        11000111        199
   TEST            0xF3    11110011        11001111        207
   Info            0x810100                                0.128.129
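
   For illustration (not part of this specification), the conversion
   between the two representations is a bit reversal within each octet,
   which is its own inverse:

      #include <stdint.h>
      #include <stdio.h>

      /* Reverse the bits of one octet, converting between the    */
      /* IEEE bit transmission order representation and the       */
      /* byte-wise representation used in Internet documents.     */
      static uint8_t bit_reverse(uint8_t b)
      {
          uint8_t r = 0;
          int i;
          for (i = 0; i < 8; i++)
              if (b & (1u << i))
                  r |= (uint8_t)(1u << (7 - i));
          return r;
      }

      int main(void)
      {
          /* Reproduces two rows of the table above.               */
          printf("0xC0 -> %d\n", bit_reverse(0xC0));  /* 3   */
          printf("0x55 -> %d\n", bit_reverse(0x55));  /* 170 */
          return 0;
      }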

References

   [1]   Postel, J., "Internet Protocol", RFC-791, USC/Information
         Sciences Institute, September 1981.

   [2]   Plummer, D., "An Ethernet Address Resolution Protocol - or -
         Converting Network Protocol Addresses to 48.bit Ethernet
         Address for Transmission on Ethernet Hardware", RFC-826, MIT,
         November 1982.

   [3]   IEEE, "IEEE Standards for Local Area Networks: Carrier Sense
         Multiple Access with Collision Detection (CSMA/CD) Access
         Method and Physical Layer Specifications", IEEE, New York, New
         York, 1985.

   [4]   IEEE, "IEEE Standards for Local Area Networks: Token-Passing
         Bus Access Method and Physical Layer Specification", IEEE, New
         York, New York, 1985.

   [5]   IEEE, "IEEE Standards for Local Area Networks: Token Ring
         Access Method and Physical Layer Specifications", IEEE, New
         York, New York, 1985.

   [6]   IEEE, "IEEE Standards for Local Area Networks: Logical Link
         Control", IEEE, New York, New York, 1985.

   [7]   Reynolds, J.K., and J. Postel, "Assigned Numbers", RFC-1010,
         USC/Information Sciences Institute, May 1987.

   [8]   Braden, R., and J. Postel, "Requirements for Internet
         Gateways", RFC-1009, USC/Information Sciences Institute, June
         1987.

   [9]   Leffler, S., and M. Karels, "Trailer Encapsulations", RFC-893,
         University of California at Berkeley, April 1984.

   [10]  Postel, J., "The TCP Maximum Segment Size Option and Related
         Topics", RFC-879, USC/Information Sciences Institute, November
         1983.

   [11]  Cohen, D., "On Holy Wars and a Plea for Peace", Computer, IEEE,
         October 1981.

   [12]  D-I-X, "The Ethernet - A Local Area Network: Data Link Layer
         and Physical Layer Specifications", Digital, Intel, and Xerox,
         November 1982.


----- End Forwarded Message -----

-----------[000139][next][prev][last][first]----------------------------------------------------
Date:      Thu, 15-Oct-87 06:51:29 EDT
From:      dan@WILMA.BBN.COM
To:        comp.protocols.tcp-ip
Subject:   Re: Predictable network access prices (was: RCTE)

> ... one has to be able to
> show that the economy of scale is working in general and, as I
> believe, that the per-quantum costs would end up costing the smaller
> user more ...

Not to mention that the machinery for counting, and accounting for,
packets can itself cost serious money.  Nicholas Johnson observed some
years ago that half the cost of a long-distance telephone call was in
billing you for that call.  (Nicholas Johnson was an FCC Commissioner
and so presumably in a position to know.)

	Dan

-----------[000140][next][prev][last][first]----------------------------------------------------
Date:      Thu, 15-Oct-87 10:14:17 EDT
From:      dave@rosesun.Rosemount.COM (Dave Marquardt)
To:        comp.protocols.tcp-ip,comp.unix.wizards,comp.dcom.lans
Subject:   Re: Ethernet - Hyperchannel Gateway

In article <1822@celtics.UUCP> roger@celtics.UUCP (Roger B.A. Klorese) writes:
>Does anyone know of a product providing an Ethernet-to-Hyperchannel
>gateway?  I'm looking for a "black box" to sit on an ethernet and
>pass TCP-IP and its friends in both directions.

We just met with a Network Systems Corp. salesman this week, and NSC themselves
now have Hyperchannel-Ethernet bridges.  Here's a short description of some
products:

	EN601:	Bridges Ethernets over HYPERchannel-10(r) (10 Mbps media)

	EN602:	Bridges Ethernets over HYPERchannel(r) telecommunication links
		(up to 2 Mbps)

	EN603:	Bridges Ethernets over HYPERchannel-50(r) (50 Mbps)

	EN641:	The IP Router EN641 from Network Systems(r) provides a
		gateway between Ethernet networks and HYPERchannel(r)
		networks.  This gateway creates an internet, or backbone,
		among local workstation networks and high-performance
		mainframes attached to HYPERchannel(r).

HYPERchannel is a registered trademark of Network Systems Corporation.

	Dave

-----------[000141][next][prev][last][first]----------------------------------------------------
Date:      Thu, 15-Oct-87 10:40:39 EDT
From:      kline@UXC.CSO.UIUC.EDU (Charley Kline)
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet - Hyperchannel Gateway

Network Systems just announced their IP router product, I believe it's
called an EN610. Availability is 1Q88, it's in Beta test right now.
You get four ethernets and a Hyperchannel-50 in the box, which runs IP
routing code. The routing program is gated. From the alpha test rig I
saw up in Brooklyn Park it performs quite well... I saw it moving above
2000 packets per second between a Hyperchannel and an Ethernet.

As they say, contact your local NSC rep.

-----
Charley Kline
University of Illinois Computing Services
kline@uxc.cso.uiuc.edu
kline@uiucvmd.bitnet
{ihnp4,uunet,pur-ee,convex}!uiucuxc!kline

-----------[000142][next][prev][last][first]----------------------------------------------------
Date:      Thu, 15-Oct-87 12:45:52 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   ARPAgrams

For BBN, DARPA, DCA, DDN, NSF and all the ships at sea:

When are we going to get ARPANET Type-3 packets back? It would surely be
much more efficient, cheaper and less intrusive in the long run if ARPANET
and clones understood connectionless service.

[ARPANET Type-3 packets were once used for datagram experiments, but have
fallen down a hole along the way. They had a max packet size of about 1K bits
and no flow control at all. Obviously my plaintive hoot implies fatter,
controlled datagrams or ARPAgrams.]

Dave

P.S.: Where are you, Jack Haverty, now that we need you most? DLM

-----------[000143][next][prev][last][first]----------------------------------------------------
Date:      Thu, 15-Oct-87 14:59:33 EDT
From:      ddp+@ANDREW.CMU.EDU (Drew Daniel Perkins)
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet - Hyperchannel Gateway

Just last week or so, I got some advertising blurbs from NSC claiming that
they now have Ethernet to Hyperchannel bridges AND IP gateways.  I also read
it in at least two trade rags.  I am very curious where they got their gateway
implementation.  Did they write it from scratch, or are they OEM'ing it from
one of the other vendors (Proteon, CISCO, Bridge, etc.)?  Their boxes
supported something like 5 ethernets at a time, according to what I read.
Did anyone else get one of these announcements?

Drew

-----------[000144][next][prev][last][first]----------------------------------------------------
Date:      Thu, 15-Oct-87 16:30:12 EDT
From:      hedrick@TOPAZ.RUTGERS.EDU (Charles Hedrick)
To:        comp.protocols.tcp-ip
Subject:   Re: supdup protocol

There was also a Ph.D. thesis on this subject by Robt. Goldberg
at Rutgers.  I believe this is the work that the original message
referred to.  I believe Rutgers makes all of its theses available
through University Microfilms.  It would also be available as
a Computer Science Dept. technical report, I assume.  It is
unlikely that any online copies are around, so I can only refer
you to the Department for more information.

-----------[000145][next][prev][last][first]----------------------------------------------------
Date:      Thu, 15-Oct-87 20:35:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  ARPAgrams

Dave,

if you get type 3 back it will have to be as a result of putting
in some kind of flow control in the IMP-IMP and IMP/HOST 
interface. What would type 3 look like through an X.25 interface,
or would you limit this capability to 1822 interface users?

Vint

-----------[000146][next][prev][last][first]----------------------------------------------------
Date:      Fri, 16-Oct-87 00:56:17 EDT
From:      hedrick@TOPAZ.RUTGERS.EDU (Charles Hedrick)
To:        comp.protocols.tcp-ip
Subject:   network monitoring

I have just started to keep statistics and generate reports from our
cisco gateways.  I had been waiting for HEMP to finish, and cisco to
implement some whizbang ASN.1 monster.  Then I realized that really
that is unnecesary.  A simple program can connect to a gateway and by
issuing "show" commands get just about any piece of data I could ever
want.  The issue is not getting data.  I can easily get so much data
that I drown in paper.  The issue is what to do with it once I have
it.  So the question is, does anybody have enough experience with
network monitoring to know what kind of statistics it is useful to
collect and what kinds of reports it is useful to produce?  For the
moment, I'm collecting data hourly, and producing daily reports on
errors and other items of comparatively short-term interest.  (Of
course we don't wait for the daily report to know that a line is down.
We have monitoring tools that ping gateways and selected hosts
regularly, so we know when something is down very soon.)  I am also
collecting packet counts for all the gateways, as well as counts of
some events that might indicate that the gateways are overloaded (if
they ever happened, which they don't seem to).  From this I plan to
produce usage reports weekly or monthly, and generate long-term
trends.  (Of course we all know what the graphs will look like, but
administrators like to see graphs showing that the stuff they have
paid for is getting growing usage.)  Also I will probably try to pull
out some specific numbers like the busiest hour, and usage vs. time of
day.  But there are zillions of things like this I could do.  Does
anyone have any suggestions which ones turn out to be useful?
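
[As a rough illustration of the "busiest hour" idea above (not part of
the original message), a minimal C sketch that reads one packet count
per hour from standard input and reports the busiest hour:

    #include <stdio.h>

    int main(void)
    {
        unsigned long count, best = 0;
        int hour = 0, best_hour = -1;
        while (scanf("%lu", &count) == 1) {
            if (best_hour < 0 || count > best) {
                best = count;
                best_hour = hour;
            }
            hour++;
        }
        if (best_hour >= 0)
            printf("busiest hour: %d (%lu packets)\n",
                   best_hour, best);
        return 0;
    }
]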

For your amusement, here's one of my daily error reports.  (This is
done more or less entirely in awk, by the way.)  [In case anybody
actually looks at it, a couple of comments:
  The reloads were to bring up new software.
  The large number of resets on some interfaces is mostly typical
	of 3Com Multibus Ethernet cards.  It doesn't seem to
	indicate anything wrong.  The Interlan cards on our
	newer boxes don't seem to do this.
  "lo-input" means an hour in which there was less than 10 packets
	input.  This could indicate that something has stopped
	hearing the network.  In this case it happens to be
	interfaces whose networks aren't completely in service yet.
]

 Path: topaz.rutgers.edu!aramis.rutgers.edu!hedrick
 From: hedrick@aramis.rutgers.edu
 Newsgroups: ru.netlog
 Subject: gateway errors
 Message-ID: <1907@aramis.rutgers.edu>
 Date: 16 Oct 87 04:13:08 GMT
 Sender: root@aramis.rutgers.edu
 Lines: 55

Errors for lcsr-gw

Thu 1987 Oct 15 02:55:05 reload 1
Thu 1987 Oct 15 02:55:06 Ethernet0 state up
Thu 1987 Oct 15 02:55:06 Ethernet1 state up
Thu 1987 Oct 15 02:55:06 Ethernet2 state up
Thu 1987 Oct 15 02:55:06 Ethernet3 state up

interface   address         in-errs out-errs resets hangs in-hangs lo-input

Ethernet0   128.6.4.1             0        0      0     0        0        0
Ethernet1   128.6.5.41            0       39     11     0        0        0
Ethernet2   128.6.13.1            0        0      0     0        0        0
Ethernet3   128.6.21.3            0        0      0     0        0        0

Errors for nb-gw

Thu 1987 Oct 15 02:55:17 reload 1
Thu 1987 Oct 15 02:55:17 Ethernet1 state up
Thu 1987 Oct 15 02:55:17 Ethernet2 state up
Thu 1987 Oct 15 02:55:17 Ethernet3 state up
Thu 1987 Oct 15 02:55:17 Ethernet0 state up
Thu 1987 Oct 15 02:55:17 Serial0 state up
Thu 1987 Oct 15 02:55:17 DDN-18220 state up

interface   address         in-errs out-errs resets hangs in-hangs lo-input

Serial0     128.6.254.1           3        0      0     0        0        0
DDN-18220   10.1.0.89             4        0      0     0        0        3
Ethernet0   128.6.13.39           0        0      1     0        0        0
Ethernet1   128.6.21.1           11      206     18     0        0        0
Ethernet2   128.6.4.27           33     1448     85     0        0        0
Ethernet3   128.6.7.1             0      249     15     0        0        0

Errors for eng-gw

interface   address         in-errs out-errs resets hangs in-hangs lo-input

Ethernet0   128.6.21.2            0        0      0     0        0        0
Ethernet1   128.6.3.13            0        0      0     0        0        0
Ethernet2   128.6.14.1            0        0      0     0        0        0
Ethernet3   128.6.22.1            0        0      0     0        0       23

Errors for ccis-gw

Thu 1987 Oct 15 10:55:35 Serial0 state down
Thu 1987 Oct 15 18:55:54 Serial0 state up

interface   address         in-errs out-errs resets hangs in-hangs lo-input

Serial0     128.6.253.2          11        0      0     0        0       15
Serial1     128.6.252.2           0        0      0     0        0        0
Ethernet0   128.6.7.2             0        0      0     0        0        0
Ethernet1   128.6.21.7            0        0      0     0        0        0
Ethernet3   128.6.18.1           11       12      1     0        0        0

-----------[000147][next][prev][last][first]----------------------------------------------------
Date:      Fri, 16-Oct-87 08:19:43 EDT
From:      ejc@TSCA.ISTC.SRI.COM
To:        comp.protocols.tcp-ip
Subject:   Re: ARPAgrams


It would be a great improvement to have another transport flavor
available for wire subnets, but I think more than a minor tuning of
current protocols is required.  First, a bit of history..  Type 3 had
the appearance to the user of the characteristics described by Dave.
However, the way it was done did not improve the real movement of
packets within the ARPAnet itself.  The only difference between type 0
(normal) and type 3 (best efforts) was the initiating IMP did not wait
for a RFNM before sending packets.  The result was that more than eight
packets could be in the net simultaneously.  This reduced the delay
variance somewhat to allow us to experiment with packet voice, but had
the potential of creating havoc in the sub-net.  The variance was still
large (from SRI to LL, 400ms < T < 2200ms) although quite a bit less than
type 0, and throughput was low (< 10 kb/s user data).  Not great real-time
performance, since the buffering strategies in the speech terminal
would introduce additional ETE delays to smooth out the variance.
Hence, the maximum delay (in this case 2200 ms) became the actual ETE
delay.

If one wants to provide such a service, you will need more than flow
control.  I would guess that flow control alone might reduce delay
variance, but likely push the mean toward the high end of the
distribution.  Our terminal buffering strategy did that anyway.  You
will need to figure out how to move SOME subset of packets through the
net faster, either by queue priorities, some type of VC reservation, or
something else.  But it is a major departure from datagram mode.

Hence, the subnet would need to support two different kinds of
transport protocol simultaneously.  The PODA scheme in the WB SATnet is
the only operational subnet transport protocol that I know of that
offers this.  Is there a serious interest/desire in developing such a
capability in wire subnets?  Packet voice is only one of several
applications that need real-time (low, bounded delay) transport.

Earl

-----------[000148][next][prev][last][first]----------------------------------------------------
Date:      Fri, 16-Oct-87 08:36:45 EDT
From:      pogran@CCQ.BBN.COM (Ken Pogran)
To:        comp.protocols.tcp-ip
Subject:   Re:  X.25 problems

Dave,

In your message Wednesday to Lars Poulsen of ACC you said,

    "I would assume the polite driver would keep an activity with
     entries for each active VC and clear the oldest/least-used
     to make room for the next one. I would assume this would
     also happen if an incoming call request appeared from the
     network and all VCs were committed. Further, I would assume
     both the PSN and ACC would do this kind of thing, no matter
     what the timeouts chosen."

The PSN developers tell me that the PSN CAN'T initiate a clear of
an active (and non-idle) VC just to make room for the next VC as
that is a violation of the X.25 spec.  The DTE can, of course,
initiate a clear of VCs for any reason.

One thing we CAN do in the PSN is limit, from its side, the
number of active VCs with a host.  For example, if the PSN is
configured to not allow more than N VCs with a host because we
know that's a limitation in the host, the PSN would decline an
incoming call request for a host if it would be the N+1st active
VC with that host.  This might be desirable if the host's
receiving an incoming call request from the PSN that put it over
its limit would cause its software to hang or otherwise behave
badly instead of cleanly rejecting the incoming call.  Don't know
if this pertains to any behavior we're currently seeing in the
net, though.

Ken

P.S.  The developers also tell me that having an idle timer
"stretches" the spec.  I am NOT going to split hairs and get into
semantic discussions over when a VC is "active" and when it is
"idle!"

-----------[000149][next][prev][last][first]----------------------------------------------------
Date:      Fri, 16 Oct 87 08:36:45 EDT
From:      Ken Pogran <pogran@ccq.bbn.com>
To:        Mills@louie.udel.edu
Cc:        Lars Poulsen <lars@acc-sb-unix.arpa>, CERF@a.isi.edu, service@acc-sb-unix.arpa, arpaupgrade@bbn.com, gary@acc-sb-unix.arpa, tcp-ip@sri-nic.arpa
Subject:   Re:  X.25 problems
Dave,

In your message Wednesday to Lars Poulsen of ACC you said,

    "I would assume the polite driver would keep an activity with
     entries for each active VC and clear the oldest/least-used
     to make room for the next one. I would assume this would
     also happen if an incoming call request appeared from the
     network and all VCs were committed. Further, I would assume
     both the PSN and ACC would do this kind of thing, no matter
     what the imeouts chosen."

The PSN developers tell me that the PSN CAN'T initiate a clear of
an active (and non-idle) VC just to make room for the next VC as
that is a violation of the X.25 spec.  The DTE can, of course,
initiate a clear of VCs for any reason.

One thing we CAN do in the PSN is limit, from its side, the
number of active VCs with a host.  For example, if the PSN is
configured to not allow more than N VCs with a host because we
know that's a limitation in the host, the PSN would decline an
incoming call request for a host if it would be the N+1st active
VC with that host.  This might be desirable if the host's
receiving an incoming call request from the PSN that put it over
its limit would cause its software to hang or otherwise behave
badly instead of cleanly rejecting the incoming call.  Don't know
if this pertains to any behavior we're currently seeing in the
net, though.

Ken

P.S.  The developers also tell me that having an idle timer
"stretches" the spec.  I am NOT going to split hairs and get into
semantic discussions over when a VC is "active" and when it is
"idle!"
-----------[000150][next][prev][last][first]----------------------------------------------------
Date:      Fri, 16-Oct-87 11:32:46 EDT
From:      lekash@ORVILLE.ARPA (John Lekashman)
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet - Hyperchannel Gateway

This box is done by NSC from scratch.  You will get a real evaluation
of it in a couple of months after we get one.  It's scheduled to
arrive here in a finite number of weeks (like 3 from now).

As to ether-hyper gateways, one is still stuck with using
either a sun or a vax.  Some folks at Pittsburgh
(Melinda Shore at PSC if you missed the last message) are busy
doing it for a microvax.

There was a bit of harshness toward NSC in that previous message; they
are trying, it's just that IP is a relatively new technology
to them.  After all, they're coming out of the IBM mainframe
device connect area.  They have expressed to me a commitment to 
really follow through on IP support.  Anyone who has worked in
the commercial world knows that it takes some period of time 
to come out with new products.  We received this
commitment about eleven months ago. I think that this is fairly
fast response, especially on something like new hardware.  

					john

-----------[000151][next][prev][last][first]----------------------------------------------------
Date:      Fri, 16-Oct-87 15:03:39 EDT
From:      chris@gargoyle.UChicago.EDU (Chris Johnston)
To:        comp.protocols.tcp-ip,comp.unix.wizards,comp.dcom.lans
Subject:   Re: Ethernet - Hyperchannel Gateway

In article <1822@celtics.UUCP> roger@celtics.UUCP (Roger B.A. Klorese) writes:
>Does anyone know of a product providing an Ethernet-to-Hyperchannel
>gateway?  I'm looking for a "black box" to sit on an ethernet and
>pass TCP-IP and its friends in both directions.

Yesterday I was reading the Oct 1 Electronics.  (I'm way behind on my
reading.)  It had an article about a new ethernet hyperchannel router
from Network Systems.  Fully configured ($50K) it will handle 8
ethernets and 2 hyperchannels.  They claim it will handle 10,000
packets per second.  The EN641 is an IP router.  The EN60X is a
bridge.

The same issue of Electronics says AMD has announced a 200 Mbit/sec
FDDI chip set.

cj

-----------[000152][next][prev][last][first]----------------------------------------------------
Date:      Fri, 16-Oct-87 18:21:46 EDT
From:      ddk@beta.UUCP (David D Kaas)
To:        comp.protocols.tcp-ip,comp.unix.wizards,comp.dcom.lans
Subject:   Re: Ethernet - Hyperchannel Gateway



	Network Systems Corp. has products that will connect Ethernet
to Hyperchannel.  EN60x Bridge for Ethernet to Ethernet over a Hyperchannel
link.  IP router EN641 as an Ethernet to Hyperchannel gateway.  They support
tcp/ip for some hosts (vm, mvs, vms...)
I think these have been released in just the last few weeks.  They are supposed
to be up and working?

-- 
Dave Kaas - D.O.E. Richland, Wa.
	e41126%rlvax3.xnet@lanl.gov

-----------[000153][next][prev][last][first]----------------------------------------------------
Date:      Fri, 16-Oct-87 22:49:20 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  ARPAgrams

Vint,

Details, he sez. How 'bout an extended X.25 header like the amateur
packet radio AX.25 where the extended address fields encode the
ARPANET address and where flow control can be effected by the
usual LAP-B window? No call-control packets. Separate implied
window for each distinct ARPANET address, as now. Sure, this is
likely to turn some sheets to the wind, but there are lots of
variations and you can stand upwind if necessary.

Dave

-----------[000154][next][prev][last][first]----------------------------------------------------
Date:      Fri, 16-Oct-87 23:07:10 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  Ethernet - Hyperchannel Gateway

Melinda,

Linkabit built a HYPERchannel driver for the fuzzball. I don't think you
really want to know that, since a Craymonster could instantly reduce
a silly LSI-11 to a husk, but I thought it would be fun to establish
the fact. Come to think of it, the uVAX would lose a little juice as
well.

Dave

-----------[000155][next][prev][last][first]----------------------------------------------------
Date:      Fri, 16-Oct-87 23:30:51 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  X.25 problems

Ken,

I'm not particularly persuaded by the "non-spec" argument. Let them eat
resets. Holding a VC in reserve may be a useful workaround, but it
does not get at the heart of the problem. Granted, if the VC pool
were exhausted in the PSN, there is not much you can do; however,
if the pool were exhausted in the host, an incoming-call packet
could still be passed to the firmware, which could then close a
VC on its own. The same trick could work in the PSN, which would
close the VC with a dirty-smelly reset.

Aitcha glad we have X.25 to bash? We could always gang up on the TCP
reliable-close issue...

Dave

-----------[000156][next][prev][last][first]----------------------------------------------------
Date:      Sat, 17 Oct 1987  18:25 MDT
From:      Keith Petersen <W8SDZ@SIMTEL20.ARPA>
To:        packet-radio@EDDIE.MIT.EDU, TCP-IP@SRI-NIC.ARPA
Cc:        Info-IBMPC@C.ISI.EDU, Info-HZ100@RADC-TOPS20.ARPA
Subject:   KA9Q Internet TCP/IP for MSDOS files available from SIMTEL20
Now available via standard anonymous FTP from SIMTEL20...

Filename		Type	 Bytes	 CRC

Directory PD:<MSDOS.KA9Q-TCPIP>
ARCFILES.DIR		ASCII	 14494  36FCH <--listing of all ARC dirs
NET_BM.ARC		BINARY	 23563  0FD9H
NET_DES.ARC		BINARY	 18954  2373H
NET_DOC.ARC		BINARY	106817  3824H
NET_EXE.ARC		BINARY	 90321  BA0DH
NET_READ.ME		ASCII	  1844  62EBH
NET_SRC.ARC		BINARY	208746  D2DBH
TNC_ASH.ARC		BINARY	 57272  72ADH
TNC_LDR.ARC		BINARY	 15810  695BH
TNC_TNC1.ARC		BINARY	 33640  588FH
TNC_TNC2.ARC		BINARY	 49768  84D2H

All ARCs here are SEA-compatible.  Here is the NET_READ.ME file:

Welcome to the KA9Q Internet Package!

*** WARNING:  The 870829.0 release was cobbled together during the paper
*** presentations at the 6th ARRL Digital Conference in Redondo Beach, CA.
*** It therefore has not been tested nearly as well as the previous release,
*** 870526.0... therefore, don't throw away your old disks until you've run
*** this long enough to be happy with it!  Problem reports always welcome.

The .ARC files that make up the distribution are compressed archives that
were created with the ARC program produced by System Enhancement Associates.

The distribution is structured based on the directory structure used to
create the software:

NET_BM.ARC	.\BM	- sources to Bdale's Mailer, and Gerard's Gateway
NET_DES.ARC	.\DES	- an implementation of DES (Data Encryption Standard)
			  for possible use in validating logins, etc.
NET_DOC.ARC	.\DOC	- all of the doc files
NET_EXE.ARC	.\EXE	- executable programs and config files
NET_SRC.ARC	.\SRC	- sources to NET.EXE

TNC_ASH.ARC	.\TNC\ASH	- KISS for the VADCG and ASHBY boards
TNC_LDR.ARC	.\TNC\LDR	- N4HY's KISS downloader in Turbo Pascal
TNC_TNC1.ARC	.\TNC\TNC1	- KISS for the TAPR TNC-1 and clones
TNC_TNC2.ARC	.\TNC\TNC2	- KISS for the TAPR TNC-2 and clones


Whatever you do, *PLEASE* don't unpack all of the .ARC files in one directory,
as there are duplicate names all over the place... Makefiles, README files,
etc.

After unpacking, look for a README file in each archive.  Read this first,
before you do *anything* else.  Some are just informative, some are very
important.

Finally, we're constantly striving to improve this software, and the 
distribution as a whole.  Comments may be forwarded to Bdale Garbee, N3EUA.
Several of the Doc files include info on how to reach me...

Above all, HAVE FUN!

73 - Bdale, N3EUA
-----------[000157][next][prev][last][first]----------------------------------------------------
Date:      Sat, 17-Oct-87 16:43:10 EDT
From:      alan@mn-at1.UUCP (Alan Klietz)
To:        comp.protocols.tcp-ip
Subject:   Re: supdup protocol and local editing

In article <8710140650.AA03966@ucbvax.Berkeley.EDU> dan@WILMA.BBN.COM writes:
<
<To edit a file, I sent over the lines I wanted
<to change, edited them for awhile, then sent them back.  The
<"protocol" across the line "operated on lines, not characters," and
<was about as lightweight as you could get: it was the Unix "ed" line
<editor command set!  To edit a file, I would invoke ed on it, print a
<range of lines, edit them locally, then send a change command to put
<them back in the ed buffer. 
<
<Sending over integral numbers of lines was just right, since that's
<what your local editor wants to deal with anyway.  It also means you can
<easily handle having two different representations for "lines"
<(records vs. streams, crlf vs. lf, etc.)  on the two machines.  Also,
<if the backend "editor" can mark the beginning and end of each region
<sent to the local micro in a way which does not change as lines are
<added or deleted outside each region (which ed had), then you can
<trivially have independent windows on the same file at the same time
<with virtually no local "intelligence".
<

I wrote a program named "rvi" which does just this, albeit you need
a computer instead of just a smart terminal.   It fetches multiple
screenfuls of text into a sliding window on the file.  (See Volume 7
of the source archives.)

While this scheme gives you low overhead to the host, and avoids
the need to have single-character I/O and short RTTs, it loses 
on slow networks (although some fancy window-prefetching can mitigate
this somewhat.)    It also loses on flexibility (there are some vi
commands that you cannot bloody do with ed).  And you don't get the
file recovery capability of real vi.

Of course, you could build a smart remote editor server and fix some
of these problems.  But for portability, remember that ed and ed
clones are everywhere.

-Al  (alan@uc.msc.umn.edu)

-----------[000158][next][prev][last][first]----------------------------------------------------
Date:      Sun, 18-Oct-87 11:29:06 EDT
From:      geoff@eagle_snax.UUCP ( R.H. coast near the top)
To:        comp.protocols.tcp-ip
Subject:   Re: Authentication

I faced this problem with PC-NFS: how do you "log in" to a PC and
acquire credentials to use over the wire. Finding nothing in the
standards world, I rolled my own: a fairly trivial RPC-based
implementation. The server side code ("pcnfsd") is in the public
domain, and a number of people are using it. It doesn't use
encryption (useless unless you're going to do it ALL right: see
the "Secure RPC" paper from one of the recent Usenix's) but it
does use a rot13-like scrambling to discourage casual browsers.
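
As a rough illustration only (this is NOT the actual pcnfsd algorithm,
just the flavor of a reversible, non-cryptographic scramble of the kind
described above):

    #include <stdio.h>

    /* rot13 on the letters of a string; applying it twice        */
    /* restores the original.  Assumes ASCII.                      */
    static void scramble(char *s)
    {
        for (; *s; s++) {
            if (*s >= 'a' && *s <= 'z')
                *s = 'a' + (*s - 'a' + 13) % 26;
            else if (*s >= 'A' && *s <= 'Z')
                *s = 'A' + (*s - 'A' + 13) % 26;
        }
    }

    int main(void)
    {
        char buf[] = "Secret";
        scramble(buf);
        printf("%s\n", buf);   /* prints "Frperg" */
        scramble(buf);
        printf("%s\n", buf);   /* prints "Secret" */
        return 0;
    }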

-- 
Geoff Arnold, Sun Microsystems       | "Picture a bright blue ball,
East Coast Division (home of PC-NFS) |  Spinning, spinning free;
UUCP: {ihnp4,decwrl,...}!sun!garnold |  Dizzy with possibility...
ARPA: garnold@sun.com                |  Ashes, ashes, all fall down..."

-----------[000159][next][prev][last][first]----------------------------------------------------
Date:      Mon, 19-Oct-87 00:18:20 EDT
From:      JBVB@AI.AI.MIT.EDU ("James B. VanBokkelen")
To:        comp.protocols.tcp-ip
Subject:   Re: slip for vms?

At last March's TCP/IP conference in Monterey, David Kashtan of SRI said
that he wanted to put SLIP into the SRI VMS TCP/IP.  I don't know if he
did so.

jbvb

-----------[000160][next][prev][last][first]----------------------------------------------------
Date:      Mon, 19-Oct-87 02:20:24 EDT
From:      minshall@OPAL.BERKELEY.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: A Standard for the Transmission of IP Datagrams over IEEE 802 Networks

Hi.

For a while I've thought that I should someday try to understand 802.x.

With the proposed RFC in hand, as well as the IEEE 802.2 and 802.5 and
the IBM Token Ring Network Architecture Reference, I thought this weekend
would be as good a time as any.

(Before beginning I should mention that I think that "token ring" is
what every campus should be installing.  It is precisely for that reason
that I am picking on the token ring section not a little.)

802.2 -

First off, even if we are only supporting type 1 (connectionless) service,
we MUST to support XID (Exchange ID) and TEST in addition to UI (Unnumbered
Information).

Our goal is stated to be "all equipment using IP or ARP on 802.[345] networks
will interoperate".  However, there are variables in each of these networks:
which speed they run at, and the size of the MAC-level address (16 bits or
48 bits).  Probably we need to suggest that each 3-tuple (802.x, speed,
address size) should interoperate on the same wire.

There is a nice picture (page 3) of DSAP=K1 SSAP=K1.  Unfortunately,
this doesn't account for the fact that the low order bit of the SSAP
distinguishes between "command" and "response".  (It IS the low (ARPA)
order bit, isn't it?  On page 9 we seem to say something else.)

On page 5, "Implementations are encouraged to support full-length
packets".  I would like to see this stronger.  Just because the IP packet
(plus MAC/LLC stuff) is N bytes is no reason to suppose that some
gateway/bridge/whatever won't pad out the packet to N+1 bytes (say because
N+1 is odd, and the DMA chip works on halfwords), where N+1 is still a
legal packet size.  Anyway, I think that maximum length packets should
be mandatory.

802.5 -

There is a maximum packet size for 802.5 networks.  Basically, there is
a timer with a DEFAULT value of 10ms (THT).  "Default" means that the
degrees of freedom of 802.5 (or 802.X, I suppose) networks is worse than
one might naively think.  4E6*10E-3/8 = approx 5,000 bytes (with a little
more to be subtracted due to hardware delays, etc.).

Now, frame format.  The MAC header is 2 bytes (SD and AC), 1 byte (FC),
2 addresses (4 bytes total or 12 bytes total), a CRC (4 bytes), and
2 bytes (ED and FS).  If the packet is using source routing, then
there are 2 bytes of "routing control", plus up to 16 bytes (2 bytes
* 8 entries) of "segment numbers".  Without source routing, the MAC
header is 13 or 21 bytes in length; with source routing the MAC header
is between 15 bytes and 39 bytes in length.  (Aren't all these odd
numbers wonderful!)

Now I have a question.  A year (or two) back, the issue of source routing
was very very controversial in the halls of IEEE 802.X.  Is this still
the case?  Unfortunately, the IEEE documents I have are demonstrably old
(eg, they don't mention SNAP at all, plus the covers have become discolored).

I'm not wild about embracing source routing if source routing is still
controversial within the 802 committee.

Note, by the way, that the PRESENCE of the Routing Information Field (RIF)
in an 802.5 packet is indicated by a bit in the Source Address field (what
we might think of as the ethernet source address).  Sigh.

We say "IP broadcasts ... must be sent as 802.5 all ring broadcasts".
Again, we are up against the source routing wall:  "all ring broadcasts"
are a feature of source routing.

(The whole idea that "source routing" is like a MAC-bridge is a bit dubious
to me.  I'm used to thinking of the TransLan as a MAC-bridge.  The analogy
is that to get the TransLan to propagate an ethernet broadcast packet,
I have to change the packet to include information that says "TransLan, please
forward this".  And, to get the TransLan to forward a non-broadcast packet,
I need to change the packet by addressing it to the bridge, and include
in the packet the route the packet should take.  Doesn't sound very "MAC-
level" to me.)  (And then add to that the fact that, according to the
proposed RFC, certain of these MAC-level bridges (in quotes) constrain
the conversation to packet sizes smaller than either of the two bridged
networks or the two endpoints!)


Appendix on Numbers -

Gasp.  My head hurts.  However, I think (think) that the IEEE *bytes*
are transmitted/displayed just like ARPA bytes.  So, IEEE 0x810100
would be ARPA 129.128.0.  However, I think that the XID response should
be IEEE 0x818000 == ARPA 129.1.0.

To add to the fun, it appears that the IEEE 802.5 (5 5 5 5 5) bit ordering
is ARPA-like.  So, an IEEE 802.5 0x80 is an ARPA 0x80 is an "IEEE" 0x01.

Yours,
	Greg Minshall

ps - Someday, I'd like to see us look at Type 2 (connection-oriented) service.
IBM, which uses Type 2, gets fairly impressive numbers on performance of
their networks.

Also, someday, I'd like us to think about allowing one LLC packet to carry > 1
IP/ARP packet.  I even know the first place I'd put this to use:  I need to
ARP on an ethernet/802.3 network; I'd like to use trailers.  So, I need to
send out 4 separate packets.  It would be nice to send one.  Also, this
might be nice from (and to) gateways.

-----------[000161][next][prev][last][first]----------------------------------------------------
Date:      Mon, 19-Oct-87 10:18:28 EDT
From:      richb@dartvax.UUCP (Richard E. Brown)
To:        comp.protocols.tcp-ip
Subject:   Re: supdup protocol and local editing

At Dartmouth, we have been using just such a front/back end
editing scheme for 5 years.  The initial implementation was a
Z-80 box with 32K RAM and 24K ROM which acted as a screen manager
for a simple terminal and talked to a back-end editor on a host.
This setup did two things:  a) it provided "scrollback" of the last n
lines of your session (limited by the 32K RAM), and b) cached a
copy of the host's file, to provide local editing.  Editing
changes were shipped back to the host, line-by-line.  We had
several hundred around campus, with good results.

We now have a Macintosh version which uses the same back end
editor (now implemented on Unix, VAX/VMS, and DCTS, with VM/CMS
to come).  The Mac provides much more scrollback, and places the
screen editing text in a separate window.  It uses the standard
Macintosh cut-copy-paste paradigms.

Rich Brown                           Telephone:   603/646-3648
Manager of Special Projects          E-Mail: richard.e.brown@dartmouth.edu
Dartmouth College                            richard.e.brown@dartcms1.bitnet
Kiewit Computer Center                       richard.e.brown@dartvax.uucp
Hanover, NH 03755                    AppleLink:  A0183

-----------[000162][next][prev][last][first]----------------------------------------------------
Date:      Mon, 19-Oct-87 10:58:56 EDT
From:      ras@rayssdb.RAY.COM (Ralph A. Shaw)
To:        comp.protocols.tcp-ip,comp.unix.wizards,comp.dcom.lans
Subject:   Re: Ethernet - Hyperchannel Gateway

We at Raytheon have had some experience with the Hyperchannel
products, in particular the BC601, or EN601 as it is now known.
While I do not speak for the larger group of sites within the company,
I'll try and bring up some of the problems we think we have run into
with the EN601 product.  This is merely using the Hyperchannel bus
as a carrier, and allowing ethernets to talk to each other, and is not
performing any type of gateway/protocol translation facility between
the TCP, DECNet, XNS or other protocol machines and the NETEX/BFX
machines.

We have a number of different locations scattered throughout Mass
and this site in RI that are interconnected via both A-Hyperchannel
and B-Hyperchannel equipment over T1 lines.  Some of the locations
are tied together with Bridge GS3/M's, some with Vitalink Translan's,
some with the AT&T ISN "EBIM" adapters, and 5 locations with the
NSC EN601's, all presumably as part of an evaluation and/or production
installation, both of which add up to sites in at least 10 towns on
an extended ethernet; (total net population: 300+, 70% DECNet)

To make a long story short, many of the problems we have had have
been related to having such a widely spread out extended LAN.  One
of the failings of the EN601 is the lack of visibility into what is
going on, in the way of maintenance and diagnostic aids as an ethernet
bridge, compared to the Bridge/Vitalink style of products.  Another
problem may result from an inconsistency of loop detection algorithms
between the different vendors' bridges (while Bridge/Vitalink are
supposed to cooperate).  Yet another situation (which is still unclear
as to its impact) is the fact that at least multicast packets are
batched up into a 4K buffer, and then VC-transferred to each other
EN601 in sequence, imposing quite a delay when the BFX traffic is
going on (making for very choppy telnet sessions).

Anyway, the 601's are still here, and NSC is supposedly working on the
problems we have with them, and they have improved them dramatically
in the time since we first got them in (we were an early Beta-test),
but I think that no matter what they do, the BC601 will always be
compromised by the fact it has to time-slice over the HyperChannel.
-- 
Ralph Shaw, 		
Raytheon Co.,		Submarine Signal Division
Portsmouth, RI		02871
ras@rayssd.RAY.COM  or  ihnp4!rayssd!ras

-----------[000163][next][prev][last][first]----------------------------------------------------
Date:      Mon, 19-Oct-87 11:05:26 EDT
From:      shore@MORGUL.PSC.EDU (Melinda Shore)
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet - Hyperchannel Gateway

> There was a bit of harshness at NSC in that previous message, they
> are trying, its just that IP is a relatively new technology
> to them.  

I'm not sure which previous message you're referring to.  It's quite
possible that it's mine.  I'm not annoyed with them because they haven't
been supporting IP, I'm annoyed with them because our support from them
has been such a problem.  We've wasted a lot of time because, when given
a description of what we're trying to do, they've come up with incorrect
configurations and sold us the wrong stuff.  Also, this list is the
first place I've heard of their Hyperchannel/Ethernet bridges, and we
had a meeting with some of their very highly placed marketing and
technical people (V.P.s, etc) to discuss their future products and
directions, and they never mentioned them.  We've had other problems
with them as well, but that's another story ...

Melinda Shore
Pittsburgh Supercomputing Center

-----------[000164][next][prev][last][first]----------------------------------------------------
Date:      Mon, 19-Oct-87 12:56:55 EDT
From:      robert@pvab.UUCP (Robert Claeson)
To:        comp.protocols.tcp-ip
Subject:   PC-NFS NET PCNET command

I just entered the command "NET" on a PC with PC-NFS. It showed me the
'usage' list. Among the options there was a "PCNET [ON | OFF]". I couldn't
find any reference to this in the manual. Anyone who knows what this
does? I have PC-NFS 1.0.
-- 
Robert Claeson, System Administrator, PVAB, Box 4040, S-171 04 Solna, Sweden
eunet: robert@pvab
uucp:  sun!enea!pvab!robert

-----------[000165][next][prev][last][first]----------------------------------------------------
Date:      Mon, 19-Oct-87 14:52:39 EDT
From:      peter@julian.UUCP
To:        comp.dcom.modems,comp.protocols.tcp-ip
Subject:   Codenoll Fiber modems

We have a couple of buildings that happen to have some spare 50 micron
fiber cable between them.  We want to use this cable to link a set of
ethernets via IP gateway boxes (probably Cisco systems).  One product
that was recommended to us (by Cisco systems) was the model 3030a
modem from Codenoll (us$795 each).  This seems like a very reasonable
price for a 10Mb/sec modem.  I'm looking for sites that are using
these modems to get some idea about how well they work.  Anyone using
them?
-- 
Peter Marshall, Data Comm. Manager
CCS, U. of Western Ontario, London, Canada N6A 5B7
(519)661-2151x6032 
pm@uwovax.BITNET; pm@uwovax.uwo.cdn; peter@julian.uucp; ...!watmath!julian!peter

-----------[000166][next][prev][last][first]----------------------------------------------------
Date:      Mon, 19-Oct-87 16:36:03 EDT
From:      bdale@winfree.UUCP (Bdale Garbee)
To:        comp.protocols.tcp-ip
Subject:   Re: Re: supdup protocol and local editing

In article <8710140650.AA03966@ucbvax.Berkeley.EDU> dan@WILMA.BBN.COM writes:
>Sending over integral numbers of lines was just right, since that's
>what your local editor wants to deal with anyway.  It also means you can
>easily handle having two different representations for "lines"
>(records vs. streams, crlf vs. lf, etc.)  on the two machines.  Also,
>if the backend "editor" can mark the beginning and end of each region
>sent to the local micro in a way which does not change as lines are
>added or deleted outside each region (which ed had), then you can
>trivially have independent windows on the same file at the same time
>with virtually no local "intelligence".

I spend a lot of time each day using an HP terminal tied into an HP3000 
system dealing with HP's corporate mail system.  The editor in the mail
system works in a very similar fashion, but I find it nearly unbearable.

I guess it depends on what you are used to, but using terminal-level
intelligence to edit portions of files/messages is a problem, particularly
when you want to clip and yank hunks of text that cross the boundaries of
what you have in the local terminal.  In particular, all of the systems I've
used of this type require contortions to insert a large hunk of text in the
middle of a file/message.

The system described (using ed commands and a smart terminal) is a neat trick,
but I'd like to see us avoid designing a protocol based on that model.
-- 
Bdale Garbee, N3EUA			phone: 303/495-0091 h, 303/590-2868 w
uucp: {bellcore,crash,hp-lsd,ncc,pitt,usafa}!winfree!bdale
arpa: bdale@net1.ucsd.edu		packet: n3eua @ k0hoa, Colorado Springs
fido: sysop of 128/19 at 303/495-2061, 2400/1200/300 baud, 24hrs/day

-----------[000167][next][prev][last][first]----------------------------------------------------
Date:      Mon, 19-Oct-87 18:34:50 EDT
From:      larry@wiley.UUCP (Larry Gouger)
To:        comp.sys.mac,comp.protocols.appletalk,comp.protocols.tcp-ip
Subject:   Mac Serial Line IP Driver?

I am in the process of writing a program to remotely demonstrate
our high speed text search machine.  I have chosen the Macintosh
as the host of this demo because of its MMI standards and because
it is easily transported.

I have chosen to use tcp/ip communications and the SLIP protocol to
manage the communications line.  I chose tcp/ip because that is the
way clients interact with our search machine now, and I am hoping that
the Mac MMI can easily be ported to an Ethernet based connection.

Now for my question...

1) Has anyone implemented a SLIP driver for the Macintosh?

   ( I am familiar with Mikel Matthews' port of a PC based SLIP
     program.  It is difficult to integrate smoothly into the
     Mac's Event Driven program style. )

2) Has it been implemented in such a way as to work with any (Apple's?)
ethernet drivers?  That is to say, does it appear as an additional interface?

I am familiar (somewhat) with SLIP on the Sun, and it is implemented as
an additional interface allowing most of the protocol work to be done by
existing network drivers.
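
(For what it's worth, the framing part of SLIP is tiny, which is one
reason it ports so easily.  Purely as a sketch, with slip_putc() invented
here as a stand-in for "write one byte to the serial line", and with the
escape constants being the ones the common Unix implementations use, the
sender side is roughly:

    #define SLIP_END     0300   /* frame delimiter */
    #define SLIP_ESC     0333   /* escape character */
    #define SLIP_ESC_END 0334   /* escaped END */
    #define SLIP_ESC_ESC 0335   /* escaped ESC */

    extern int slip_putc();     /* assumed: writes one byte to the line */

    slip_send(p, len)
    register unsigned char *p;
    register int len;
    {
        slip_putc(SLIP_END);            /* flush any accumulated line noise */
        while (len-- > 0) {
            switch (*p) {
            case SLIP_END:
                slip_putc(SLIP_ESC);
                slip_putc(SLIP_ESC_END);
                break;
            case SLIP_ESC:
                slip_putc(SLIP_ESC);
                slip_putc(SLIP_ESC_ESC);
                break;
            default:
                slip_putc(*p);
            }
            p++;
        }
        slip_putc(SLIP_END);            /* end of datagram */
    }

The receive side is the same trick in reverse.)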

thanks,

Larry Gouger
(213) 297-3542 or see .signature below for UUCP routing...

-- 
-larry.					---------------------------------------
					Larry Gouger  TRW Inc., CoyoteWorks
					Redondo Beach, CA
					        ...!{trwrb,cit-vax}!wiley!larry

-----------[000168][next][prev][last][first]----------------------------------------------------
Date:      Mon, 19-Oct-87 22:44:28 EDT
From:      JBVB@AI.AI.MIT.EDU ("James B. VanBokkelen")
To:        comp.protocols.tcp-ip
Subject:   Re: ... Transmission of IP Datagrams over IEEE 802 ...

The bridge vendor whose bridges require source routing is IBM.  I do not
know whose bridges restrict the size of the packet to 508 bytes of IP data,
but I sincerely hope it isn't IBM.

You see, I don't really expect IBM to notice that their product has this sort
of mis-feature, and I do expect them to sell a great deal of whatever it is,
mis-feature or not, so my own code is likely to have to live with whatever
they do (like MS/DOS).

jbvb

-----------[000169][next][prev][last][first]----------------------------------------------------
Date:      Tue, 20-Oct-87 11:08:37 EDT
From:      nar@wvucsb.UUCP (Narendar Reddy)
To:        comp.protocols.tcp-ip
Subject:   computer protocols --TCP/IP and SLIP

Does anyone know if there is any TCP/IP, along with SLIP, running under
Unix System V Release 3.1?  These are available under 4.3 BSD Unix,
but I am having a hard time finding one which runs under Unix System V.
I would very much appreciate it if anyone could give me some information
on where we can buy it.

TCP/IP should run on AT&T 3b2/400s and talk to VAX/750 through serial
line interface.  Two AT&T 3b2/400s are connected by 3bnet kit similar
to ethernet kit.

Thanks in advance.  

-----------[000170][next][prev][last][first]----------------------------------------------------
Date:      Tue, 20-Oct-87 15:59:39 EDT
From:      ron@topaz.rutgers.edu (Ron Natalie)
To:        comp.dcom.modems,comp.protocols.tcp-ip
Subject:   Re: Codenoll Fiber modems

We put a pair of FiberCom Whisperlan transceivers into operation
recently.  These are running over about 2000' of fiber between
two buildings here at Rutgers.  One end is plugged directly into
an IP/DECNET Gateway (CISCO) and the other is plugged into a
TCP multiport box which in turn has the central gateways (a collection
of several CISCO boxes and a VAX 750).  The boxes cost about $850
and we haven't had any problem with them.  They have the advantage
(unlike the Codenoll, I believe) of allowing multiple units to be
ringed.  That is, you can connect three or more of these boxes
together using fiber rather than using them as a point to point
link.  This ends up saving you the price and the overhead of a
board in the CISCO box in the middle of the net having to copy
packets bridging across the two fiber segments.

There are a number of others coming on the market including one
from Optical Data Systems that claims to be selling for under $600.
We haven't seen them yet.

-Ron

-----------[000171][next][prev][last][first]----------------------------------------------------
Date:      Tue, 20-Oct-87 17:05:01 EDT
From:      smb@ulysses.homer.nj.att.com (Steven Bellovin)
To:        comp.protocols.tcp-ip,comp.protocols.tcp-ip.ibmpc
Subject:   TCP/IP wanted for the ATT 6300+

I'm looking for a TCP/IP that will run under the Unix system (or at worst,
under Simultask) on the ATT 6300+.  The ideal medium is Ethernet, though
Starlan is probably acceptable.

Thanx, and please reply by mail.

		--Steve Bellovin
		{ucbvax,gatech,most of BTL}!ulysses!smb
		smb@att.arpa (maybe)

-----------[000172][next][prev][last][first]----------------------------------------------------
Date:      Tue, 20-Oct-87 20:01:27 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Bogon sightings

Folks,

A couple of days ago, while working on modifications to some intricate
routing algorithms, a bogus squawk for net 0.0.0.0 escaped our swamps
and landed at the core gateways. The squawker got plugged pretty quick,
but may have uncorked some pretty strange bogons in the process. First,
some hosts, in particular a UTexas dude, began believing the squawker
10.2.0.96 was the gateway to Oz and other wondrous places, so began
sending mail, domain-name requests and other stuff to that address.
All this wouldn't have mattered much, since the squawker should advise
all squawkees via ICMP Unmentionable messages to do otherwise.

Alas, the squawker had a bug which simply accepted all traffic landing
there, rather than refuse or redirect it. That was caught very quick,
you might surmise, but not before a lot of domain-name requests to
the BRL rootservers appeared (!!) and were in fact dutifully answered
correctly. A few mail messages landed also, but were automatically
forwarded to the correct destinations (some recipients are not going to
believe the return path!). All this was pretty embarrassing, but inexplicable,
unless the bogon released and then contained a couple of days ago were
implicated and the implication that 0.0.0.0 was "default to anywhere"
persisted for a surprisingly long time.

Now the good part. Today I saw an RWHO packet (UDP port 513) appear at the
squawker with source address 1.1.1.1 and destination 1.0.0.0. Er, ah.

Yoboy. Please send in the UFO team. I thought you might get a chuckle
out of this. Me, I'm somewhere between hilarity and catatonic shock.

Dave

-----------[000173][next][prev][last][first]----------------------------------------------------
Date:      Wed, 21-Oct-87 00:31:43 EDT
From:      henry@utzoo.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: TCP checksum unrolling

> ... Some C compilers won't accept the wierd syntax below; or maybe I
> should point out, as you wretch on the floor, that there is at least ONE
> c compiler that DOES accept this syntax.

This particular piece of ugliness (switch labels inside a loop, for loop
unrolling) is known as Duff's Device, and is legitimate C that any correct
C compiler is supposed to accept.
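
For anyone who hasn't run into it, here is a sketch of the idiom, applied
to a 16-bit summing loop of the sort a checksum routine does (the names
are invented and it assumes cnt > 0; the original device was written for
copying to a memory-mapped output register):

    unsigned long
    sum16(p, cnt)
    register unsigned short *p;
    register int cnt;
    {
        register unsigned long sum = 0;
        register int n = (cnt + 7) / 8;

        /* the switch jumps into the middle of the unrolled body to
           handle the leftover words, then the do-while takes over */
        switch (cnt % 8) {
        case 0: do {    sum += *p++;
        case 7:         sum += *p++;
        case 6:         sum += *p++;
        case 5:         sum += *p++;
        case 4:         sum += *p++;
        case 3:         sum += *p++;
        case 2:         sum += *p++;
        case 1:         sum += *p++;
                } while (--n > 0);
        }
        return (sum);
    }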

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

-----------[000174][next][prev][last][first]----------------------------------------------------
Date:      Wed, 21-Oct-87 00:34:19 EDT
From:      minshall@OPAL.BERKELEY.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: ... Transmission of IP Datagrams over IEEE 802 ...

In a recent posting I asked an essentially political question about
the status of source routing within the IEEE 802 committees.

Today I checked with someone who has been attending some of the 802
meetings.  She directed me to Daniel Pitt, of IBM Raleigh (dreaded
home of SNA) as being a good contact on the 802.5 committee.  I then
spoke with Daniel Pitt.

From what my two informants of today said, the situation appears to be
that the 802 committee decided that the 802.5 committee (which was the
only one which had come forward with the source routing proposal) could
do whatever it wanted in terms of bridges, AS LONG AS their "product"
(ie: end result) would inter-operate with the "internetworking"
standardized on by the 802.1 committee (see below).

In addition, the 802.1 committee was charged with coming up with a proposal
for "internetworking", with the proviso that the 802.1 proposal had to
be "shown" to work with all the MAC-types (ie: 802.3, 802.4, 802.5).

At this point, 802.5 and 802.1 are going ahead with their separate
but not antithetical proposals (and each proposal is, apparently,
in the "bits and bytes" stage).

Daniel Pitt, though understandably unwilling to predict the future
success of any proposed standard, mentioned that the current 802.5
work looks like source routing as documented by IBM in the "Token-Ring
Network Architecture Reference" book.

Greg Minshall

ps  -  802.1 is a group which is supposed to tie together the work of
the 802.2 committee (which is mostly a Data Link Layer-entity) and the
three MAC committees (802.3,4,5, the physical layer).  802.1 is also
supposed to explain how the 802 family fits into the ISO OSIRM, how
network management works, and how "internetworking" happens.

-----------[000175][next][prev][last][first]----------------------------------------------------
Date:      21 Oct 87 12:14:00 PST
From:      <art@acc.arpa>
To:        "tcp-ip" <tcp-ip@sri-nic.arpa>
Cc:        ietf@gateway.mitre.org
Subject:   DDN X.25 Logical

Has anyone besides us tried to use DDN X.25 Logical Addressing?

We would be interested in talking to other folks who are interested
in, or have some experience with, X.25 logical addressing.

					Art Berggreen
					art@acc.arpa

------
-----------[000176][next][prev][last][first]----------------------------------------------------
Date:      Wed, 21-Oct-87 14:26:32 EDT
From:      lamaster@pioneer.arpa (Hugh LaMaster)
To:        comp.protocols.tcp-ip
Subject:   Re: ... Transmission of IP Datagrams over IEEE 802 ...

I understood from previous postings that the encapsulation of
IP on 802.2 was to be compatible with 802.3 4 5 + (FDDI) etc.
I don't really understand the source routing issue, but I thought
that IBM planned to use it for SNA over 802.5 type stuff...No?  I
would hate to see multiple encapsulations for 802.2 depending
on whether the medium is 802.3 4 or 5 ...




  Hugh LaMaster, m/s 233-9,  UUCP {topaz,lll-crg,ucbvax}!
  NASA Ames Research Center                ames!pioneer!lamaster
  Moffett Field, CA 94035    ARPA lamaster@ames-pioneer.arpa
  Phone:  (415)694-6117      ARPA lamaster@pioneer.arc.nasa.gov

(Disclaimer: "All opinions solely the author's responsibility")

-----------[000177][next][prev][last][first]----------------------------------------------------
Date:      Wed, 21-Oct-87 16:14:00 EDT
From:      art@ACC.ARPA
To:        comp.protocols.tcp-ip
Subject:   DDN X.25 Logical


Has anyone besides us tried to use DDN X.25 Logical Addressing?

We would be interested in talking to other folks who are interested
in, or have some experience with, X.25 logical addressing.

					Art Berggreen
					art@acc.arpa

-----------[000178][next][prev][last][first]----------------------------------------------------
Date:      21 Oct 87 13:03:28 GMT
From:      cca!mirror!rayssd!brunix!nancy!sgf@husc6.harvard.edu  ( _/**/Sam_Fulcomer )
To:        tcp-ip@sri-nic.arpa
Subject:   Re: TCP/IP wanted for the ATT 6300+

Does anyone know of an IP implementation for the PC that will function
as a gateway?


-------------------------------------------------------------------------
		BITNET		sgf@BROWNCS
		CSNET		sgf@cs.brown.edu
		ARPANET 	sgf%cs.brown.edu@relay.cs.net
		UUCP		{ihnp4,allegra,decvax,princeton}!brunix!sgf
		TELECOM		401-863-3618
-----------[000179][next][prev][last][first]----------------------------------------------------
Date:      Wed, 21-Oct-87 22:24:40 EDT
From:      hutton@SCUBED.ARPA (Thomas Hutton)
To:        comp.protocols.tcp-ip
Subject:   Ip forwarding under Ultrix 2.0


Has anyone looked at or used IP forwarding under Ultrix 2.0?   It seems to
work only intermittently (sometimes it forwards - sometimes not).  I haven't
been able to pinpoint it exactly yet, although our Sun 3/160 running SunOS 3.4
does not seem to have the same problem.

Has anyone else seen this, or better yet, does anyone know of a patch/fix?



Thomas Hutton
Scubed

-----------[000180][next][prev][last][first]----------------------------------------------------
Date:      Thu, 22-Oct-87 01:34:30 EDT
From:      minshall@OPAL.BERKELEY.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: ... Transmission of IP Datagrams over IEEE 802 ...

Hugh,
	The catch is that, as defined by IBM and (apparently) by the
current work in 802.5, "source routing" is a MAC issue.  MAC, which
means Media Access Control (or some such) is separate from LLC (==
Logical Link Control).  802.2 is LLC; if source routing were a LLC
entity, then it would apply to all 802.X MAC layers.

	I *suppose* that some of the past controversy about source
routing was whether it belonged at the MAC layer, or at the LLC
layer.  To me, it seems it belongs at the LLC layer, since routing
information (in the IBM scheme) is acquired by an LLC-layer broadcast
(a TEST or XID command) which picks up (and returns) routing information.
Additionally, the IBM "SEND_MAC_DATA" primitive (which corresponds with
the 802.2 MA_DATA.request primitive) supplies, in addition to the
MAC address of the destination, the "source route" (previously
acquired).

	Anyway, source routing is purely an 802.5 issue.  The source
routing happens during 802.5 MAC encapsulation, not during 802.2 LLC
encapsulation.  The 802.2 encapsulation remains the same.

	What it does mean, I guess, is that the interface (in your
kernel, or wherever) between the LLC and MAC layers is different
with an 802.5 MAC than with an 802.3 or 802.4 MAC.

	Hope this sheds a bit more light.

Greg Minshall

-----------[000181][next][prev][last][first]----------------------------------------------------
Date:      Thu, 22-Oct-87 02:53:09 EDT
From:      guru@wuccrc.UUCP (Guru Parulkar)
To:        comp.protocols.tcp-ip
Subject:   References






I would very much appreciate it if somebody could point me to references
(RFCs, conference papers, journal papers, tech reports, etc) which 
support the following observations:
(Of course, these observations were made on this bboard in some context or
other, but I was not able to keep track of the references.)

1. Network protocols such as TCP/IP are integrated into operating
   systems in such a way that the maximum achievable throughput out of these
   protocols is less than 1 Mb/sec. In other words, even in a loopback
   situation, the maximum throughput is limited to a number like 1
   Mb/sec. 

2. Given a very high speed communication subnetwork (of the order of
   100 Mb/sec) and applications which can use such high bandwidths, it is
   not clear if TCP/IP architecture would be appropriate for such an
   environment.   

3. How much of the "less than optimal" performance of today's INTERNET
   can be attributed to the fact that it has 

     a.  links which still operate at low speeds, such as 9600 baud
     b.  multiply connected subnetworks


Thanks!

guru parulkar           wuccrc!guru@uunet.uu.net or parulkar@udel.edu
Washington University

-----------[000182][next][prev][last][first]----------------------------------------------------
Date:      Thu, 22-Oct-87 10:18:29 EDT
From:      ddp+@ANDREW.CMU.EDU (Drew Daniel Perkins)
To:        comp.protocols.tcp-ip
Subject:   ethernet chips


I'm endeavoring to build a high speed ethernet interface, and I need to
decide which chip set to use.  I'm sure lots of people out there have
opinions on which is currently the best.  If you've had any experience with
some of the current chips, please comment.  I will summarize to tcp-ip if I
get any responses.  The chips that I can think of off the top of my head are
the Intel 82586, AMD LANCE, Seeq, Fujitsu, Western Digital (?) and National
Semiconductor.  The attributes that I am looking for are:
1. High bandwidth.  Able to handle full 10Mb speed or at least VERY close to
it, like 9Mb maybe.
2. Low latency.  If I tell it to send a packet, it should be transmitted
ASAP.
3. Low host software overhead.  Host shouldn't have to deal with collisions,
etc.
4. Low relative memory bandwidth utilization, i.e. shouldn't make many
references to host memory other than actual packet data.
5. Good multicast support.  Host shouldn't have to look at every multicast
packet if it wants just a few.

Drew

-----------[000183][next][prev][last][first]----------------------------------------------------
Date:      Thu, 22-Oct-87 11:58:35 EST
From:      jas@MONK.PROTEON.COM (John A. Shriver)
To:        comp.protocols.tcp-ip
Subject:   ... Transmission of IP Datagrams over IEEE 802 ...

Source routing is not solely an internal 802.5 issue.  Unfortunately,
IBM (a/k/a 802.5) source routing IS an LLC issue, and in turn a
Network layer issue.

IBM source routing is implemented in 802.2 class 2 quite
transparently.  When the stations begin their virtual circuit, they
exchange XID messages.  These XID messages are sent as limited
broadcast, and serve as the needle to resolve a source route.  The
source route is stored inside the LLC connection block, and the class
2 application can be completely ignorant of it.  The arguments to
L_DATA_CONNECT.request remain the same for all three of the MAC
layers.  (This is the TRANSMIT.I.FRAME in the PC, you only supply the
data, it provides all LLC and MAC headers.)

The problem is that 802.2 class 1 cannot implement source routing
transparently, or does not in any current IBM implmentation.  It is
the responsibility of the LLC user (the NETWORK layer) to discover and
provide the source route.  Basically, there is an extra argument to
the L_DATA.request in 802.2 of "remote_source_route" ONLY in the case
of 802.5 as the MAC layer.  (This is the TRANSMIT.UI.FRAME in the PC,
you have to supply the LLC & MAC headers, including the Routing
Information Field.)
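
Concretely, and purely as an illustration with invented field names, the
class 1 request the network layer ends up having to build looks something
like this:

    /*
     * Hypothetical sketch of why class 1 leaks the source route up to
     * the network layer.  On 802.3/802.4 the caller hands the LLC just
     * a MAC address; on 802.5 the same request needs the routing
     * information field as well.
     */
    struct l_data_request {
        unsigned char  dst_mac[6];      /* destination MAC address      */
        unsigned char  dsap, ssap;      /* LLC addressing               */
        unsigned char *data;            /* the datagram itself          */
        int            len;
    #ifdef TOKEN_RING                   /* 802.5 only */
        unsigned char  rif[18];         /* routing information field,   */
        int            rif_len;         /*   discovered by the caller   */
    #endif
    };

The #ifdef is the layering violation in miniature: the caller only has to
know about the routing information field when the MAC underneath happens
to be 802.5.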

The reason class 1 is not transparent is very primitive: there is no
connection block to store the source route in.  IBM has no reason to
be concerned, since all of their "important" software uses class 2
(SNA, NetBios).  Unfortunately, the rest of the world (IP, DECnet,
XNS, Novell IPX, Banyan Vines) is using class 1.  So far, only IBM has
tried to implement source routing on any class 1 application, namely
their IP.

The most interesting point to note is that if any of the bridges on a
source route fail, everything goes crazy.  There is no dynamic
recovery implemented.  Your 802.2 class 2 virtual circuit will DIE.
Your class 1 application will have packets black holing.  The frame
copied bits will not provide the sender ANY indication that this is
happening.  (I don't even know if the first-hop bridge sets the frame
copied bit.)  This is distinctly inferior to the current state of the
art in IP, XNS, or DECnet routing, where the routers will find a new
route before the Transport layer times out.

This black-hole phenomenon will cause users to ask for more layering
violations.  As is, ARP has become hardware dependent for 802.5.  Now
people will want to flush the 802.5 ARP/source route cache when a TCP
connection is getting timeouts.  The problem is that one layering
violation usually causes more of them to propagate up through the
layers. 

-----------[000184][next][prev][last][first]----------------------------------------------------
Date:      Thu, 22-Oct-87 15:42:12 EST
From:      narten@PURDUE.EDU (Thomas Narten)
To:        comp.protocols.tcp-ip
Subject:   Re: Bogon sightings

Couple of observations.

1) Default routes tend to suck up everything, both valid and invalid.
First, there are obvious things like permutations of various broadcast
addresses. In subnetted environments, they also suck up packets for
subnetted networks that "don't exist". For instance, suppose that
gateway A-GATE is on the ARPANET and fronts as a gateway to a subnetted
network A-NET. A-GATE runs EGP and has full routing tables, and all
machines on A-NET have a default route pointing to A-GATE.

Now suppose someone starts sending packets to machine BOGUSHOST.
BOGUSHOST is on a nonexistent subnet. Normally, these packets get
forwarded to A-GATE via the default route. A-GATE doesn't have a
default route, so the packet is discarded.

Now, if j-random core gateway X advertises a default route, we get a
routing loop. Instead of dropping packets for BOGUSHOST, A-GATE
forwards them to X. X, knowing nothing about subnets, forwards the
packet back to A-GATE.

The problem is, gateway A and gateway X don't have a consistent view
of A-NET. X sees it as one network; A doesn't view A-NET as a
single network.

In my experience with subnetting, these homeless packets appear from
time to time, much like invalid broadcasts that should never appear
either. In fact, invalid broadcasts in particular get sucked up by
default routes.

>All this wouldn't have mattered much, since the squawker should advise
>all squawkees via ICMP Unmentionable messages to do otherwise.

2) ICMP Unmentionable messages have little effect on some UDP oriented
protocols in 4.3. Errors can only be returned to sockets that have a
foreign address bound into them via connect(). Typically, UDP based
query/response servers send response messages by including the
destination address as an argument to the write call. When ICMP
errors get returned to the server, the OS has no way of matching the
error message with the socket it goes to.

Named is my favorite scapegoat in this regard.
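
To make the distinction concrete, here is a minimal sketch (the address
and port are made up, and this is not named's code, just an illustration
of the socket behavior):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <stdio.h>

    main()
    {
        int s;
        struct sockaddr_in dst;
        static char query[] = "are you there?";
        char reply[512];

        s = socket(AF_INET, SOCK_DGRAM, 0);
        bzero((char *)&dst, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_addr.s_addr = inet_addr("10.0.0.96");   /* made-up address */
        dst.sin_port = htons(12345);                    /* made-up port */

        /*
         * With sendto() on an unconnected socket, an ICMP unreachable
         * coming back cannot be matched to this socket; the error is
         * dropped and a recv() here would simply hang.  With connect()
         * the kernel can hand the error back on a later call.
         */
        if (connect(s, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
            perror("connect");
            exit(1);
        }
        if (send(s, query, sizeof(query), 0) < 0)
            perror("send");
        if (recv(s, reply, sizeof(reply), 0) < 0)
            perror("recv");             /* ICMP error can surface here */
        exit(0);
    }

A server answering queries with sendto() to whatever address came in on
recvfrom() never gets that second chance, which is exactly the position
named is in.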

3) The distributed version of 4.3 does not pass ICMP Unreachable
errors back to the TCP layer. It does process src quench and redirects
though. Machines relying heavily on default routes almost always have
connections time out (after several retransmissions), as opposed to
having the connection attempt abort quickly with a "host unreachable"
error.  The fix for this is less than a dozen lines of code, but I have
not seen Berkeley send out a fix for it (though in fairness, I may
have missed it).

This is all rather disturbing considering the heavy dependence placed
on default routing, and the recognized need for network applications
to better respond to network feedback.

Thomas

-----------[000185][next][prev][last][first]----------------------------------------------------
Date:      Thu, 22-Oct-87 16:27:28 EDT
From:      ddp+@ANDREW.CMU.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Wellfleet


Does anyone have any experience with the recently announced Wellfleet series
of "Communications Servers"?  Their Communications Server is a combination
LAN Bridge and multi-protocol router, with lots of hardware support.  They
claim to support both IP and DECnet.  Did they do their own IP (I imagine
they probably did)?  What routing protocols (if any) does it support?  What
kind of throughput does it "really" get?

Drew

-----------[000186][next][prev][last][first]----------------------------------------------------
Date:      Thu, 22-Oct-87 20:56:15 EST
From:      edward@comp.vuw.ac.nz (Ed Wilkinson)
To:        comp.protocols.tcp-ip,comp.unix.questions
Subject:   how do I setup a unique address for our system?

We're trying to set up some sort of tcp-based network and would
like to read some documents detailing how to choose a tcp address,
i.e. the 4 number string.  Could someone please point me towards
an article/paper which describes how to choose such a network
address? We've got a Vax 750 running (soon) Ultrix 2.0 & are trying
to  connect  a VMS machine using their tcp package. Later we hope
to add some Suns to this network.

Any references or info would be greatly  appreciated.  Thanks  in
advance.

-- 
Ed Wilkinson     ...!uunet!vuwcomp!edward   or   edward@comp.vuw.ac.nz

-----------[000187][next][prev][last][first]----------------------------------------------------
Date:      Thu, 22-Oct-87 23:23:01 EST
From:      morgan@navajo (Bob Morgan)
To:        comp.protocols.tcp-ip,comp.dcom.lans
Subject:   A handful of 802.X questions

In the wake of the IP-over-802 draft RFC, I'd like to ask a few
questions:

1.  Is any network anywhere using 802.2 over 802.3?  Does anyone
running a large Inter-Ethernet have any plans to move to 802.2 over
802.3?

2.  Is there any published justification for the use of source routing
in IBM's bridging of token-rings, given its apparent violation of the
network/logical-link/MAC layering principles?  Can anyone anywhere
defend it?

3.  Is anyone anywhere doing MAC-level interconnection of token-ring
and Ethernet/802.3 networks?

4.  Has the 802.1 committee published anything about what it is up to?

5.  Is SNAP an official part of 802.2?  Is there anything written down
about it anywhere?  It's not in my copy of 802.2.

I've got lots more, but those will do for now.

In wonderment,

- RL "Bob" Morgan
  Networking Systems
  Stanford University
  morgan@jessica.stanford.edu

-----------[000188][next][prev][last][first]----------------------------------------------------
Date:      Fri, 23-Oct-87 00:29:16 EST
From:      JBVB@AI.AI.MIT.EDU ("James B. VanBokkelen")
To:        comp.protocols.tcp-ip
Subject:   Re: ... Transmission of IP Datagrams over IEEE 802 ...

Well, one of the problems here is that IBM has defined 802.5, more or
less, and they seem to have felt they wanted to sell: 1) MAC-level
bridges that don't learn connectivity like Ethernet bridges do, instead
requiring that packets be source-routed, and 2) MAC-level bridges
which can only hack datagrams about 1/10th the size of the MTU of the
connected LANs.

Given that they are still successfully selling EBCDIC architectures, I'm
not sure what we can do besides document what they have legislated.  So,
ARP and related MAC-level routing schemes take a nasty hit, and useful
MAC-level transparency amongst 802.x LANs becomes much harder to achieve.

Anyway, at the 802.2 level, the encapsulation is indeed identical.  As
Greg has pointed out, 802.2 is LLC, not MAC.

jbvb

-----------[000189][next][prev][last][first]----------------------------------------------------
Date:      Fri, 23-Oct-87 01:19:28 EST
From:      YAKOV@IBM.COM (Jacob Rekhter)
To:        comp.protocols.tcp-ip
Subject:   802.5 MTU

The reason why an MTU of 508 bytes is mentioned in the RFC is
that IN THEORY (!!!!) the MAC layer bridge architecture
allows a bridge to restrict the frame size to 552 bytes (a 508-byte IP MTU).
However, I doubt that in real life we are going to see
this phenomenon. It was mentioned in the RFC JUST FOR COMPLETENESS.
In real life IP works over 802.5 with MAC layer bridges
JUST FINE.
Jacob Rekhter (Yakov@IBM.COM)

-----------[000190][next][prev][last][first]----------------------------------------------------
Date:      Fri, 23-Oct-87 01:43:18 EST
From:      henry@utzoo.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations

> ... Fourth, depending on your CPU architecture, there
> may be ideal unrolling constants which would keep the unrolled loop
> inside an instruction prefetch buffer; complete unrolling would actually
> be a degredation.

In particular, almost any CPU with a cache -- which means most anything above
the PC level nowadays -- will have an optimum degree of unrolling for loops
that iterate a given number of times.  It's not just a question of whether
the loop will fit; eventually the extra main-memory fetches needed to get
a larger loop into the cache wipe out the gains from reduced loop-control
overhead.  For straightforward caches (with a loop that will *fit* in the
cache!), elapsed time versus degree of unrolling is a nice smooth curve with
a quite marked minimum.  Based on the look I took at this, if the ratio of
your cache speed to memory speed isn't striking, and your loop control is
not grossly costly (due to e.g. pipeline breaks), the minimum has a good
chance of falling at a fairly modest unrolling factor, maybe 8 or 16.
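
For concreteness, a modest 8-way unrolling of a word-summing loop (names
invented; this is just the shape of the thing, not anyone's checksum code)
looks like:

    unsigned long
    sum8(p, cnt)
    register unsigned short *p;
    register int cnt;
    {
        register unsigned long sum = 0;

        while (cnt >= 8) {              /* the unrolled body */
            sum += p[0]; sum += p[1]; sum += p[2]; sum += p[3];
            sum += p[4]; sum += p[5]; sum += p[6]; sum += p[7];
            p += 8;
            cnt -= 8;
        }
        while (cnt-- > 0)               /* leftover words */
            sum += *p++;
        return (sum);
    }

The unrolled body is what you tune to your cache; the trailing loop just
mops up the leftovers.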

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

-----------[000191][next][prev][last][first]----------------------------------------------------
Date:      Fri, 23-Oct-87 09:25:14 EST
From:      jpt@UMN-REI-UC.ARPA (Joseph P. Thomas)
To:        comp.protocols.tcp-ip
Subject:   Re:  PC-NFS NET PCNET command


	My PC-NFS V2.0 manual says that 'net pcnet [ on | off ]' allows
PC-NFS to run simultaneously with the IBM PC Network Program. Which IBM
PC Network Program is anybody's guess - I'm somewhat ignorant of what IBM
offers...

Joseph Thomas, Communications Staff, Minnesota SuperComputer Center
jpt@uc.msc.umn.edu ( jpt@umn-rei-uc.arpa )

-----------[000192][next][prev][last][first]----------------------------------------------------
Date:      Fri, 23-Oct-87 09:51:28 EST
From:      little@MACOM4.ARPA (Mike Little)
To:        comp.protocols.tcp-ip
Subject:   Null TCP/SLIP msg and list-servers

I don't know what's wrong, but several people have been telling me they've
received multiple copies of my null message.  The message was sent because
the launch button is right next to the kill button (a case in point for
user interface designers and missile operators).  Only one copy was sent
from here as far as I can tell (I only got one copy back).  An interesting
note about the TCP/IP list is that it takes many hours to service a single
message, the null message having about six hours turnaround to myself.  I
have noticed delivery in excess of ten hours from origination.  Made me 
wonder if the list is serviced singly, top to bottom.  Would a divide-and-
conquer technique work here (multiple mail-list machines working on divided
parts of the whole list) or is this really a CPU bound situation?
					-Mike
P.S. I do appreciate the few people who have sent me replies giving
	me a heads up on the empty message - thanks.

-----------[000193][next][prev][last][first]----------------------------------------------------
Date:      Fri, 23-Oct-87 13:59:08 EST
From:      minshall@OPAL.BERKELEY.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: ... Transmission of IP Datagrams over IEEE 802 ...

John,
	Yes, I see what you mean.  It does appear (from the IBM
documentation) that the LLC-layer hides source routing in 
type 2 service.  This is, I suppose, one of the big reasons for
complaints about source routing.

	You mention that if one of the bridges goes down, everything
goes crazy.  However, this is the case with MAC-level bridges (eg: TransLan),
and with "proxy ARP routing".  So, this doesn't put source routing
in any worse company than those (although none of those are what I
would pick for a network - mainly for this reason).

	Now, I have previously asked the question "what's the politics
of source routing in the IEEE 802 committees?".  I then reported on
a conversation with an IBMer on 802.5.  From that conversation, it
seemed that source routing was, probably, only a matter of time (though
the IBMer, to his credit, was very cautious about anyone's ability to
predict the outcome of any given standardization process).

	I have talked to one final 802 person.  This person is
Mick Seaman, of DEC, who is on the 802.1 committee.  He prefaced
his comments, and reiterated throughout, that his comments were
HIS PERSONAL comments; that they DID NOT necessarily represent DEC
or IEEE 802 views.

	Mick Seaman's introductory remarks were that source routing
was something IBM was interested in in order to support existing IBM
products.  However, he said, there was a general 802 interest in
supporting multiple paths.  He expressed a bit of worry, though,
that some schemes might conflict with (to-be-developed) ISO schemes.
He also expressed his view that source routing MIGHT not make it
through the standardization effort (though he attributed this view to
his lack of cynicism about standardization processes).  He felt much
surer that ISO would be very unlikely to standardize source routing,
even in an 802.5 environment.

	He said there were some improvements to source routing that
could be made, but wasn't sure that the 802.5 committee was consistent
in looking at the overall picture (as opposed to spending time on
bits and other low-level issues).

	When I mentioned there was some interest within the internet
to standardize on 802 encapsulation, including source routing for
802.5 networks, he said he was worried that the source routing (802.5)
document was not yet technically stable (from the specification point
of view).

	End of report.

Greg Minshall

ps - Someone, in a private note, mentioned that maybe 802.1 (the
internetworking portion) had been quashed by ISO.  From what Mick
Seaman said (though I didn't ask him about this), it appears that
the original 802.1 charter had been "to solve all the world's networking
problems, in the context of 802", but that the current charter seems
to be "solve those problems in networking which are peculiar to LAN's,
and which no one else [ISO, I suppose] is actively working on".  He
seemed quite happy with the current charter, and they are apparently
quite close to releasing a draft standard.

-----------[000194][next][prev][last][first]----------------------------------------------------
Date:      Fri, 23 Oct 87 15:54 EST
From:      REILLY@wharton.upenn.edu
To:        tcp-ip@sri-nic.arpa
Subject:   NFS

Where can a good description for and list of implementations of
NFS be found?

Thanks
-----------[000195][next][prev][last][first]----------------------------------------------------
Date:      Fri, 23-Oct-87 16:54:00 EST
From:      REILLY@WHARTON.UPENN.EDU
To:        comp.protocols.tcp-ip
Subject:   NFS


Where can a good description for and list of implementations of
NFS be found?

Thanks

-----------[000196][next][prev][last][first]----------------------------------------------------
Date:      Fri, 23-Oct-87 17:28:51 EST
From:      djh@cci632.UUCP (Daniel J. Hazekamp)
To:        comp.unix.wizards,comp.dcom.lans,comp.protocols.tcp-ip
Subject:   Ethernet Bridge

We are currently looking into the use of an Ethernet Bridge to link
two seperate Ethernets. Any vendor/product information, as well as
personal recommendations are welcome.

Dan Hazekamp
CCI
Rochester, NY
rochester!cci632!djh

-----------[000197][next][prev][last][first]----------------------------------------------------
Date:      Fri, 23-Oct-87 23:48:32 EST
From:      earle@jplopto.uucp (Greg Earle)
To:        comp.sys.ibm.pc,comp.protocols.tcp-ip
Subject:   MIT/CMU PC/IP under Microsoft C V. 4.0?

I just tried to bring up PC/IP using Microsoft C V. 4.0 (as opposed to 3.0,
the original development environment).  The PC/IP code contains some things
which are meant to emulate certain UNIX standard C library behavior; for
example in UNIX calling `exit()' really calls some cleanup functions, and
then `_exit()' for the crash and burn.  PC/IP tries to emulate this by
redefining a UNIXish exit() function, and adding a function `exit_hook(func)'
to add a cleanup function to the exit() list.  MSC V 4.0 barfs on this, because
it causes exit() to be multiply defined when LINK'ed.  I tried to get around
this by attempting to manually call the exit handlers installed by exit_hook
before any exit()'s that come after an exit_hook, and removing the PC/IP
exit() redefinition.  I don't think this is working correctly, because the
net result (using a MICOM/Interlan NI5010 board) is that the very first PC/IP
program run (after bootup) works fine, but after that, any succeeding programs
hang the system, waiting for the disk or ramdisk (i.e., disk light goes on,
no diagnostics show on console if enabled, disk light stays on forever, PC
clone hangs and has to be power-cycled).  I take this to mean that the PC/IP
program or library is not resetting an Interrupt somehow, thus locking out
succeeding invocations of other PC/IP programs - presumably due to my Operator
Error :-( .  Also, a printf.c file calls a function with an argument called
`signed', which now seems to be a reserved keyword in MSC 4.0. [NB: This is
using Suntronics PC/AT clones; Phoenix or ALM BIOS]

Since MSC 4.0 supports exit() doing cleanup with handlers using onexit(),
I'd like to find out if there is a newer version available for FTP
that already takes advantage of this fact (and other V3 - V4 enhancements).
Sorry to bother the TCP/IP list with this, but I thought I'd find a few PC/IP
users on it ...
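
In case it helps anyone else attempting the same port, the shape I am
aiming for is roughly the following (handler names invented, and the
exact onexit() prototype should be checked against the 4.0 runtime
reference):

    #include <stdlib.h>

    static int
    close_netboard()
    {
        /* mask the board's interrupt, reset the controller, free
           buffers: the cleanup PC/IP currently does via exit_hook() */
        return 0;
    }

    net_init()
    {
        /* ... probe and initialize the interface ... */
        onexit(close_netboard);   /* instead of exit_hook(close_netboard) */
    }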

	Greg Earle		earle@jplopto.JPL.NASA.GOV
	S(*CENSORED*)t		earle%jplopto@jpl-elroy.ARPA	[aka:]
	Rockwell International	earle%jplopto@elroy.JPL.NASA.GOV
	Seal Beach, CA		...!cit-vax!elroy!smeagol!jplopto!earle

-----------[000198][next][prev][last][first]----------------------------------------------------
Date:      Sat, 24-Oct-87 01:21:58 EST
From:      jim@aob.aob.mn.org (Jim Anderson)
To:        comp.dcom.lans,comp.sys.ibm.pc,comp.protocols.tcp-ip
Subject:   PS/2 Model 80 network <19K memory

I found out tonight that I need to specify a network to connect several
(2-20) IBM PS/2 Model 80 machines together.  The computers will be running
MS-DOS 3.3, along with a large CAD/CAM program.  The upshot of this
configuration is that there is less than 19K left in memory for a network
driver.  I am told that Novell needs 39K.  At this time, price is not an
over-riding concern - getting something (anything reasonable speed) that
will work is needed.  Currently, there is no existing network to link to,
although a XENIX/UNIX fileserver running TCP/IP could be made available.
This fileserver would be an Altos series 2000 80386 based machine running
XENIX V.  As far as I know, it would have an Excelan board in the Altos.
I have not had any experience with TCP/IP, nor have I had any experience
with PC networks.

Now for the crunch.  The people need to know by 9:30 Monday morning if it
can be done, and if so, approximate dollars so it can be submitted in a
budget.  Because of this, I need advice/information ASAP!  Thank you in
advance for any help or pointers you may be able to give me.
-- 
Jim Anderson			(612) 636-2869
Anderson O'Brien, Inc		New mail:jim@aob.mn.org
2575 N. Fairview Ave.		Old mail:{rutgers,gatech,amdahl}!meccts!aob!jim
St. Paul, MN  55113		"Fireball... Let me see... How How H@roc@roc@

-----------[000199][next][prev][last][first]----------------------------------------------------
Date:      Sat, 24-Oct-87 15:21:17 EST
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   On broadcasts, congestion and gong

Folks,

It so happens that two of the five NSFNET gateways to ARPANET are broken just
now (PSNs seem to be down), so the remaining three have to carry the load. At
one of them (linkabit-gw) the traffic is so heavy that source-quench messages
are being sent regularly (see my recent note on how the fuzzballs now do
selective-preemption and source-quench generation). Looking through the logs I
see the single most persistent abuser is host 128.220.2.1, which is sending
gobs of packets to 128.220.0.0, evidently the broadcast address on net
128.220. I did not capture the data portion of these packets, but I strongly
suspect they are Unix rwho broadcasts that never, never, never should be allowed outside
of that net. Now, not only are broadcasts considered rude to the max outside
the local net, but in this case those broadcasts are severely impacting service
for many other users. At one point 35 packets were queued with the source and
destination addresses shown. If it were not for their selective-preemption
policy, the fuzzballs would be choked with these things and nobody would be
getting good service at all.

While host 128.220.2.1 might be considered borderline bogus, the 128.220
gateway must be considered clearly over the edge. Why did it allow any traffic
for local-net destinations outside that net at all, much less allow
128.220.0.0 bogons to escape to ARPANET? I suspect the problem is a mixture of
subnetted 4.2 and 4.3 Unix systems with different broadcast addresses. In any
case, RFC-1009 clearly advises the all-zeros and all-ones variant of local
addresses to be drop-kicked at the gateway. Will the operator of that gateway
reveal the vendor so we can send the gongfermers after them? Better yet,
compile a list of vendors known to be noncompliant with RFC-1009 and post at
the upcoming INENG meeting and TCP/IP conference.
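
The check being asked for is tiny.  As an illustration only (invented
names, nobody's shipping code), something like the following in the
gateway's forwarding path is the sort of thing RFC-1009 is asking for:

    /*
     * Refuse to forward datagrams addressed to the all-zeros or
     * all-ones host part of one of the gateway's attached (sub)nets.
     */
    int
    is_net_broadcast(dst, net, mask)
    unsigned long dst, net, mask;       /* all in host byte order */
    {
        if ((dst & mask) != net)
            return (0);                 /* not one of our attached nets */
        if ((dst & ~mask) == 0)         /* all-zeros host part */
            return (1);
        if ((dst & ~mask) == ~mask)     /* all-ones host part */
            return (1);
        return (0);
    }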

RFC-1009 expresses a significant amount of sweat, experience and gong and was
not intended to be taken lightly. Has anybody incorporated it as part of an
RFP? Has any vendor claimed compliance? If the lessons therein are not taken
seriously, we will continue to see problems as above and an inexorable decline
in service quality throughout the Internet community.

Dave

-----------[000200][next][prev][last][first]----------------------------------------------------
Date:      Sat, 24-Oct-87 21:54:01 EST
From:      hedrick@TOPAZ.RUTGERS.EDU (Charles Hedrick)
To:        comp.protocols.tcp-ip
Subject:   problems with IMP connection hanging

For the last several weeks (since the upgrade in the software
in the Arpanet PSN?) we have been having trouble with our IMP
connection hanging.  This problem has also been observed by a
site in California.  They have been told by NOC that they are
the only people with such problems, so they should assume it is
bad hardware.  In order to help prevent us and NOC from going
off on a wild goose-chase of this nature, I'd like to hear 
whether any other people are having similar problems.  The
symptom we see is that no traffic is flowing.  I confess that
in the past I have not been watching carefully enough to observe
the states of all of the lights, but I believe everything is
normal except that we are simply not seeing ready for next
bit from the IMP.  Our configuration is 
  ECU with local connection to IMP (Arpanet IMP 89)
  ECU at our end connected to Cisco router, using ACC Multibus 1822 card
The configuration has been troublefree for quite some time before
this set of problems began.  We have been able to reset it at times
by pushing the reset button on the ECU at our end, or by issuing
a software reset to the ACC card (simulating more or less what happens
when the machine powers up).  Once we had to have NOC intervene.
They ran a loopback test on the interface, and when they went back
into normal mode, things were fine.  I believe the other site that
is seeing these problems is also using a pair of ECU's, and that
the general symptoms are similar.  Is anyone else seeing a related
problem?

-----------[000201][next][prev][last][first]----------------------------------------------------
Date:      Sun, 25-Oct-87 01:36:07 EST
From:      rob@PRESTO.IG.COM (Rob Liebschutz)
To:        comp.protocols.tcp-ip
Subject:   Re: problems with IMP connection hanging

We are also having a problem with our PSN connection that began
immediately after the PSN upgrade.

Configuration: LSI 11/23 core gateway with 1822 interface and ACC
	Robustness card (for booting via Arpanet from the NOC)
	Connected to PSN 32 with ECUs.

The symptoms are that the ECU RFNB light goes out on the local ECU
(gateway end) and the gateway says that the interface is down.  If the
gateway crashes (which it has done several times since the upgrade),
it can't be loaded.  The last time this happened, we were able to get
things working again, when the NOC looped back the interface,
downloaded the gateway, and then looped back the interface again.  The
second loopback is necessary because the interface goes down right
after the gateway comes up.  Resetting the local ECU does not help.

-----------[000202][next][prev][last][first]----------------------------------------------------
Date:      Sun, 25-Oct-87 01:56:44 EST
From:      stu@splut.UUCP (Stu Cobb)
To:        comp.protocols.tcp-ip
Subject:   Re: computer protocols --TCP/IP and SLIP


I'll second that.  We have a port of Sys V (by Microport) running on an AT
clone.  It wants very much to run TCP and SLIP, as it serves a bunch of
amateur packet radio hackers (who are getting into TCP/IP).  Anyone know
where to go for info?

I'll write the dern thing if I have to (grumble, grumble).  But it would be
nice not to.

Stu

-----------[000203][next][prev][last][first]----------------------------------------------------
Date:      Sun, 25-Oct-87 12:41:28 EST
From:      hartvig@diku.UUCP (Hartvig Ekner)
To:        comp.dcom.lans,comp.protocols.tcp-ip
Subject:   TCP/IP max. datagram size





I have been looking through the MIL-STDs for TCP and
IP (1778 and 1777, I think), and nowhere is there
an indication of how large the IP datagrams you can
expect coming over an 802.3 Ethernet might be. Does anybody
know if there is a limit (other than the 1500
byte limit on Ethernet packets)? The document
says that you MUST be able to
handle 576 byte datagrams without fragmenting
them, but that's all it says...

Similarly, is there a limit on TCP 'datagrams'
(I can't remember the correct name), or must a
TCP/IP implementation be able to handle virtually
any size?

Thanks,

Hartvig Ekner               ...mcvax!diku!hartvig
University Of Copenhagen

-----------[000204][next][prev][last][first]----------------------------------------------------
Date:      Sun, 25-Oct-87 12:59:01 EST
From:      retrac@titan.rice.edu (John Carter)
To:        comp.protocols.tcp-ip
Subject:   What's the "best" TCP/IP throughput?


    I'm a fairly new reader of this newsgroup, so I apologize if this has
already been discussed.  I would like to know what the best performance
figures are for large memory to memory transfers using TCP-IP.  More
specifically, what are the fastest reported average transfer times for
transferring 10 Mbytes over a 10 Mbit/sec ethernet?  (or) What is the
highest reported throughput of DATA across a 10 Mbit/sec ethernet using
TCP-IP?

    Def.:  Memory to memory above means, the client generates the data
           out of thin air and the server puts them all in one buffer
	   (the "best case" situation).  I'm interested in raw transfer
	   rates and the cost of TCP-IP overhead on performance.

    I have seen performance figures for van Jacobson's modifications to
Berkeley 4.3 TCP-IP which gave measurements of 23.3 secs for 10 MB over
a 10 Mbit/sec ethernet (effective throughput of 3.4 Mbit/sec).  Are there
any better?

John Carter
Dept. of Computer Science, Rice University

=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
*   UUCP: {Backbone or Internet site}!rice!retrac       oo                   =
=   ARPA:  retrac@rice.edu                              <                    *
*   CSNET: retrac@rice.csnet                            U  - Bleh.           =
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*

-----------[000205][next][prev][last][first]----------------------------------------------------
Date:      Sun, 25-Oct-87 15:12:53 EST
From:      melohn@SUN.COM (Bill Melohn)
To:        comp.protocols.tcp-ip
Subject:   Re: A handful of 802.X questions

>1.  Is any network anywhere using 802.2 over 802.3?  Does anyone
>running a large Inter-Ethernet have any plans to move to 802.2 over
>802.3?

You can run TCP/IP using SNAP encapsulation over both 802.3 and 802.4
using the Sunlink OSI software. Even though the 802.3 and Ethernet
networks share the same hardware interface, they have to be separate
IP networks using the Sun as a gateway. This is of limited utility
between hosts that understand Ethernet today; however in the future it
is possible that such a gateway capability will be necessary for
talking between implementations which only do Ethernet and those who
only do 802.2/802.3, or 802.2 LLC bridges between different 802.X
media.

-----------[000206][next][prev][last][first]----------------------------------------------------
Date:      Sun, 25 Oct 87 20:32:41 -0500
From:      Mike Brescia <brescia@PARK-STREET.BBN.COM>
To:        Mills@LOUIE.UDEL.EDU
Cc:        tcp-ip@SRI-NIC.ARPA, nsfnet@SH.CS.NET, brescia@PARK-STREET.BBN.COM
Subject:   Re: On broadcasts, congestion and gong

Dave, I see 'jhu' net, 128.220 reachable via macom1 (10.0.0.111) which tells
me that it's somewhere in the NSF topology.  Is there any place where further
live routing info can be gotten, so as to trace the bogus gateway, or do we
wait for them to 'fess up.

Re: broadcasts, I frequently see them being forwarded by a 'host' to a
'gateway' "because it is not for me".  This is with no subnetting, either.
The particular hosts sometimes have the gateway switch turned on even when
they have only one interface.

    Mike
-----------[000207][next][prev][last][first]----------------------------------------------------
Date:      Sun, 25-Oct-87 17:47:41 EST
From:      LYNCH@A.ISI.EDU (Dan Lynch)
To:        comp.protocols.tcp-ip
Subject:   Re: PS/2 Model 80 network <19K memory


Jim,  Your only hope is to get a product that has the majority of the
protocols implemented on the board and not in your "host".  A number
of vendors make such beasts for TCP/IP.  CMC, Excelan, MICOM-Interlan,
Ungermann-Bass all make one of these "smart boards".  They price out
at under a thousand dollars.  I expect all of their host side drivers
to be under the magic 19K, but I'm not absolutely sure.

Good luck,
Dan
-------

-----------[000208][next][prev][last][first]----------------------------------------------------
Date:      Sun, 25-Oct-87 18:37:08 EST
From:      PAP4@AI.AI.MIT.EDU ("Philip A. Prindeville")
To:        comp.protocols.tcp-ip
Subject:   Re: TCP performance limitations (unrolling loops, etc)

Why is this starting to look like `net.arch' to me?  Could it be
the academic throttling of an overly esoteric subject?

I think we've all made our basic points -- let's leave something
as an exercise for the (already bored) reader.

Heading for the fall-out shelters...

-Philip

-----------[000209][next][prev][last][first]----------------------------------------------------
Date:      Sun, 25-Oct-87 19:34:00 EST
From:      Kastenholz@MIT-MULTICS.ARPA
To:        comp.protocols.tcp-ip
Subject:   sign me to list

Please add me to the tcp/ip discussion list.  Thank you.

Apologies to all on the list who will see this appear - I forgot who to
send the request to without getting a broadcast..  Frank Kastenholz

-----------[000210][next][prev][last][first]----------------------------------------------------
Date:      Sun, 25-Oct-87 20:51:22 EST
From:      brescia@PARK-STREET.BBN.COM (Mike Brescia)
To:        comp.protocols.tcp-ip
Subject:   Re: On broadcasts, congestion and gong


Dave, I see 'jhu' net, 128.220 reachable via macom1 (10.0.0.111) which tells
me that it's somewhere in the NSF topology.  Is there any place where further
live routing info can be gotten, so as to trace the bogus gateway, or do we
wait for them to 'fess up.

Re: broadcasts, I frequently see them being forwarded by a 'host' to a
'gateway' "because it is not for me".  This is with no subnetting, either.
The particular hosts sometimes have the gateway switch turned on even when
they have only one interface.

    Mike

-----------[000211][next][prev][last][first]----------------------------------------------------
Date:      Sun, 25-Oct-87 21:10:28 EST
From:      geoff@eagle_snax.UUCP ( R.H. coast near the top)
To:        comp.protocols.tcp-ip
Subject:   Re: PC-NFS NET PCNET command

In article <8710231325.AA07476@uc.msc.umn.edu>, jpt@UMN-REI-UC.ARPA (Joseph P. Thomas) writes:
> 
> 	My PC-NFS V2.0 manual says that 'net pcnet [ on | off ]' allows
> PN-NFS to run simultaneously with the IBM PC Network Program.

It's actually a small performance issue. We tried installing PC-NFS in
a PC and then running the IBM PC Network Program (talking to a Sytek
card) on top of it. The PC Network program thinks that the PC-NFS
drives are local PC disks, which allows it to publish them to other
nodes on the PC Network and gateway through to the NFS world. It turned out
that the PC Network program did certain directory searching operations
in a rather unusual way that caused us to flush our own directory search
cache rather faster than usual, slowing down directory searches on the PC 
Network clients. We added some code to our own directory search logic to
cater for this; since it was sub-optimal for other applications we added
the NET PCNET ON|OFF feature to toggle its use.

-- 
Geoff Arnold, Sun Microsystems       | "Picture a bright blue ball,
East Coast Division (home of PC-NFS) |  Spinning, spinning free;
UUCP: {ihnp4,decwrl,...}!sun!garnold |  Dizzy with possibility...
ARPA: garnold@sun.com                |  Ashes, ashes, all fall down..."

-----------[000212][next][prev][last][first]----------------------------------------------------
Date:      Sun, 25-Oct-87 23:23:05 EST
From:      hedrick@TOPAZ.RUTGERS.EDU (Charles Hedrick)
To:        comp.protocols.tcp-ip
Subject:   Re: A handful of 802.X questions

Let me point out to any vendors who may be on this list that any
vendor who produces a TCP/IP implementation for Ethernet that
understands only the SNAP encapsulation is going to have a lot of
irate customers.  We expect TCP/IP implementations for Ethernet
hardware to interoperate with existing implementations.  If they
want to do something else as well, that's fine with me.  But if
it won't talk to 4.3, we will consider that it is not TCP/IP.

-----------[000213][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 00:30:47 EST
From:      asjoshi@phoenix.Princeton.EDU (Amit S. Joshi)
To:        comp.protocols.tcp-ip
Subject:   TCP/IP for IBM AT (in Turbo C)

Hi,

I have  a question and some (hopefully) useful info.

First the info: I have managed to port Phil Karn's KA9Q TCP/IP package
to Turbo C. The advantage?  There is now absolutely no assembly code. It
is now entirely in C (of course Turbo C). Anybody who is interested should
get in touch with me. NOTE: I have not tested all the functions that were
provided; I needed the code as a fast communications backbone for a simulator
being built here. I am pretty sure that the layers are working up to TCP,
and hopefully the others work too.

The question: Do the Packet Radio people have some policy on how their code
gets distributed before and after modifications?  If so, who should I get
in touch with, or would the relevant person please let me know what I am
supposed to do?

Thanks,




-- 
Amit Joshi         |  BITNET :  Q3696@PUCC.BITNET
                   |  USENET :  ...seismo!princeton!phoenix!asjoshi
"There's a pleasure in being mad ...which none but madmen know!" St.Dryden

-----------[000214][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 02:47:54 EST
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  On broadcasts, congestion and gong


Mike,

Routing tables do bristle on the NSFNET gateway critters. Scott Brim
at Cornell is the brushman. Hans-Werner Braun has many cans of paint
for the brush as well. Bruised linkabit-gw is not the only stalwart
for the flood - cu-arpa, psc-gw and the twin Maryland gatekeepers
also stem the flood. Should you ask them for routing details, I suspect
they would volunteer at least 76 trombones.

Hosts that gratuitously offer to function as gateways are probably the
single most dangerous and destructive animal that the Internet has
ever seen. I can't even begin to boggle on the waves of destruction
4.2/4.3 systems have caused in the name of that disease. This is not
to impugn 4.2/4.3 systems themselves, just the wisdom, or lack of it,
of gratuitous packet transport without care for the awful damage
that can occur.

Dave

-----------[000215][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 03:12:58 EST
From:      pavlov@hscfvax.UUCP (840033@G.Pavlov)
To:        comp.unix.wizards,comp.dcom.lans,comp.protocols.tcp-ip
Subject:   Re: Ethernet Bridge

In article <2063@cci632.UUCP>, djh@cci632.UUCP (Daniel J. Hazekamp) writes:
> We are currently looking into the use of an Ethernet Bridge to link
> two seperate Ethernets. Any vendor/product information, as well as
> personal recommendations are welcome.
> 
 Me too ! Please !

 greg pavlov, fstrf, amherst, ny.
 
 ....harvard!hscfvax!pavlov

-----------[000216][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 06:12:24 EST
From:      HANK@TAUNIVM.BITNET (Hank Nussbacher)
To:        comp.protocols.tcp-ip
Subject:   Ultrix

I am looking to hear user feedback on DEC's Ultrix system that allows the
connection of DNA and TCP/IP systems.  What doesn't run properly?  How
good is the translation?  Does it work better in one direction than the
other?  Does it handle terminal servers properly?  Has anyone ported
TN3270 to run on Ultrix, so that one can hop (via DECnet) over to the
Ultrix machine and then Telnet over to an IBM machine and get it running
3270 fullscreen?

Please reply directly to me and I will summarize to the list.

Thanks,
Hank

-----------[000217][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26 Oct 87 08:57:12 EST
From:      swb@devvax.tn.cornell.edu (Scott Brim)
To:        hedrick@topaz.rutgers.edu, tcp-ip@sri-nic.arpa
Subject:   Re: problems with IMP connection hanging
lkj
-----------[000218][next][prev][last][first]----------------------------------------------------
Date:      26 Oct 87 11:05:00 PDT
From:      "Dave Crocker" <dcrocker@twg.arpa>
To:        "tcp-ip" <tcp-ip@sri-nic.arpa>
Subject:   Sys V Streams TCP
The Wollongong Group did an implementation of TCP for Sys V.3 Streams,
for AT&T.  AT&T sells the sources to the operating system, including
the TCP.

Separately, we also offer source code.

In addition, we have TCP for the Microport streams, running on a 386/AT.
"Standard" SLIP is not in the current code, but I have already added it
to our list of items.  (We have a proprietary version of SLIP, but it
makes no claim to superiority, just independent development.)

Overall performance of the code seems quite good.  (However, since it is
not yet a released product, I don't think that it is reasonable to cite
numbers.)

Dave Crocker
VP, Software Engineering
The Wollongong Group

415-962-7100
------
-----------[000219][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 10:59:11 EST
From:      AD@BROWNVM.BITNET (Arif Diwan)
To:        comp.protocols.tcp-ip
Subject:   Re:  PC-NFS NET PCNET command

The IBM product is actually named "The IBM PC Network Program". The part # of
the PC Network Technical Ref. Manual is 6322916. I do not have the part # for
the actual product, but you can obtain it from your local IBM rep.

-----------[000220][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26 Oct 87 10:59:11 EST
From:      Arif Diwan <AD%BROWNVM.BITNET@wiscvm.wisc.edu>
To:        "Joseph P. Thomas" <TCP-IP@SRI-NIC.ARPA>
Subject:   Re:  PC-NFS NET PCNET command
The IBM product is actually named "The IBM PC Network Program". The part # of
the PC Network Technical Ref. Manual is 6322916. I do not have the part # for
the actual product, but you can obtain it from your local IBM rep.
-----------[000221][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 12:57:13 EST
From:      btk@clem (Bryan Koch)
To:        comp.protocols.tcp-ip
Subject:   References, "best" TCP/IP performance

While TCP/IP performance is constrained by a number of factors (buffer, 
window, and transfer sizes; speed of checksum calculation; hardware
interrupt capacity; I/O bandwidth of supporting hardware), the situation
is not nearly as grim as that assumed by Guru Parulkar's recent request
for information.

We have seen TCP/IP data rates (memory to memory w/data validation by
receiver) of 11 Mbit/second over NSC HYPERchannel media.  
This was seen using 16 Kbyte blocks (MTU) and a 48 Kbyte TCP send/receive
space between two CRAY-2 systems.

Local (software) loopback rates for the same situation exceed 60 Mbit/sec
using a 32KB MTU and 16KB send/receive space on a Cray X-MP/48.

TCP and IP do have problems, but there is nothing there which constrains
performance to the 1-8 Mbit/sec range.  It would appear from our results
that TCP/IP may well be an appropriate protocol suite for use with FDDI
media, though performance over FDDI will be heavily dependent on the TCP/IP
implementation and parameters.

-Bryan Koch
 Cray Research, Inc.
 btk%hall.cray.com@umn-rei-uc.arpa

-----------[000222][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 13:05:00 EST
From:      dcrocker@TWG.ARPA ("Dave Crocker")
To:        comp.protocols.tcp-ip
Subject:   Sys V Streams TCP

The Wollongong Group did an implementation of TCP for Sys V.3 Streams,
for AT&T.  AT&T sells the sources to the operating system, including
the TCP.

Separately, we also offer source code.

In addition, we have TCP for the Microport streams, running on a 386/AT.
"Standard" SLIP is not in the current code, but I have already added it
to our list of items.  (We have a proprietary version of SLIP, but it
makes no claim to superiority, just independent development.)

Overall performance of the code seems quite good.  (However, since it is
not yet a released product, I don't think that it is reasonable to cite
numbers.)

Dave Crocker
VP, Software Engineering
The Wollongong Group

415-962-7100
------

-----------[000223][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 13:51:36 EST
From:      AI.CLIVE@MCC.COM (Clive Dawson)
To:        comp.protocols.tcp-ip
Subject:   Re: problems with IMP connection hanging

Our DEC-2065 host (MCC.COM) also uses ECU's to connect to the UTexas
PSN.  We've had a trouble-free connection for almost 3 years now.
We've had to deal with a hand-shaking problem, because apparently
the ECU will not raise the IMP ready signal until it sees host ready,
and the TOPS-20 operating system will not raise host ready until it
sees IMP ready.  So we run a small program called AN20-HACK at system
startup time, which uses a DATAO to force host ready on, and all
goes well from then on.  If the PSN happens to go down, however,
then TOPS-20 drops host ready, and AN20-HACK must be run again
when the PSN comes back up.  Otherwise the connection will stay
down for hours until somebody notices.

To solve this problem, I created a small batch job that runs
every 30 minutes to check the status of the net.  If "INFO ARPA"
shows that the net is down, then AN20-HACK is run to try and
get things started again.

All of the above is background info to get around to the real
point of this message.  Last Friday I noticed that the H/I RFNB
light on our ECU was out.  I ran AN20-HACK, which is the normal
way to cure this, but noticed that the light stayed out.  Then
I noticed that "INFO ARPA" reported that the network was UP!
(Perhaps it simply checks "host ready"?)  This is the first time
I had ever noticed that the system reported that the net was
up when in fact it was down.  I pushed RESET on the ECU, ran
AN20-HACK once more, and all was back to normal.

The problem has not recurred since.  I don't know if this is
in any way related to the problem you reported, but I suspect
it might be.  In any case, I hope this is of some help.

CLive
-------

-----------[000224][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 14:32:24 EST
From:      hedrick@TOPAZ.RUTGERS.EDU (Charles Hedrick)
To:        comp.protocols.tcp-ip
Subject:   Re: problems with IMP connection hanging

Yes, these are precisely the symptoms we are seeing with our
cisco gateways, down to which lights are on and what is
done to fix it.  If our experience is any indication,
you'll be seeing more of it.

-----------[000225][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 15:14:43 EST
From:      lekash@ORVILLE.NAS.NASA.GOV
To:        comp.protocols.tcp-ip
Subject:   Re: On broadcasts, congestion and gong

> Hosts that gratuitously offer to function as gateways are probably the
> single most dangerous and destructive animal that the Internet has
> ever seen.

yeah, and it's all due to a random decision to make the default
distribution under 4.2/4.3 have ipforwarding and sendredirects on.
No vendor bothers to change them.

Here, I have instilled in our operations people the need to always
turn these off by default on every machine that comes in; only those
machines that are gateways get them enabled.  The same goes for
tcpnodelack and other default options with high annoyance value.
A few slip through the cracks, but quick action by the TCP/IP
police stomps on such offenders.

						john

-----------[000226][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 16:53:23 EST
From:      merlin@hqda-ai.UUCP (David S. Hayes)
To:        comp.protocols.tcp-ip,comp.dcom.lans
Subject:   NCSA PC Telnet 2.0 - need bugs address


     Recently picked up v2.0 of the NCSA Telnet for PC.  I'm
having some troubles with it.  The manual says to submit a bug
report, but conveniently omits the address for submission.  Two
questions:

********************
     Does anyone know the bugs-report address for the National
Center for Supercomputing Applications?
********************

Second: Here's a list of my bugs so far.  Anyone else experienced
these?

BUG 1:  FTP client does not get small ASCII files correctly.
     If the file is 513-nnn bytes long ("nnn" appears to max out around 2k
     or so), the file will not be retrieved from the server correctly.  It
     will be truncated at 512 bytes.  Larger files do not seem
     to have this problem.

BUG 2:	The server mode doesn't.  It won't answer requests for
     ftp or rcp (yes, I have them turned on), and won't even
     ack a ping request.


Looks like NCSA still has a bit of work to do.  It'll be great
stuff, though, when they get it finished.  I had TCP from Gold
Hill, and this beats the blazes out of that.  (It's also free.)

-- 
David S. Hayes, The Merlin of Avalon	PhoneNet:  (202) 694-6900
UUCP:  *!uunet!cos!hqda-ai!merlin	ARPA:  ai01@hios-pent.arpa

-----------[000227][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 17:08:31 EST
From:      edward@comp.vuw.ac.nz (Ed Wilkinson)
To:        comp.protocols.tcp-ip,comp.unix.questions
Subject:   thanks for TCP addressing info

To all those who kindly replied to my request about tcp  address-
ing info, thanks very much. Your help was most useful.

-- 
Ed Wilkinson     ...!uunet!vuwcomp!edward   or   edward@comp.vuw.ac.nz

-----------[000228][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 17:48:27 EST
From:      AD@BROWNVM.BITNET (Arif Diwan)
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet Bridge

I am in the process of setting up an IBM RT/PC running AIX 2.1.2 as a "bridge"
between 2 ethernets. The setup I have is a local Ethernet with TCL multiport
transceivers as hubs. Each hub supports 8 hosts, but these hubs can be cascaded
together to increase this number. Also, you can interface a cascade with an
existing IEEE 802.3 spec coax hookup with an adapter.
We also have Applitek boxes which connect the lans to a campus-wide network via
a broadband backbone. These Applitek boxes filter out "directed" packets as
opposed to broadcast packets as well as allow a simple interface to a 10Mbit
per second broadband channel.
We also have Suns and MicroVaxes which are being used as bridges. We are also
planning to use a PS/2 Model 80 to bridge PCNET and Ethernet lans.
We have also bridged one AppleTalk network to Ethernet using a Kinetics box.
Possibilities are endless.

-----------[000229][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26 Oct 87 17:48:27 EST
From:      Arif Diwan <AD%BROWNVM.BITNET@wiscvm.wisc.edu>
To:        "Daniel J. Hazekamp" <TCP-IP@SRI-NIC.ARPA>
Subject:   Re: Ethernet Bridge
I am in the process of setting up an IBM RT/PC running AIX 2.1.2 as a "bridge"
between 2 ethernets. The setup I have is a local Ethernet with TCL multiport
transceivers as hubs. Each hub supports 8 hosts, but these hubs can be cascaded
together to increase this number. Also, you can interface a cascade with an
existing IEEE 802.3 spec coax hookup with an adapter.
We also have Applitek boxes which connect the lans to a campus-wide network via
a broadband backbone. These Applitek boxes filter out "directed" packets as
opposed to broadcast packets as well as allow a simple interface to a 10Mbit
per second broadband channel.
We also have Suns and MicroVaxes which are being used as bridges. We are also
planning to use a PS/2 Model 80 to bridge PCNET and Ethernet lans.
We have also bridged one AppleTalk network to Ethernet using a Kinetics box.
Possibilities are endless.
-----------[000230][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 18:09:28 EST
From:      AD@BROWNVM.BITNET (Arif Diwan)
To:        comp.protocols.tcp-ip
Subject:   Re: how do I setup a unique address for our system?

The RFCs are one of the best sources for this. RFC 992 discusses Internet Numbers.
There are other RFCs, such as RFC 950 dealing with subnetting, and RFC 882 and
RFC 883 dealing with Domain names, which may prove to be very useful.
Aside from that there is the DDN Protocol Handbook; call 1(800)235-3155 for
ordering info. This handbook contains background info as well as numerous RFCs.

-----------[000231][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26 Oct 87 18:09:28 EST
From:      Arif Diwan <AD%BROWNVM.BITNET@wiscvm.wisc.edu>
To:        Ed Wilkinson <TCP-IP@SRI-NIC.ARPA>
Subject:   Re: how do I setup a unique address for our system?
The RFCs are one of the best sources for this. RFC 992 discusses Internet Numbers.
There are other RFCs, such as RFC 950 dealing with subnetting, and RFC 882 and
RFC 883 dealing with Domain names, which may prove to be very useful.
Aside from that there is the DDN Protocol Handbook; call 1(800)235-3155 for
ordering info. This handbook contains background info as well as numerous RFCs.
-----------[000232][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 18:56:29 EST
From:      ccruss@deneb.ucdavis.edu (0059;0000000000;230;9999;98;)
To:        comp.protocols.tcp-ip
Subject:   Who has institution-wide whois available?


I was wondering who has institution (campus/company/...) whois available
on the network.  Here at UC Davis our database for whois includes all
faculty and staff on campus.  It is also possible to do a lookup of all
personnel in a particular department.  To use our whois, specify
UCDAVIS.EDU as the host.  For help, do a lookup on "help", and it will
return a help page.

If you also have a whois database available, send me a note indicating
the host to whom to send the request, who is included in the database,
and any special features that are available.  I will post a summary to
the net of any responses.

Russ
                                Russell Hobby               
                         Data Communications Manager 
     U. C. Davis                 
     Computing Services       BITNET:    RDHOBBY@UCDAVIS 
     Davis Ca 95616           UUCP:      ...!ucbvax!ucdavis!rdhobby 
     (916) 752-0236           INTERNET:  rdhobby@ucdavis.edu

-----------[000233][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26-Oct-87 20:48:49 EST
From:      sreerang@nunki.usc.edu (Sreerang Rajan)
To:        comp.protocols.tcp-ip
Subject:   TCP-IP for xenix to unix


	I would like to know if anyone has implemented TCP-IP for
communication between an Intel Hypercube running Xenix and the SUNs
running Unix. Any help regarding the source code for the
implementation will be greatly appreciated.
Thanks in advance
-------------------------------------------------------------------
Sreerang Rajan
EE-systems, USC, LA
Phone: (213)741-9781
arpa: sree%durga.usc.edu@oberon.usc.edu
 [or] sreerang@nunki.usc.edu
-------------------------------------------------------------------

-----------[000234][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26 Oct 87 13:00:33 IST
From:      Hank Nussbacher <HANK%TAUNIVM.BITNET@wiscvm.wisc.edu>
To:        tcp-ip@sri-nic.ARPA
Subject:   Ultrix
I am looking to hear user feedback on DEC's Ultrix system that allows the
connection of DNA and TCP/IP systems.  What doesn't run properly?  How
good is the translation?  Does it work better in one direction than the
other?  Does it handle terminal servers properly?  Has anyone ported
TN3270 to run on Ultrix, so that one can hop (via DECnet) over to the
Ultrix machine and then Telnet over to an IBM machine and get it running
3270 fullscreen?

Please reply directly to me and I will summarize to the list.

Thanks,
Hank
-----------[000235][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 03:12:23 EST
From:      ROMKEY@XX.LCS.MIT.EDU (John Romkey)
To:        comp.protocols.tcp-ip
Subject:   Re: MIT/CMU PC/IP under Microsoft C V. 4.0?

When we wrote PC/IP, we needed a way to make sure that certain functions
were called when the program exited. Most importantly, we had to shut down
the network interface and release its interrupt and the clock interrupt.
If you don't do that, the next time you run a program and a network
interrupt happens (like, a packet shows up for you), the PC will jump
off into hyperspace and you'll get a DOS-shattering KABOOM.

So we defined our own exit() routine; no problem, we had control over the
whole standard I/O library for the C (cross-)compiler we were using.

Then Drew Perkins at CMU ported it to MSC, and MSC 4.0 came along, and MSC 4.0
broke the exit_hook() stuff we defined. The right way to fix it is to
get rid of PC/IP's own exit() function and to change exit_hook() to just call
the MSC onexit() function. Don't worry, MSC 5.0 will probably break
this again, as they've renamed it to atexit() in 5.0.
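
For anyone rolling their own, a minimal sketch of the same idea using the
ANSI-style atexit() call (the cleanup routine names here are made up, not
PC/IP's own):

    #include <stdlib.h>

    /* Hypothetical cleanup routines; real ones would mask the card and
     * restore the interrupt vectors grabbed at startup. */
    static void net_shutdown(void)   { /* unhook the network interrupt */ }
    static void clock_shutdown(void) { /* unhook the clock interrupt */ }

    int main(void)
    {
        /* The C runtime calls registered handlers, in reverse order of
         * registration, on any normal exit.  Under MSC 4.0 the equivalent
         * call is onexit(). */
        atexit(clock_shutdown);
        atexit(net_shutdown);

        /* ... hook interrupts, initialize the interface, run ... */
        return 0;
    }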

MIT is no longer working on PC/IP. You can get a version of it for MSC 4.0
over the net from CMU; FTP Software also sells a commercial version that
works with MSC 4.0. I don't know the state of any of the other offshoots
of it regarding rolling your own programs.
				- john romkey
				 ftp software
-------

-----------[000236][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 04:25:05 EST
From:      chris@TRANTOR.UMD.EDU (Chris Torek)
To:        comp.protocols.tcp-ip
Subject:   Re: problems with IMP connection hanging

As of 2355 EST, our ECU link to MILNET PSN (nee IMP) #57 stopped
working.  When I went to check on it, the STOP light on our local
ECU was on; I reset it, but we have yet to reestablish communications.
I have no idea whether this is related to the PSN upgrades.

Chris

-----------[000237][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27 Oct 87 08:38:31 -0500
From:      Mike Brescia <brescia@PARK-STREET.BBN.COM>
To:        Chris Torek <chris@TRANTOR.UMD.EDU>
Cc:        tcp-ip@SRI-NIC.ARPA, brescia@PARK-STREET.BBN.COM
Subject:   Re: (milnet) problems with IMP connection hanging

The PSN upgrades to release 7 are happening on the ARPANET only.  The MILNET
is not yet being changed.

The report from Chris Torek is good info for seeing the baseline of problems
in the net, which is still running PSN release 6.

    Mike
-----------[000238][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 08:48:29 EST
From:      brescia@PARK-STREET.BBN.COM (Mike Brescia)
To:        comp.protocols.tcp-ip
Subject:   Re: (milnet) problems with IMP connection hanging


The PSN upgrades to release 7 are happening on the ARPANET only.  The MILNET
is not yet being changed.

The report from Chris Torek is good info for seeing the baseline of problems
in the net, which is still running PSN release 6.

    Mike

-----------[000239][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 09:02:42 EST
From:      whwb@cgch.UUCP (Hans W. Barz)
To:        comp.protocols.tcp-ip
Subject:   Security and Access Restrictions


I am currently planning a larger production TCP/IP network. I am looking for
a way within TCP/IP to restrict access to a part of the network.
That is, I want to allow only certain IP addresses to get access to a
machine or to pass through a gateway.

This is necessary since some services at the port level are open, i.e. a
clever programmer can connect to these ports and find out how he has to
behave to get something out of the services behind them. For Telnet and
FTP this is obviously solved, since you have to enter a user name plus a
password. But we are thinking of program-to-program communication between
ports, where the user should not always have to type user/password
combinations. What we could do is check the incoming IP address in every
server program behind a port. But is there no more general and elegant
approach incorporated in TCP/IP?
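
For what it's worth, a minimal sketch of the per-server check described
above, using the 4.x socket calls (the list of permitted addresses and the
helper name are hypothetical):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Hypothetical list of permitted client addresses. */
    static char *allowed[] = { "10.0.0.51", "26.0.0.73" };
    #define NALLOWED (sizeof allowed / sizeof allowed[0])

    /* Return 1 if the connected peer on 'fd' is in the list, else 0;
     * the caller should close the connection when 0 is returned. */
    int peer_permitted(int fd)
    {
        struct sockaddr_in peer;
        int len = sizeof peer;
        int i;

        if (getpeername(fd, (struct sockaddr *)&peer, &len) < 0)
            return 0;
        for (i = 0; i < NALLOWED; i++)
            if (peer.sin_addr.s_addr == inet_addr(allowed[i]))
                return 1;
        return 0;
    }

Of course this still has to be built into every server, which is exactly the
inelegance being complained about; a more general answer probably has to
live in the gateways, since TCP/IP itself defines no access-control
mechanism.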


 ####    #####  ####### #     #          H.W.Barz
 #    #    #    #     #  #   #           ST
 #         #    #         # #            WRZ
 #         #    #  ####    #             R-1032-5.58
 #         #    #     #    #             CIBA-GEIGY
 #    #    #    #     #    #             CH-4002 Basel
 #####    ###    #####     #             Tel.*41-61-374520
                                         Electronic-Mail: cernvax!cgcha!whwb

-----------[000240][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 09:06:22 EST
From:      JBVB@AI.AI.MIT.EDU ("James B. VanBokkelen")
To:        comp.protocols.tcp-ip
Subject:   Re: PS/2 Model 80 network <19K memory

Re: "smart boards".  If all you want is TCP, and you have a normal PC bus,
you can use one of the smart boards.  However, if you want features like
your classic "PC LAN", you need some sort of TSR I/O redirector, and that
blows up 19K right away.  Also, none of the "smart board" vendors have
anything for the MicroChannel bus yet, although I'm sure they're banging
away at it right now.  Neither of the two cards that are shipping right now
have an on-board processor (and neither is software-compatible with its
AT-bus ancestor, either...)

jbvb

-----------[000241][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 09:38:34 EST
From:      LYMAN_CHAPIN@ICE9.CEO.DG.COM
To:        comp.protocols.tcp-ip
Subject:   NetBIOS on ISO TP/IP

I have been reading RFC 1001, which describes a protocol to support
NetBIOS in a TCP/IP environment, and wonder whether anyone has
thought about a standard way to support NetBIOS in an ISO Transport/IP
environment.  DG's PC Integration product includes NetBIOS running
over ISO Transport class 4 and ISO IP in PCs and MV hosts to support
MS-NET and the Microsoft redirector;  I would be interested in hearing
from anyone who has had similar experience.

-----------------------------------------------------------------------------
Lyman Chapin                  lyman@ice9.ceo.dg.com
Data General Corp.            [lyman%ice9.ceo.dg.com@relay.cs.net]
(617) 870-6056
-----------------------------------------------------------------------------

-----------[000242][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 09:43:33 EST
From:      mo@maximo.UUCP
To:        comp.protocols.tcp-ip
Subject:   TCP requesting ARP flushes and other "layering violations"

Hmm, a recent note explaining how with 802.5, TCP may have to
request that the lower levels occasionally "re-ARP" and that
this is somehow inherently evil.  I would like to present an
alternative view.

Another way to consider the problem is that at some point,
for reasons it cannot know about, TCP decides correctly that
the path to its destination is failing.  There needs to be
a way for TCP to register a complaint with the lower levels that
it isn't happy with the level of service it's getting and would
like the lower levels to "try harder."  One could argue that
the lower levels should always "try their hardest," but their
connectionless nature often precludes them from getting enough
feedback to really evaluate the effectiveness of their efforts.
So, if TCP could say - "The path to host XY.Z.Z.Y seems to be
screwed - please do anything you can to remedy the situation,"
several useful scenarios become possible.  Among them are 
redundantly reliable local cables.

The current IP and localnet architectures make it very difficult
to get improved local reliability by the simple procedure of
laying two cables (whatever that means) and installing two
interfaces in each machine.  In the simple case,
the two cables essentially MUST have separate IP network numbers
(or at least separate subnetwork numbers),
and if one cable fails, all the TCP connections will die, because
the interfaces, not the hosts, have the Internet addresses
and there is no cleverness in the middle to reroute traffic.

The next approach is to introduce a "virtual local cable driver"
which sits atop multiple interfaces which you want to consider
the same Internet Network.  The idea is that the indirect driver
can then consider which interface to use based on delivery
success.  In Ring networks with back-channel non-delivery
information, this can work well.  With Ethernets, this is
very difficult.  One simple approach is to just send
the packet on BOTH wires!!  This is a tremendous test of
your hosts' reassembly and redundant segment discard code.
It also causes the network to use twice as much CPU time as
it would otherwise.

If, on the other hand, we could get some feedback from above
that indicated we are having path problems, then we can re-ARP
on alternate cables (assuming the cache keeps wire affinity
information) and pick up before TCP starts dropping connections.

This scenario generalises to other link media like "dialup"
connections through digital PBX's and ISDN networks as well.

Maybe the real point is that error recovery and control is
link-specific, and the procedures can often keep things going
in the face of serious problems.  But currently in most 
implementations, the low-level link drivers do not get enough
information on link quality from the modules which are in the
best position to know about it on a global scale.  Link drivers
clearly know something about the link, but the global information
may be crucial for some kinds of error recovery, particularly
for purely datagram links.

Currently, this kind of feedback is considered a "layering
violation" by some.  I suggest that either this notion of layering
is wrong, or people have a very stilted view of the interaction between
layers.

	-Mike O'Dell

-----------[000243][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 10:40:10 EST
From:      stevea@laidbak.UUCP (Steve Alexander)
To:        comp.protocols.tcp-ip,comp.dcom.lans
Subject:   Re: NCSA PC Telnet 2.0 - need bugs address

The bug address for NCSA Telnet is:

telbug@ncsa.uiuc.edu, telbug%newton@uxc.cso.uiuc.edu, or
...!{ihnp4,convex,pur-ee}!uiucdcs!uiucuxc!newton!telbug
-- 
-- Steve Alexander, NFS Group | stevea%laidbak@sun.com
Lachman Associates, Inc.      | ...!{ihnp4,sun}!laidbak!stevea
1-800-LAI-UNIX x256	      | 1-312-505-9100 x256

-----------[000244][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 11:19:26 EST
From:      braden@VENERA.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  TCP requesting ARP flushes and other "layering violations"

Mike O'Dell writes:

	
	Another way to consider the problem is that at some point,
	for reasons it cannot know about, TCP decides correctly that
	the path to its destination is failing.  There needs to be
	a way for TCP to register a complaint with the lower levels that
	it isn't happy with the level of service it's getting and would
	like the lower levels to "try harder."  One could argue that
	the lower levels should always "try their hardest," but their
	connectionless nature often precludes them from getting enough
	feedback to really evaluate the effectiveness of their efforts.
	So, if TCP could say - "The path to host XY.Z.Z.Y seems to be
	screwed - please do anything you can to remedy the situation,"
	several useful scenarios become possible.  Among them are 
	redundantly reliable local cables.
	
	...
	
	Currently, this kind of feedback is considered a "layering
	violation" by some.  I suggest that either this notion of layering
	is wrong, or people have a very stilted view of the interaction between
	layers.
	
		-Mike O'Dell
	
	
Actually, this particular "creative" layer "violation" is very much a part
of the long-accepted requirements for a well-designed TCP/IP
implementation.  It is explicitly discussed in one of the "Dave Clark
Five" papers, entitled "Fault Isolation and Recovery" (RFC816).  It is
unfortunately true that there are some TCP/IP implementations extent
in the Internet which do not have this important feature; however, the
requirement was clearly laid out in Dave's paper.  You don't have to
apologize for it.

Bob Braden

-----------[000245][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 12:11:38 EST
From:      minshall@OPAL.BERKELEY.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: TCP requesting ARP flushes and other "layering violations"

>   Hmm, a recent note explaining how with 802.5, TCP may have to
>   request that the lower levels occasionally "re-ARP" and that
>   this is somehow inherently evil.  I would like to present an
>   alternative view.

	Interestingly enough, this is an area where the source routing
scheme of IBM/802.5 is superior to the "proxy-ARP" routers (and maybe to
the TransLan ether bridge schemes).  With the latter approaches, if a
bridge/router which is not within "local broadcast" range fails, then there
is no way for the local TCP to request that the path be redetermined.
In the source route (802.5 variety) scheme, the local TCP merely causes a
new route to be determined via a new "all rings broadcast" XID.

	However, I do agree with Mike O'Dell that some method of improving
"local reliability" is a very desirable goal (and is one reason I like
token rings; given that they pass back "delivered" and/or "addressee
unknown" indications).  The flow of information though (as the token
ring case points out) needs to go both ways.  The upper layers need
to be able to say "hey, end-to-end seems to have fallen apart", and
the lower layers need to be able to say "hey, your local packet hasn't
made it because of reason XXX".  The 802.2/x standards address the
latter issue; you get it from the MAC layer in 802.5 (and maybe 802.4),
and from the LLC layer with 802.3 IFF you are a Type 2 802.2 user
(Type 2 ==> link level acks, "flow control", etc.).

	802.2/x doesn't mention the first problem at all (allowing the
upper layers to be able to say "hey, there is some end-to-end problem").
It might be an interesting addition.

Greg Minshall

-----------[000246][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 14:29:09 EST
From:      hedrick@TOPAZ.RUTGERS.EDU (Charles Hedrick)
To:        comp.protocols.tcp-ip
Subject:   Re:  TCP requesting ARP flushes and other "layering violations"

Actually, 4.3 already has a procedure whereby TCP can notify the lower
levels that a connection is failing.  It's not obvious to me that this
is a layering violation.  It seems to me that the TCP layer is the
only one that can know when things are timing out, and that having it
notify the layer that knows wha to do is perfectly appropriate.  What
would be wrong would be for the TCP code to directly manipulate the
routing database.  Currently this notification has an effect only for
routes that are going through gateways.  It marks the route as down.
In my view it would be perfectly appropriate that if a route is local,
the arp entry should be killed.  Indeed I have considered adding such
code, and may yet do so.  Our main problem with this mechanism is that
not all applications use protocols that can detect failure in this
way.  E.g. the domain system does not.  Of course named does retry,
but the retries are done at the application level.  Unless we have
every program that does this use an ioctl to notify the system that a
route is failing, we can't depend upon this mechanism.  This wouldn't
really be a layering violation, but it would be ugly.  I'm still not
sure what the right solution is, but we now have enough redundancy in
our network that it is worth coming up with one, and I am committed to
doing so soon.  I can't simply run routed on each machine, because
that will cause synchronized paging on diskless machines.
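
For concreteness, a sketch of such a notification (made-up names, not the
actual 4.3 routines):

    /* Hypothetical helpers standing in for the real routing/ARP code. */
    extern int  route_is_local();     /* is dst on a directly attached net? */
    extern void arp_delete();         /* forget the cached hardware address */
    extern void route_mark_down();    /* existing treatment of gateway routes */

    /* Called by TCP when retransmissions suggest the path is failing. */
    void tcp_path_failing(unsigned long dst)
    {
        if (route_is_local(dst))
            arp_delete(dst);      /* the next packet re-ARPs and may find the
                                   * host via an alternate cable or bridge */
        else
            route_mark_down(dst); /* let the routing code try another gateway */
    }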

-----------[000247][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 16:27:24 EST
From:      chris@gargoyle.UChicago.EDU (Chris Johnston)
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet - Hyperchannel Gateway

I had a meeting last week with Hyperchannel sales people including
Dan Friegard(sp?) who is their IP product manager.

They have come out with some high performance products for connecting
ethernets.

The Ethernet Bridges route ethernet over 10 and 50 Mb/s hyperchannel
links.  (EN601 & EN603)  They claim that 10 Mb/s hyperchannel gets
better throughput than ethernet because they use collision avoidance
instead of collision detection.

Aside: Hyperchannel can run at 10 & 50 Mb/s and can be
interconnected using T1, T2, T3, coax, fiber optic and satellite
links.  Four 50 Mb/s hyperchannels can be run in parallel for 200Mb/s
throughput.  Cray to Cray tops out at 20Mb/s.

The IP router (EN641) speaks EGP, RIP, and HELLO (gated).  And
handles IP, ICMP, and ARP.  It can have from 4-16 ethernets and 1-4
hyperchannels.  Peak performance is 2k packets/s per ethernet and 10k
packets/s through the router backplane.

All this information is from suits (salesman), reality may diverge...

Anyone have any real life, hands on opinions of these things?

cj
-- 
* -- Chris Johnston --        * UChicago Computer Science Dept
* chris@gargoyle.uchicago.edu * 1100 East 58th Street
* ...ihnp4!gargoyle!chris     * Chicago, IL  60637
* johnston@uchicago.BITNET    * 312-702-8440

-----------[000248][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 16:45:38 EST
From:      chris@gargoyle.UChicago.EDU (Chris Johnston)
To:        comp.protocols.tcp-ip
Subject:   Re: Codenoll Fiber modems

We have both Cisco gateways and Codenoll fiber optic Ethernet
transceivers (modems).

We have been using Codenoll for 3 years now and have been very
pleased with the reliability.  We have a fiber optic backbone
connected at the center by a passive star.  Gateways are at the ends
of the fiber arms.  Coax and fiber Ethernets radiate from the
gateways.

This is a very good topology (no loops) and extremely reliable
technology (no midnight cable taps destroying your network).  The
gateways protect your backbone bandwidth from most net nonsense.

For us this has been a low-cost, low-maintenance, high-reliability
configuration.

cj


-- 
* -- Chris Johnston --        * UChicago Computer Science Dept
* chris@gargoyle.uchicago.edu * 1100 East 58th Street
* ...ihnp4!gargoyle!chris     * Chicago, IL  60637
* johnston@uchicago.BITNET    * 312-702-8440

-----------[000249][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 17:09:01 EST
From:      jr@lf-server-2.BBN.COM (John Robinson)
To:        comp.protocols.tcp-ip
Subject:   Re: TCP requesting ARP flushes and other "layering violations"

In article  <8710261735.AA02484@uunet.UU.NET> mo@maximo.UUCP writes:
>Currently, this kind of feedback is considered a "layering
>violation" by some.  I suggest that either this notion of layering
>is wrong, or people have a very stilted view of the interaction between
>layers.
>
>	-Mike O'Dell

For what it's worth, ISO (hence ANSI) have a standard called
multi-link which allows two or more point-to-point links to function
as a single data link layer entity, so the notion that there should be
a way to provide backup or parallel paths at layer 2 is not at all
foreign to the layer-conscious.  X.75 and now X.25 incorporate
multi-link as well.  Anyone want to work on the "multi-link"
extensions to 802.2?  (Only half :-)
-- 
/jr
jr@bbn.com or jr@bbn.uucp

-----------[000250][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 20:06:04 EST
From:      karels%okeeffe@UCBVAX.BERKELEY.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: On broadcasts, congestion and gong

Please, check your facts before sending flames to mailing lists!
4.3BSD does NOT do IP forwarding on singly-homed hosts, although
the ipforwarding variable is not cleared.  Only hosts with multiple
hardware interfaces with IP addresses need to be configured consciously
to avoid packet forwarding; such hosts need to be configured carefully
in any case.  4.2's forwarding behavior was different, of course.
Neither decision was random.

The recent comments about escaped broadcast packets had a number
of inaccuracies.  In particular, unrecognized broadcast messages wouldn't
be treated any differently than a host on the same network as the broadcast;
thus, they wouldn't be sent using a default route unless there were no
route to the destination network.  This is very unlikely for a directly-
connected network.  4.2 didn't recognize many current broadcast addresses;
4.3 recognizes most attempts to concoct broadcast addresses on the local
networks.  Of course, broadcasts for nonexistent subnets (or packets for
hosts on nonexistent subnets) are sent using the default route.  That's
one reason that gateways that are advertised as defaults within subnetted
networks should never use default routes themselves.  (However, Kirton's
EGP probably doesn't check for net 0, the "default" net, on input.)

		Mike

-----------[000251][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 20:08:42 EST
From:      kumar@hpindda.UUCP
To:        comp.protocols.tcp-ip
Subject:   gateway perf. references

I would appreciate it if someone could give me references in literature
on the topic of characterizing gateway performance.
Thanks,
-krishna kumar
(hpda!hpindda!kumar@hplabs.hp.com)

-----------[000252][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 21:28:13 EST
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  On broadcasts, congestion and gong

Mike,

Your scenario about unrecognized broadcast packets and sneak paths via
default gateways is exactly what happened in the recent case where
net 128.220 RIP packets crashed and burned at linkabit-gw. Fortunately,
the RIP packets did not survive the airspace to NSFNET client networks,
where the collateral damage might have been spectacular.

The recent turbulence when net 0.0.0.0 was accidentally squawked to the
corespeakers and evidently was squawked right back to the Kirton
code on one or more systems seems to confirm your suspicion that
a "default" sneaked via EGP can run amok on the local net. This should
be fixed.

My comment about RFC-1009 compliance was also intended for you. Can
you remark on the compliance of 4.3, lack of it or reasons wherefore?

Dave

-----------[000253][next][prev][last][first]----------------------------------------------------
Date:      Tue, 27-Oct-87 22:40:41 EST
From:      JBVB@AI.AI.MIT.EDU ("James B. VanBokkelen")
To:        comp.protocols.tcp-ip
Subject:   Re: ..."layering violations"

One has to be at least a little suspicious of layering violations, if
for no other reason than that if you go blithely installing hooks and odd
interdependencies, you will wind up with a tangled morass of code that
can't be enhanced, or ported, or even maintained.  Layering usually
exists because the designers wanted modularity and relatively clean
interface specifications on module boundaries.

Handling RIF-cache flush on TCP timeouts means adding another hook,
and dummy routines to the other low-level routing layers.  Maybe
some of the other routing layers could use it, too.  It certainly
represents more code, and more complexity (and work, and money for
initial purchase and support).

Even something like ICMP illustrates this:  Essentially all TCPs will
return an error to their caller when they receive a Reset.  An ICMP
Destination Unreachable message implies much the same thing, but many
TCP/IPs won't return an error to the caller.  I won't defend this, but
I certainly understand why:  Handling ICMP Destination Unreachable requires
a 2nd, parallel demultiplexing path through IP and into the TCP, and it
is not absolutely required during the initial rush to get on the air.
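
For the curious, a sketch of that second demultiplexing path (the lookup
routine here is hypothetical; a real stack would search its connection
table and then decide, per local policy, whether the error is fatal or
merely advisory):

    #include <sys/types.h>
    #include <netinet/in_systm.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <netinet/ip_icmp.h>
    #include <netinet/tcp.h>

    /* Hypothetical: locate the TCB for a (local, foreign) address/port pair. */
    extern char *tcp_find_connection();

    /* Called with an already-validated ICMP Destination Unreachable message. */
    void icmp_unreach_to_tcp(struct icmp *icp)
    {
        struct ip *oip = &icp->icmp_ip;       /* embedded offending IP header */
        struct tcphdr *oth;

        if (oip->ip_p != IPPROTO_TCP)
            return;
        /* The 64 bits following the embedded IP header hold the TCP ports. */
        oth = (struct tcphdr *)((char *)oip + (oip->ip_hl << 2));
        (void) tcp_find_connection(oip->ip_src, oth->th_sport,
                                   oip->ip_dst, oth->th_dport);
        /* ... then report the error to that connection's owner, or ignore it
         *     if local policy treats unreachables as merely advisory. */
    }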

A seer employed by one large network user I know of has pronounced that
1 Mb of memory will be necessary to implement ISO.  I don't know if he/she/it
is right, but the pronouncement certainly made more than one manufacturer
jump...

jbvb

-----------[000254][next][prev][last][first]----------------------------------------------------
Date:      Wed, 28-Oct-87 02:13:55 EST
From:      chris@MIMSY.UMD.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: (milnet) problems with IMP connection hanging

Our problem has been diagnosed as a hardware failure.  Well, at
least that helps remind us that not *all* problems are software
(as I keep claiming when my own code crashes :-) ).

Chris

-----------[000255][next][prev][last][first]----------------------------------------------------
Date:      Wed, 28-Oct-87 11:42:01 EST
From:      skip@ubvax.UB.Com (Stayton D Addison Jr)
To:        comp.protocols.tcp-ip,comp.dcom.lans
Subject:   Re: A handful of 802.X questions

In article <12030@labrea.STANFORD.EDU> morgan@jessica.stanford.edu (RL "Bob" Morgan) writes:
>In the wake of the IP-over-802 draft RFC, I'd like to ask a few
>questions:
>
>1.  Is any network anywhere using 802.2 over 802.3?  Does anyone
>running a large Inter-Ethernet have any plans to move to 802.2 over
>802.3?
>

UB's Token Ring - Ethernet Data Link Bridge puts the 802.5 packets on the
802.3 network, 802.2 LLC headers and all.

My incomplete understanding is that HP's TCP/IP implementation uses 802.3,
not Ethernet.  I don't know whether they use the 802.2 LLC headers.  

>2.  Is there any published justification for the use of source routing
>in IBM's bridging of token-rings, given its apparent violation of the
>network/logical-link/MAC layering principles?  Can anyone anywhere
>defend it?
>

Without endorsing the concept, I can repeat what I understand to be the
primary argument.  For security or other considerations (error rates, etc),
a node may want to direct the flow of its packets thru the internetwork to 
avoid certain routes/links.  In practice, I doubt many nodes have or will
ever do this.  

>3.  Is anyone anywhere doing MAC-level interconnection of token-ring
>and Ethernet/802.3 networks?
>

Yes.  Ungermann-Bass has announced an 802.3-802.5 Data Link Bridge.  
It doesn't do anything with Ethernet frames, however; just 802.3 (unless
the frames are from Ungermann-Bass equipment).  The problem is mainly the
ETYPE field which can not be derived from anything in the 802.5 header.

>4.  Has the 802.1 committee published anything about what it is up to?
>

I don't attend the meetings, but I understand that rev B of the 802.1
internetworking draft standard was published in March.  It was to be voted
on about now, I think.  

>5.  Is SNAP an official part of 802.2?  Is there anything written down
>about it anywhere?  It's not in my copy of 802.2.
>

Don't know.

>I've got lots more, but those will do for now.
>
>In wonderment,
>
>- RL "Bob" Morgan
>  Networking Systems
>  Stanford University
>  morgan@jessica.stanford.edu


-- Skip Addison
   {lll-crg, decwrl, ihnp4}!amdcad!cae780!ubvax!skip
   or sun!amd!ubvax!skip

-----------[000256][next][prev][last][first]----------------------------------------------------
Date:      Wed, 28-Oct-87 14:27:38 EST
From:      lekash@ORVILLE.NAS.NASA.GOV.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: On broadcasts, congestion and gong


> Please, check your facts before sending flames to mailing lists!
> 4.3BSD does NOT do IP forwarding on singly-homed hosts, although
> the ipforwarding variable is not cleared.  Only hosts with multiple
> hardware interfaces with IP addresses need to be configured consciously
> to avoid packet forwarding; such hosts need to be configured carefully
> in any case.  4.2's forwarding behavior was different, of course.
> Neither decision was random.

Sorry, I didn't feel very flame-like.  Yes, I know BSD only
forwards on multiply homed hosts.  Perhaps "random" was a
poor word choice; you may substitute "carefully
considered".

Please do not consider what I said as a condemnation of 4.3BSD.
Of the operating systems we have here, it is the most reliable
and useful for my work.  I thank you for this, very much.

As you say, only hosts with multiple interfaces need to be carefully
configured in that fashion.  I interpret
this to mean gateways, since a host with two network
interfaces is only one line in /etc/rc.local more difficult than
a host with only one.  When the aforementioned decision was
made, you probably had not envisioned the kind of things that are
occurring at sites like this one.  Every workstation comes with an
Ethernet, and then has a HYPERchannel added for performance to/from
a supercomputer.  So we have about 50 hosts that have multiple
interfaces, and many more on the way.

I believe that a system should have a minimum amount of default 
rope with which to hang oneself.    And so, since gateways need
the careful configuration, a default of not being a gateway seems
appropriate.  

I did in fact configure many of them correctly, but
after a time one turns such things over to others.  New distributions
come out of vendors, and someone just mangles their workstation, to
bring it up a rev.  Of course, the vendor has distributed it as they
got it, and the aforementioned careful configuration is wiped
out, despite written instructions that manage to disappear, and
aren't necessary, because 'it works'.  The brokenness is then
only found the next time I happen to spend a half hour
groveling with netwatch, one of the lesser exciting tasks.

I happen to have noted difficulties because of some degree of
perceived inconsistency.  There exists the "options GATEWAY" kernel
configuration parameter, which I would think is a useful mnemonic hook
on which to hang the enabling of ipforwarding and the sending of
redirects; it wasn't used that way last I looked.  It does control
sending destination unreachables.  It is fairly common for people
to do the minimum necessary to configure something, and thus to set
GATEWAY but forget to set IPFORWARDING and IPSENDREDIRECTS as well.
I will admit it is annoying to have to deal with the possible
modes of screwing up, but it then means less work for me later on.
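
The kind of default being argued for here, as a sketch (this is not the
actual 4.3 source, just the idea):

    /*
     * A kernel built without "options GATEWAY" should neither forward
     * packets nor send redirects, so a plain workstation cannot act as
     * a gateway merely because someone forgot two config lines.
     */
    #ifdef GATEWAY
    int ipforwarding    = 1;    /* forward packets between interfaces */
    int ipsendredirects = 1;    /* advise hosts of better first hops */
    #else
    int ipforwarding    = 0;
    int ipsendredirects = 0;
    #endif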

						john

-----------[000257][next][prev][last][first]----------------------------------------------------
Date:      Wed, 28-Oct-87 14:45:30 EST
From:      mshiels@orchid.UUCP
To:        comp.protocols.tcp-ip,comp.dcom.lans
Subject:   Re: NCSA PC Telnet 2.0 - need bugs address


I also have a small complaint that it is not configurable enough.

You can't specify the IO port address??  Where is it by default??

Other than that the documentation is great.

-- 
  Michael A. Shiels (MaS Network Software)
  mshiels@orchid.waterloo.EDU
UUCP: ...path...!watmath!orchid!mshiels

-----------[000258][next][prev][last][first]----------------------------------------------------
Date:      Wed, 28-Oct-87 17:33:04 EST
From:      karn@faline.bellcore.com (Phil R. Karn)
To:        comp.protocols.tcp-ip
Subject:   Re: ..."layering violations"

> Even something like ICMP illustrates this:  Essentially all TCPs will
> return an error to their caller whenb they receive a Reset.  An ICMP
> Destination Unreachable message implies much the same thing, but many
> TCP/IPs won't return an error to the caller....

There is something to be said for this. ICMP unreachable messages are
often generated under transitory conditions.  Many TCPs (e.g., BSD UNIX)
bomb out when they get an ICMP unreachable message, even if a connection
has already handled many packets successfully. I find this *most*
annoying; TCP is supposed to provide reliable end-to-end communications,
not bomb out at the slightest provocation from the underlying Internet.

There are basically two classes of network applications: interactive
users and automatic daemons.  An interactive network command should always
leave it up to the user to decide whether he or she wants to give up or
be patient and see if the problem goes away. An automatic daemon is much
more patient, so as long as the TCP is careful to avoid wasting network
resources (e.g., by backing off) I see little reason it shouldn't just
keep trying "forever".  ICMP unreachable messages are very useful
debugging tools (or at least they would be if they actually contained
the source address of the complaining gateway instead of some nonsense
IP address like that of the original destination). But they shouldn't
affect TCP's operation without the consent of the user.

Even worse are TCP "keep alive" timers.  Not only do they waste network
resources, as far as I'm concerned they serve no useful purpose at all.
If I have a long-idle TCP connection, why should I care if the path
goes down temporarily? I certainly don't want my connection aborted
because of it.

Phil

-----------[000259][next][prev][last][first]----------------------------------------------------
Date:      Wed, 28-Oct-87 21:52:37 EST
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  problems with IMP connection hanging

Charles,

For completeness, I offer the observation that the fuzzball gateways attached
to ARPANET apparently do not have this problem. But then, they are pretty
crude gizmos and don't even count RFNMs. Interesting that they don't count
RFNMs...

Dave

-----------[000260][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 08:35:42 EST
From:      steve@BRILLIG.UMD.EDU (Steve D. Miller)
To:        comp.protocols.tcp-ip
Subject:   Re: problems with IMP connection hanging


   More on our PSN problems:  it seems that our ECU link to MILNET PSN 57
was loose at the PSN end; tightening the cable fixed the problem, at least
temporarily.  A short while later, our link was again down.  I fiddled
various things under the direction of the NOC, but nothing seemed to help.
They told me it was a software problem on our end (and I argued loudly,
'cause the software hasn't changed in an eternity), and they finally
suggested going to our backup ECU.  We plugged in and powered up, and all
seems well now.

   Our PSN problems were likely caused by hardware problems with the first
ECU, but things behaved so strangely that I won't be convinced of that until
some hardware guy verifies the ECU failure.  Strange...  but probably
unrelated to the problems others are having.

	-Steve

-----------[000261][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 08:39:00 EST
From:      CLYNN@G.BBN.COM.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: ..."layering violations"

Phil,	ICMP messages DO contain an IP source address of the gateway
which sent the message - look in the IP header's Source Address field.
If the available software does not include such basic information
when the ICMP message is passed by IP to the ICMP "layer", complain to
the vendor.  I would also complain if the subnet address were not passed
up, or if the info wasn't available at the TCP or higher layers; I hope
that such additional information doesn't fall into the category which is
the subject of this message.  While you may find reset connections
"annoying", I suspect that others might add it to their list of "denial
of service attacks".  Remember ... Validate your input before you
process it!

-----------[000262][next][prev][last][first]----------------------------------------------------
Date:      29 Oct 1987 08:39-EST
From:      CLYNN@G.BBN.COM
To:        faline!karn@BELLCORE.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: ..."layering violations"
Phil,	ICMP messages DO contain an IP source address of the gateway
which sent the message - look in the IP header's Source Address field.
If the available software does not include such basic information
when the ICMP message is passed by IP to the ICMP "layer", complain to
the vendor.  I would also complain if the subnet address were not passed
up, or if the info wasn't available at the TCP or higher layers; I hope
that such additional information doesn't fall into the category which is
the subject of this message.  While you may find reset connections
"annoying", I suspect that others might add it to their list of "denial
of service attacks".  Remember ... Validate your input before you
process it!
-----------[000263][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 10:17:03 EST
From:      T45@A.ISI.EDU (RSRE@SRI-NIC.ARPA, Malvern@SRI-NIC.ARPA, UK)
To:        comp.protocols.tcp-ip
Subject:   Re: NCSA PC Telnet 2.0 - need bugs address

Bruce,
      Many thanks for talking to Bill Fifer; it will help the Embassy
people. Yet again I have been defeated by arbitrary financial micro-
management! I do not understand how the military expect to develop
highly automated and advanced C3I systems if they won't fund
demonstrators in supporting base technology like Packet Radio.
I hope your enterprise is thriving in these uncertain financial times.

Cheers,
Brian.
-------

-----------[000264][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 10:37:00 EST
From:      snorthc@NSWC-OAS.ARPA
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet Bridge

You might want to consider the family of products by Bridge Communications
Inc.  They may not be the price leader in boxes to connect ethernets,
but I have never heard anyone malign their products.  Their products
seem to work, the manuals are readable and they really do give tech
support when in trouble.

Bridge is not without serious competition.  I suspect you will find
attractive boxes from Cisco and CMC at a slightly lower price.

I think UB might have such a gadget as well.

We are interested in such things ourselves, so anyone who has something
to contribute should consider themselves welcome.

	Stephen Northcutt (snorthc@nswc-g.arpa)

-----------[000265][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 10:52:23 EST
From:      case%utkvx4.DECNET@UTKCS2.CS.UTK.EDU (J. D. CASE)
To:        comp.protocols.tcp-ip
Subject:   history and philosophy

I am teaching a course on the TCP/IP protocol suite and internetworking.  We
are using the RFCs as the text and as source material for a "workbook".  While
going through some RFCs, a question has come up that I would like help answering.

My students would like to know what the thinking was behind including the
pseudo-header in the TCP checksum algorithm (RFC 793, pp. 16-17) and in the
UDP checksum algorithm (RFC 768, p. 2).
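
For anyone reviewing those two passages, the pseudo-header itself is small;
a rough C rendering of the fields it contains (the names here are illustrative,
not taken from any particular implementation) looks like this:

	/*
	 * Pseudo-header covered by the TCP and UDP checksums (RFC 793 p. 16,
	 * RFC 768 p. 2).  It is never transmitted; it is conceptually
	 * prefixed to the segment when the ones-complement sum is computed.
	 */
	struct pseudo_header {
		u_long	ph_src;		/* IP source address */
		u_long	ph_dst;		/* IP destination address */
		u_char	ph_zero;	/* always zero */
		u_char	ph_proto;	/* IP protocol number (6 = TCP, 17 = UDP) */
		u_short	ph_len;		/* TCP or UDP length, header + data, in octets */
	};

In practice the checksum code usually folds these fields into the running
ones-complement sum directly rather than declaring such a struct, to avoid
alignment padding.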

Do any of you old crusty souls who were present in the discussions leading to
those documents remember some of the thinking that led to those decisions?

Help appreciated in advance.

Jeff Case
case@utkcs2.cs.utk.edu

-----------[000266][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 14:06:28 EST
From:      karn@FALINE.BELLCORE.COM (Phil R. Karn)
To:        comp.protocols.tcp-ip
Subject:   Re: ..."layering violations"

>Phil,	ICMP messages DO contain an IP source address of the gateway
>which sent the message - look in the IP header's Source Address field.

Yes, I know they're *supposed* to. I meant to say that I've seen many
gateways use the destination IP address of the original datagram as the
source address for the IP datagram containing the ICMP message, and this
makes it impossible to discern where the problem is.

One implementation around here returns every broadcast it sees with
a "port unreachable" ICMP message and puts the IP broadcast address in
the IP source field!

Phil

-----------[000267][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 17:16:11 EST
From:      edward@comp.vuw.ac.nz.UUCP
To:        comp.mail.misc,comp.protocols.tcp-ip
Subject:   can pmdf be run over tcp-ip?

We're just beginning to use pmdf & tcp on Unix & VMS Vaxen.  Is
there any way to run pmdf over tcp-ip instead of phone links?  As
this is probably a simple query, please reply via email. Thanks.

-- 
Ed Wilkinson  ...!uunet!vuwcomp!edward  or  edward@comp.vuw.ac.nz

-----------[000268][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 17:38:24 EST
From:      cam@columbia-pdn.UUCP
To:        comp.protocols.tcp-ip
Subject:   Source routing via the 4.x socket interface

Folks,

Is a socket interface user permitted to specify a source route, and if so,
how is it accomplished?

Chris Markle - cam%columbia-pdn@acc-sb-unix.arpa - (301)290-8100

-----------[000269][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 18:37:00 EST
From:      jslove@starch.DEC.COM (J. Spencer Love, 617-841(DTN 237)-2751, SHR1-3/E29)
To:        comp.protocols.tcp-ip
Subject:   Change my subscription

Please remove "JSLove@MIT-Multics.ARPA" from the TCP-IP distribution
list, and substitute "JSLOVE%STARCH.DEC.COM@DECWRL.DEC.COM"
Thank you,
				-- SpeZZ !Z

-----------[000270][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 19:47:38 EST
From:      NIC@SRI-NIC.ARPA (DDN Reference)
To:        comp.protocols.tcp-ip
Subject:   ARPANET UPGRADE SCHEDULE - REVISED

This message is to advise you that there is a revised schedule for the PSN
7.0 upgrade on the ARPANET.  Please note that there will be one more
weekend of testing. 



Date:			Event:

9/24/87-		Node group one sites will be upgraded to
9/26/87			PSN 7.0  ACTIVITY COMPLETE

10/1/87-		Node groups two through five will be upgraded 
10/3/87			to PSN 7.0  ACTIVITY COMPLETE

10/17/87		All ARPANET nodes will be running PSN 7.0 with
			the Old End-to-End protocol configured.
			ACTIVITY COMPLETE

10/24			Node group one will be cutover to the 
			New End-to-End; operation will be verified;
			and the nodes will be returned to the Old
			End-to-End.

10/31			The rest of the node groups will be cutover to
			the New End-to-End; operation will be verified;
			and the nodes will be returned to the Old
			End-to-End.

11/7			The entire ARPANET will be cutover to use the 
			New End-to-End protocol; operation will be
			verified; and the nodes will be returned to the
			Old End-to-End.

11/14			The entire ARPANET will be cutover to the 
			New End-to-End protocol; operation will be
			verified; and the New End-to-End protocol
			will be left running.

WARNING:  As with any new network software, there is the potential to
          impact host software.  Should this occur, please call the
	  BBN Hotline (800-492-4992) to report any difficulties.

For those that are interested, there will be a meeting to discuss
ARPANET network problems at the Mitre Wilson Building, 7600 Old
Springhouse Road, McLean, VA on 3 November 1987 from 10:30 - 12:30 in
the 4th Floor Conference Room. 
-------

-----------[000271][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 21:27:12 EST
From:      mdove@mips.UUCP (Mike Dove)
To:        comp.protocols.tcp-ip
Subject:   Wanted: TCP/IP source


NOTE: This is a posting for a friend, please respond directly to his
      address below...

We are looking for TCP/IP source to purchase, or source that is in the public
domain.  It is very important to us that it be a FULL implementation
of TCP/IP.  By this I mean that it has been thoroughly tested and is
believed to be a complete implementation (like BSD TCP/IP, for example).

Any pointer to where I might find what I am looking for would be
much appreciated.

Thanks,
Glenn Boozer
{decvax,ucbvax,ihnp4,sun}!decwrl!mips!leedat!root

-----------[000272][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 22:05:42 EST
From:      martillo@ATHENA.MIT.EDU
To:        comp.protocols.tcp-ip
Subject:   Separation of Layers


I seem to remember reading (perhaps in an article by Padlipsky) that
if a process receives a packet with the expected source and destination
TCP ports and the expected destination IP address, but an unexpected IP
source address, such an incident should be considered an error.  I would
think that, for a multihomed host, selecting a new interface would be a
perfectly reasonable way to handle an interface error.  Also, moving to
a new interface would be one way of handling load sharing.  The
recipient should be smart enough to start sending IP packets to the
new interface.  If IP and TCP are really separate layers, this
analysis makes sense to me.  Is this the way TCP/IP is implemented on
multihomed hosts?
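
For context, a rough sketch of the demultiplexing check most implementations
perform (not taken from any particular kernel; structure names follow the
BSD <netinet/ip.h> and <netinet/tcp.h> conventions).  A segment is matched
to a connection by all four of local/foreign address and port, which is why
a datagram arriving with an unexpected source address will not match an
existing connection:

	struct conn {
		struct in_addr	c_laddr, c_faddr;	/* local, foreign IP address */
		u_short		c_lport, c_fport;	/* local, foreign TCP port   */
	};

	int
	matches(c, ip, th)
		struct conn *c;
		struct ip *ip;
		struct tcphdr *th;
	{
		/* all fields kept and compared in network byte order */
		return (c->c_laddr.s_addr == ip->ip_dst.s_addr &&
			c->c_faddr.s_addr == ip->ip_src.s_addr &&
			c->c_lport == th->th_dport &&
			c->c_fport == th->th_sport);
	}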

Yaqim Martillo

-----------[000273][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29-Oct-87 22:23:44 EST
From:      asjoshi@phoenix.UUCP
To:        comp.protocols.tcp-ip
Subject:   RE: KA9Q TCP/IP in Turbo C

Hello,

Sorry to clutter the net with this message. My apologies to those who are not
interested.

I have gotten a number of requests for the Turbo C port of Philip Karn's
TCP/IP package. I had not really expected that more than a couple of people
would want it. The code is large and I don't have a diff utility on the AT
I am working on (in fact I don't even have the originals any more). I shall
upload the stuff over this weekend when uploading is easier (if it helps any
to know - out here I first have to load it onto an IBM 3081 and then transfer
it to a VAX). So I shall ask people to be patient. I am also trying to see if
it can be kept somewhere from which it can be FTP'd anonymously.

Sorry about not replying individually. Once again I would remind everyone that
I have not checked all the functions that came in the original code. Most of them
should work, since what I didn't need I did not touch, but some may not.





-- 
Amit Joshi         |  BITNET :  Q3696@PUCC.BITNET
                   |  USENET :  ...seismo!princeton!phoenix!asjoshi
"There's a pleasure in being mad ...which none but madmen know!" St.Dryden

-----------[000274][next][prev][last][first]----------------------------------------------------
Date:      Fri, 30-Oct-87 00:21:51 EST
From:      JBVB@AI.AI.MIT.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: PS/2 Model 80 network <19K memory


    ... Proteon P1300 Token Ring passing cards (which are now available
    for the model 80)

      Michael A. Shiels (MaS Network Software)

I'm sorry, but I must express skepticism about Proteon Micro Channel cards
being *available*.  Announced, maybe, but the only three network cards (for
common LAN media) you can actually *buy* for your Model 80, today, are the
IBM Token Ring Adapter (TOKREUI is the same as always), the Ungermann-Bass
NIC (not software-compatible with the older PC-bus NIC), and the 3Com 3C523
(not software-compatible with the 3C503 or 3C501).  Even the last is quite
hard to lay hands on.

jbvb

-----------[000275][next][prev][last][first]----------------------------------------------------
Date:      Fri, 30-Oct-87 06:57:00 EST
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: Separation of Layers

Yaqim,

I would be surprised if multi-homed hosts were implemented in such a way
as to make it indistinguishable to the source which net was being used.
Generally, in the scenario you describe, the arriving packet with the 
unexpected source IP address might be thought an intruder by the recipient.
How could you tell the difference between a source which is attempting a
spoof of a multi-homed host and a true multi-homed host? Logical
addressing at the IP level seems a more solid way of proceeding - of
course, this leaves unspoken how the various physical addresses are
bound/validated to the logical IP level address - for some networks,
this is done automatically, as in the DDN and ARPANET.

Vint Cerf

-----------[000276][next][prev][last][first]----------------------------------------------------
Date:      Fri, 30 Oct 87 12:43:52 -0500
From:      Craig Partridge <craig@NNSC.NSF.NET>
To:        munnari!comp.vuw.ac.nz!edward@uunet.uu.net
Cc:        tcp-ip@sri-nic.ARPA
Subject:   pmdf over tcp-ip

Ed,

    The answer is yes -- at least, recent versions of VMS PMDF can run
SMTP over TCP.

Craig

PS:  For those of you who don't know what PMDF is...  There are several
versions of PMDF (which in turn are based on MMDF).  VMS PMDF is Ned
Freed's version of PMDF which is loosely based on the original Unix PMDF
by Ira Winston, which in turn is based on early versions of MMDF (by
Dave Crocker).  The 'P' stands for Pascal -- it was originally a simple
mail system designed for use with CSNET's PhoneNet.  The VMS version
has grown to rival MMDF in size and complexity.
-----------[000277][next][prev][last][first]----------------------------------------------------
Date:      30 Oct 87 12:53:00 PST
From:      "Dave Crocker" <dcrocker@twg.arpa>
To:        "tcp-ip" <tcp-ip@sri-nic.arpa>
Subject:   Re:  supdup protocol
This is a couple of weeks late, but Chris Kent cited a study of
splitting the front and back ends of an editor.  I, too, recall reading
that study quite a few years ago.  The basic concepts were quite 
straightforward, in terms that Chris described.

In particular, I remember the researcher (no, I haven't the foggiest idea
of who or where) was using 1200 baud and claimed an effective (subjective)
9600 baud for most activities.

In the early days of Interactive Systems, with an intelligent terminal
and tailored code added to it, they claimed highly effective interactions
with 1200 baud lines, using the INed editor.  This was circa 1978.

Dave
------
-----------[000278][next][prev][last][first]----------------------------------------------------
Date:      Fri, 30-Oct-87 13:18:48 EST
From:      dpowles@CCD.BBN.COM ("Drew M. Powles")
To:        comp.protocols.tcp-ip
Subject:   subnetting

We are doing a survey for a local client on IP implementations.  The
descriptions in the vendors' guide don't really go into the detail
we're looking for in one respect: subnetting support (a la RFC 950).  I
realize that both RFC 1011 and RFC 1009 require that subnetting be
supported, but I have run into many folks in the 'biz who insist that most
vendors neither implement nor plan to implement subnetting support.
For instance, does the Wollongong software implement RFC 950?  What
about Berkeley, or AT&T's 3B TCP/IP?  What about the gateway vendors
(of course I know about BBN's) such as cisco Systems?  I do note that the
Fuzzballs support it; the Fuzzball description was one of the few that
called out subnetting specifically.  How about the interface and
front-end vendors like ACC?
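
For readers new to RFC 950, the mechanism in question is small: a host or
gateway applies a locally configured mask, rather than just the address
class, when deciding whether a destination is on its own wire.  A rough
sketch with made-up numbers, assuming class A net 36 subnetted on the
third octet:

	u_long	mask = 0xffffff00;	/* 255.255.255.0 (host byte order)  */
	u_long	mine = 0x24510203;	/* 36.81.2.3, my own address        */
	u_long	dest = 0x24510307;	/* 36.81.3.7, packet's destination  */

	int on_local_net = ((mine & mask) == (dest & mask));	/* 0 here: use a gateway */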

Any and all help would certainly be appreciated.  You can send
directly to me, dpowles@bbn.com and I'll summarize to the community if
you'd like.

thanks!
Drew Powles
Columbia Professional Services
BBN Communications

-----------[000279][next][prev][last][first]----------------------------------------------------
Date:      Fri, 30 Oct 87 13:18:48 EST
From:      "Drew M. Powles" <dpowles@ccd.bbn.com>
To:        tcp-ip@sri-nic.arpa
Cc:        dpowles@ccd.bbn.com
Subject:   subnetting
We are doing a survey for a local client on IP implementations.  The
descriptions in the vendors' guide don't really go into the detail
we're looking for in one respect: subnetting support (a la RFC 950).  I
realize that both RFC 1011 and RFC 1009 require that subnetting be
supported, but I have run into many folks in the 'biz who insist that most
vendors neither implement nor plan to implement subnetting support.
For instance, does the Wollongong software implement RFC 950?  What
about Berkeley, or AT&T's 3B TCP/IP?  What about the gateway vendors
(of course I know about BBN's) such as cisco Systems?  I do note that the
Fuzzballs support it; the Fuzzball description was one of the few that
called out subnetting specifically.  How about the interface and
front-end vendors like ACC?

Any and all help would certainly be appreciated.  You can send
directly to me, dpowles@bbn.com and I'll summarize to the community if
you'd like.

thanks!
Drew Powles
Columbia Professional Services
BBN Communications


-----------[000280][next][prev][last][first]----------------------------------------------------
Date:      Fri, 30-Oct-87 15:53:00 EST
From:      dcrocker@TWG.ARPA ("Dave Crocker")
To:        comp.protocols.tcp-ip
Subject:   Re:  supdup protocol

This is a couple of weeks late, but Chris Kent cited a study of
splitting the front and back ends of an editor.  I, too, recall reading
that study quite a few years ago.  The basic concepts were quite 
straightforward, in terms that Chris described.

In particular, I remember the researcher (no, I haven't the foggiest idea
of who or where) was using 1200 baud and claimed an effective (subjective)
9600 baud for most activities.

In the early days of Interactive Systems, with an intelligent terminal
and tailored code added to it, they claimed highly effective interactions
with 1200 baud lines, using the INed editor.  This was circa 1978.

Dave
------

-----------[000281][next][prev][last][first]----------------------------------------------------
Date:      Fri, 30-Oct-87 18:06:13 EST
From:      estrin@OBERON.USC.EDU (Deborah L. Estrin)
To:        comp.protocols.tcp-ip
Subject:   Security and Access Restrictions

If you have a serious interest in security, then simply checking
IP addresses is not adequate, because it is very easy to spoof IP
addresses.  In addition, you might find it cumbersome to have a static list
of individual IP addresses if the network is large and decentralized.

I don't know of any other existing mechanisms in TCP/IP, but we are experimenting
with something called Visa. If you are interested, I can send you a paper
describing the scheme. Its intent is to solve the exact problem that
you describe, and I would be very interested in finding out if you
think it would actually do so!

In addition, please let us know if you discover other options as a result
of your query.

Deborah Estrin
Computer Science Dept
University of Southern California

-----------[000282][next][prev][last][first]----------------------------------------------------
Date:      Fri, 30-Oct-87 21:22:18 EST
From:      alan@mn-at1.UUCP (Alan Klietz)
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet - Hyperchannel Gateway

In article <784@gargoyle.UChicago.EDU> chris@gargoyle.uchicago.edu (Chris Johnston) writes:
<[Hyperchannel Ethernet router] performance is 2k packets/s per ethernet and 10k
<packets/s through the router backplane.
<
<All this information is from suits (salesman), reality may diverge...
<
<Anyone have any real life, hands on opinions of these things?

I've never seen a Hyperchannel deliver a thousand blocks per second,
much less two thousand.

From our experience, the actual sustained rate is 600/sec, assuming
no associated data and no contention on a dedicated trunk.  [Measured
between two Crays on a one meter coax trunk.]

The problem is that the internal adapter protocol introduces large
time delays, relative to the transmission rate, for collision avoidance.
The salesman must have been talking about virtual ethernet packets
collected into larger Hyperchannel blocks.

(By the way, I've never seen 20 Mbit/sec either.)

--
Alan Klietz
Minnesota Supercomputer Center (*)
1200 Washington Avenue South
Minneapolis, MN  55415    UUCP:  ..rutgers!meccts!mn-at1!alan
Ph: +1 612 626 1836              ..ihnp4!dicome!mn-at1!alan (beware ihnp4)
                          ARPA:  alan@uc.msc.umn.edu  (was umn-rei-uc.arpa)

(*) An affiliate of the University of Minnesota

-----------[000283][next][prev][last][first]----------------------------------------------------
Date:      Fri, 30-Oct-87 23:57:18 EST
From:      henry@utzoo.UUCP.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Null TCP/SLIP msg and list-servers

> ... The message was sent because
> the launch button is right next to the kill button (a case in point for
> user interface designers and missile operators)...

But for missile operators, the "launch" and "kill" buttons are the same!

(Sorry, couldn't resist.)

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

-----------[000284][next][prev][last][first]----------------------------------------------------
Date:      31 Oct 87 10:11:00 PST
From:      "Dave Crocker" <dcrocker@twg.arpa>
To:        "tcp-ip" <tcp-ip@sri-nic.arpa>
Subject:   Powles' Subnet Query
Drew,

Wollongong has a range of TCP products and they are not all up to the
same level of functionality.  However, almost all of them support a degree
of subnetting.  

In particular, the VMS product fully supports the feature, per RFC 950.

In my opinion, it is not reasonable for any newly-developed IP to omit
subnet support.  (However, it also is not reasonable to fault
an implementation that predates community acceptance of RFC 950.)

Dave Crocker
VP, Software Engineering
The Wollongong Group
------
-----------[000285][next][prev][last][first]----------------------------------------------------
Date:      31 Oct 87 11:08:00 PST
From:      "Dave Crocker" <dcrocker@twg.arpa>
To:        "tcp-ip" <tcp-ip@sri-nic.arpa>
Subject:   Ethernet-Hyperchannel gateway
This is a couple of weeks late, but...

The Wollongong Group's VMS TCP (WIN/TCP) supports ethernet, hyperchannel,
and IP routing.

Dave
------
-----------[000286][next][prev][last][first]----------------------------------------------------
Date:      Sat, 31-Oct-87 11:25:45 EST
From:      david@ukma.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet Bridge

We're also looking around for ether gateway boxes ...

One that looks very very very interesting is the LANbridge 100.

But there are security concerns.  One of the people in our
group is going to squawk over and over about how insecure we
are unless we're behind an IP gateway of some sort.  

What do people think about the security issues?  Right now, the
concern is someone creating a situation where one of our equiv
hosts is down; the bad guy boots a machine that claims to be
the now-down machine, creates an suid shell on another of
the equiv machines, and then goes away.  The assumption is that if
we're hiding behind an ip gateway then the gateway can see
that these packets coming in from outside, claiming to be
from inside our net, aren't valid and will toss them.

If we're instead using the LANbridge, then we don't have any way
of telling that the bad guy is coming in from elsewhere.

We made an informal survey of gateway techniques at the AT&T
users group meeting last spring in Colorado.  (I wasn't there
so don't know the details, but...) I was told that the people
fell into two groups.  One group used IP gateways and the other
used ether-level gateways.  Not one of the people using IP gateways
used them for purposes of security ...

Does everybody just ignore the security issue?  How do you sleep
at night?

Oh, we also realize what a huge security hole NFS is.

Somehow we sleep at night with that one.  At least most of
us do; the squawker mentioned above probably doesn't.... :-)
-- 
<---- David Herron,  Local E-Mail Hack,  david@ms.uky.edu, david@ms.uky.csnet
<----                    {rutgers,uunet,cbosgd}!ukma!david, david@UKMA.BITNET
<---- I thought that time was this neat invention that kept everything
<---- from happening at once.  Why doesn't this work in practice?

-----------[000287][next][prev][last][first]----------------------------------------------------
Date:      Sat, 31-Oct-87 13:11:00 EST
From:      dcrocker@TWG.ARPA ("Dave Crocker")
To:        comp.protocols.tcp-ip
Subject:   Powles' Subnet Query

Drew,

Wollongong has a range of TCP products and they are not all up to the
same level of functionality.  However, almost all of them support a degree
of subnetting.  

In particular, the VMS product fully supports the feature, per RFC 950.

In my opinion, it is not reasonable for any newly-developed IP to omit
subnet support.  (However, it also is not reasonable to fault
an implementation that predates community acceptance of RFC 950.)

Dave Crocker
VP, Software Engineering
The Wollongong Group
------

-----------[000288][next][prev][last][first]----------------------------------------------------
Date:      Sat, 31-Oct-87 14:08:00 EST
From:      dcrocker@TWG.ARPA ("Dave Crocker")
To:        comp.protocols.tcp-ip
Subject:   Ethernet-Hyperchannel gateway

This is a couple of weeks late, but...

The Wollongong Group's VMS TCP (WIN/TCP) supports ethernet, hyperchannel,
and IP routing.

Dave
------

-----------[000289][next][prev][last][first]----------------------------------------------------
Date:      Sat, 31-Oct-87 16:07:38 EST
From:      martillo@ATHENA.MIT.EDU
To:        comp.protocols.tcp-ip
Subject:   Separation of Layers


If I were really worried about spoofing, I would hardly depend on IP
address consistency to guard against spoofing.  I would suggest that
this chimerical protection against spoofing violates the logical
distinction between the IP and TCP layers.  I would suggest, in fact,
that a host supporting several level-3 (IP-like) layers should
permit the passage of data packets up to the TCP protocol handler, or
any other comparable-level protocol handler, and that the TCP-level protocol
handlers should not care from which of the level 3s the packet
originated.  Such a logical distinction seems to be lost if level 4
worries about the remote IP address and if level 3 worries about level-4
ports.  At MIT we used the subnet mask to provide some security for tftp
transfers.  I am not so sure this was such a good idea, though it did
work for our purposes.

Yakim Martillo

-----------[000290][next][prev][last][first]----------------------------------------------------
Date:      Sat, 31-Oct-87 18:54:48 EST
From:      egisin@orchid.UUCP
To:        comp.mail.misc,comp.protocols.tcp-ip
Subject:   Re: can pmdf be run over tcp-ip?

In article <13051@comp.vuw.ac.nz>, edward@comp.vuw.ac.nz (Ed Wilkinson) writes:
> We're just beginning to use pmdf & tcp on Unix & VMS Vaxen.  Is
> there  any way to run pmdf over tcp-ip instead of phone links? As
> this is probably a simple query, please reply via email. Thanks.

PMDF and PhoneNet are often confused.
PhoneNet is a transport protocol (for mail) over asynchronous lines.
There are a couple of implementations of it under Unix, and one is
called (inappropriately) "PMDF".

The PMDF package under VMS is a "mailer", like sendmail and MMDF on Unix.
It includes support for several transport protocols, including PhoneNet,
DecNet, and SMTP/TCP/IP.

So PMDF on VMS supports TCP/IP, but the TCP/IP itself is a separate package.

-----------[000291][next][prev][last][first]----------------------------------------------------
Date:      Sun, 1-Nov-87 02:44:44 EST
From:      karels@OKEEFFE.BERKELEY.EDU (Mike Karels)
To:        comp.protocols.tcp-ip
Subject:   Re: On broadcasts, congestion and gong

On RFC-1009 compliance of 4.3: I haven't looked at the RFC nearly
as closely after it was issued as before, and I'm not sure what
changes were made after the last draft.  (Not as many as I would
have liked, I guess; I sent in almost 10 pages of comments on the
last draft.)  However, 4.3 comes fairly close to compliance.  As
a host (esp. singly-homed), it is quite conservative about what it
will respond to, and it won't forward packets.  As a gateway, it
does nearly everything as the RFC specifies, with some minor
exceptions.  For example, it uses both host and net redirects, with
host redirects used if the route used is for a host or subnet
and net redirects otherwise; the RFC says to use only host redirects.
(Or was that for unreachables?  The opposite conclusion was drawn
for the two.) Nearly all of the IP and ICMP options and types are
supported, with the exceptions of security and TOS (but who knows
what to do with these?).  I should make a pass over the RFC soon;
we plan to release a current copy of the networking code soon
with Van's TCP changes as well as the bug fixes since the first release.

Most things are automatically configured.  John Lekashman mentioned
various options controlling forwarding, gateway function, and generation
of redirects; these are set only for performance reasons (more buffers
and bigger hash tables for gateways) or to cause unusual combinations,
such as machines that forward packets but won't send redirects.
Hosts with one interface won't forward; hosts with multiple interfaces
assume that they are gateways unless ipforwarding is disabled (either
at compile time, or by patching a variable).  Sorry, John, but I have
to disagree that configuring multi-homed hosts is only one line
different in the startup file; multi-homed hosts *have* to be more
carefully configured, if only because some form of routing needs to be
done to choose the correct outgoing interface.

		Mike

-----------[000292][next][prev][last][first]----------------------------------------------------
Date:      Sun, 1-Nov-87 02:59:53 EST
From:      karels@OKEEFFE.BERKELEY.EDU (Mike Karels)
To:        comp.protocols.tcp-ip
Subject:   Re: ..."layering violations"

Two comments on your recent message.  First, about TCP behavior
when ICMP unreachables are received: I definitely agree that TCP
ought not to quit when it receives an unreachable.  However, in Unix
and probably most other systems, it's hard to report "soft" errors
to a network client.  In 4.3, I chose to return a single error
on the next send or receive, but the TCP connection remains open.
Unfortunately, most network applications carefully check for errors
on each send/receive, and they give up on the first error.
(4.2 aborted the connection when ICMP errors were received,
and thus the application had no chance to keep trying.)
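
A hypothetical sketch of what a more patient application could do with that
single returned error (names and timings invented; EHOSTUNREACH and
ENETUNREACH are the likely "soft" errno values, per <errno.h>):

	#include <errno.h>
	extern int errno;

	/*
	 * Treat an ICMP-induced "soft" error as advisory: the connection
	 * is still open, so wait and retry instead of tearing everything
	 * down on the first failed write.
	 */
	while (write(s, buf, len) < 0) {
		if (errno != EHOSTUNREACH && errno != ENETUNREACH)
			break;		/* a hard error; give up */
		sleep(30);		/* transient; wait and try again */
	}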

I also agree that you're right to distinguish between interactive
network users and automatic daemons.  However, it's precisely for
the daemons that are willing to wait patiently forever that "keep alive"
messages are needed.  Although an interactive telnet user will give up and close
the connection manually, there needs to be a way to prevent systems
from accumulating useless, disconnected telnet servers and other such
trash.  Most application-level programs don't have their own keep-alive
or are-you-there to detect network failure.  For those reasons, we use
TCP-level keepalives (which are also not well provided for at this level)
only on network servers that don't have their own time-out scheme.
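
As a minimal sketch of what turning that on looks like in a 4.3-style server
(assuming s is the descriptor returned by accept(); error handling kept short):

	#include <sys/socket.h>

	int on = 1;

	/*
	 * Ask TCP to probe the connection periodically while it is idle,
	 * so a server whose peer has silently vanished eventually sees an
	 * error instead of holding the connection forever.
	 */
	if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, (char *)&on, sizeof (on)) < 0)
		perror("setsockopt SO_KEEPALIVE");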

		Mike

-----------[000293][next][prev][last][first]----------------------------------------------------
Date:      Sun, 1-Nov-87 03:44:46 EST
From:      karels@OKEEFFE.BERKELEY.EDU (Mike Karels)
To:        comp.protocols.tcp-ip
Subject:   Re: Source routing via the 4.x socket interface

In 4.3, source routing is set with setsockopt; see the manual page
for ip(4C).  (4.2 didn't support IP options, and most 4BSD vendors
don't yet either.)

		Mike
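
A minimal sketch of that call shape (the option buffer here is assumed to have
been filled in by the caller with raw IP option bytes, e.g. a loose-source-route
option per RFC 791; the exact layout 4.3 expects for the first-hop address is
described in ip(4C) and is not shown):

	#include <sys/types.h>
	#include <sys/socket.h>
	#include <netinet/in.h>

	int	s;		/* assumed: a TCP or UDP socket descriptor */
	u_char	opts[40];	/* raw IP options, at most 40 bytes        */
	int	optlen;		/* number of option bytes actually built   */

	/* ... build an LSRR or SSRR option in opts[], set optlen ... */

	if (setsockopt(s, IPPROTO_IP, IP_OPTIONS, (char *)opts, optlen) < 0)
		perror("setsockopt IP_OPTIONS");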

END OF DOCUMENT