The 'Security Digest' Archives (TM)


ARCHIVE: TCP-IP Distribution List - Archives (1987)
DOCUMENT: TCP-IP Distribution List for May 1987 (243 messages, 110782 bytes)
SOURCE: http://securitydigest.org/exec/display?f=tcp-ip/archive/1987/05.txt&t=text/plain
NOTICE: securitydigest.org recognises the rights of all third-party works.

START OF DOCUMENT

-----------[000000][next][prev][last][first]----------------------------------------------------
Date:      Fri, 1-May-87 10:18:20 EDT
From:      stefan@wheaton.UUCP (Stefan Brandle)
To:        comp.protocols.tcp-ip
Subject:   need info about DECnet -- documentation and/or software


Our campus is getting DECnet'd, which is no doubt a good thing.  The terminals
will all eventually be going onto terminal servers -- also probably good, but
it raises the question of what to do for computers not on DECnet.
Specifically, we want to run BSD 4.3 on a uVAX.  It has tcp/ip available, and a
companion uVAX will be running Ultrix 2.0 w. DECnet.

Now for the questions:

1)  Anyone know of software that would allow the terminal servers to 
contact the uVAX under 4.3?  And other DEC machines not under DECnet?

2)  Is technical documentation on DECnet available, or is that kept hush-hush?

3)  Any suggestions on how we could write our own software to do this?

File transfer (etc.) would be nice, but it is the terminal server stuff that is
the big sticking point.

Thanks for any answers.  I'll summarize if there's interest.

Stefan Brandle
-- 
--------------------------------------------------------------------------------
Stefan Brandle                                 UUCP:  ihnp4!wheaton!stefan      
Wheaton College                                "But I never claimed to be sane!"
---------------------------------------------- MA Bell: (312) 260-4992 ---------

-----------[000001][next][prev][last][first]----------------------------------------------------
Date:      Fri, 1-May-87 17:06:48 EDT
From:      karn@FLASH.BELLCORE.COM (Phil R. Karn)
To:        comp.protocols.tcp-ip
Subject:   Re: tcp smooth rtt function

My approach to the small-SRTT problem is to let it do what it wants, but
bound the timer to the minimum non-zero value. I.e., if the clock ticks at a
1 Hz rate,

	rto = max(beta*srtt,1);	/* rto is retransmission time-out */
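
A minimal sketch of the same clamp (function name hypothetical; note the
floor is a max, since a min would cap the timeout at one tick rather than
keep it from falling below one):

```c
/* beta and srtt follow the names in the line above: beta is the RTT
 * multiplier, srtt the smoothed round-trip time in clock ticks.
 * Floor the retransmission timeout at one tick so a very small SRTT
 * can never yield a zero RTO. */
int rto_floor(int beta, int srtt)
{
    int rto = beta * srtt;
    return rto > 1 ? rto : 1;   /* max(beta*srtt, 1) */
}
```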


Phil

-----------[000002][next][prev][last][first]----------------------------------------------------
Date:      Sat, 2-May-87 02:10:46 EDT
From:      mike@BRL.ARPA (Mike Muuss)
To:        comp.protocols.tcp-ip
Subject:   Re:  Ethernet Suffering

Two Sun-3/50 processors blasting to each other with a TCP connection
can achieve ~2-3 Mbits/sec user-to-user throughput (tested with the TTCP
program), and seem to use about 25% of the ethernet bandwidth as
monitored on another Sun-3/50, which has unknown (to me) measurement
accuracy.  In our experience this has had no noticeable impact on other
users of the Ethernet.  Adding a second pair of Sun-3/50s running the
same test doubled the loading on the Ethernet, as you would expect.

Current wisdom suggests that there should be no more than one file
server and 8 diskless Sun-3s per Ethernet for good Ethernet performance
when all the Suns are busy.  At BRL, we presently have one Ethernet with
14 Sun-3/50s and 4 Sun-2/50s running off of one fileserver (a Gould
PN9050 giving both ND and NFS service), as well as a variety of other
machines (more Goulds and 2 Alliant FX/8s) that communicate with NFS
on a more occasional basis.  We find that head contention on the
file server is the performance limit now, not the Ethernet.  However,
once the fileserver is beefed up a bit, the Ethernet will be next, so
the Ethernet will be split into two, with a pair of level-3 IP gateways
between them.

Hope this information helps.
	Best,
	 -Mike

-----------[000003][next][prev][last][first]----------------------------------------------------
Date:      Sat, 2 May 87 07:03:35 EDT
From:      jqj@gvax.cs.cornell.edu (J Q Johnson)
To:        arpa.tcp-ip
Subject:   Re: Ethernet Suffering

Charles Hedrick notes that 20 to 40 diskless SUNs is a reasonable load on
an Ethernet.  Although our experience at Cornell is consistent with this
estimate, one should be a bit careful:  small software and usage changes
can make for big changes in behavior.  For example, on our main Ethernet
(about 25 diskless SUNs plus 75 other machines, total less than 25% load)
we observe that at least 1/2 of the SUN load is ND traffic.  ND is not
efficient in its use of Ethernet bandwidth, and I would expect the total 
load offered by the SUNs to drop, perhaps precipitously, when SunOS4.0
arrives.  Similarly, slightly better caching strategies in clients can
make a big difference, as can adding a bit more memory (we do wish our 
3/50s had 6MB!).

Perhaps most important, don't attempt to generalize from diskless SUNs
to PC-ATs (or even to diskless VAXstations).  The PCs won't be paging
across the network, don't run a multitasking OS, have typically smaller
program sizes than Suns and longer process lifetimes, etc.

All the above points to being able to support lots of diskless workstations
on your network.  On the other hand, it would be foolish to design a
network that didn't make provisions for saturation.  If you don't put
in bridges or gateways initially, at least locate your servers near
their clients so you can get the benefit of installing bridges later if
you need to.  Leave your PCs with a couple of empty slots so you can add
more memory (for a RAM disk or whatever) later if need be.  And so on.
Don't assume that any load analysis you do today will still be valid in
1989.
-----------[000004][next][prev][last][first]----------------------------------------------------
Date:      Sat, 2-May-87 10:42:00 EDT
From:      Kodinsky@MIT-MULTICS.ARPA
To:        comp.protocols.tcp-ip
Subject:   Re: tcp/ip/IBM/ProNET

Speeds in excess of 100 Kbytes/sec are not unheard of for TCP/IP on
VM.  Using KNET/VM on a VM system (true, it is a 3090/200) I have seen
speeds peak at about 150 Kbytes/sec, with averages about 85 or 90.
Also, this was not Sunday morning, but the middle of a weekday
afternoon.  Other KNET/VM users have reported peaks above 100 Kbytes/sec.

Also at Spartacus we have peaked at about 55 Kbytes per second -
running on a Nixdorf 8890/50 (4331 level machine) and all of our
terminals and disk drives on the same channel - and only one head of
string unit for our disks (for the non S/370 people read that as
extreeeeeemellly sloooooooowwwwwwww).

Now I admit a biased point of view (I am the KNET/VM Development
Manager) and the above is not the full story - on our system during the
middle of the day at bad load times (command response times on the order
of 10's of seconds) I have seen rates drop below 1Kb/second.

My point is that 100KB/Sec has been broken, with production, generally
available products, on production systems experiencing production loads
(not overloads!!!!!).

One final caveat - the speeds that other KNET users may see depend on
many variables and the above speeds should not be used as a guarantee of
the speeds you see ("The mileage you get may vary....").

Frank Kastenholz Manager, KNET/VM Development Spartacus/Fibronics

-----------[000005][next][prev][last][first]----------------------------------------------------
Date:      Sun, 3-May-87 14:47:08 EDT
From:      steve@BRILLIG.UMD.EDU (Steve D. Miller)
To:        comp.protocols.tcp-ip
Subject:   Net Unreachable versus Host Unreachable


   I noticed yesterday that ucbvax was sending me Net Unreachable messages
for hosts on a subnet of ucb-ether, although ucb-ether itself was indeed
reachable.  Since I don't know that ucb-ether is subnetted (well, OK, I
know, but my kernel doesn't), I could make wrong decisions based on that
information.  This seemed wrong to me, so I added a quick hack to the 4.3
ip_forward() so that, when ip_output() returns ENETUNREACH or ENETDOWN, it
checks to see if the destination address of the unforwardable packet is on a
known subnetted network, and sends a Host Unreachable message (instead of a
Net Unreachable message) if the destination network is indeed subnetted.

   I then started thinking about the suggestion in RFC 985 that states that,
because subnetting information isn't visible outside the subnetted network,
Host Unreachable messages should always be sent in the place of Network
Unreachable messages.  However, it seems to me that the entity sending the
unreachability message will fall into one of the following two classes:

	1)  The entity is a gateway for the destination network, and
	as such should either know the subnetting scheme for the
	network or should forward the packet to another gateway with
	such knowledge.  In this case, I think that the Host Unreachable
	message should be used so that false unreachability information
	doesn't creep out onto the network.

	2)  The entity is a gateway that knows (inasmuch as anything ever
	does) that the entire destination network is down.  In this case,
	the Network Unreachable message should be sent, since this message
	in this case conveys the maximum amount of correct information.

   Since it seems to me that there is no ambiguity here, and since
there might be software Somewhere Out There that can use the Network
Unreachable information to avoid sending extra packets out onto the
Internet, it might be better to follow the scheme above.  In fact,
the reason that I stumbled across this behavior is because I'm testing
such an application (the ICMP caching beast I described a few months ago).

   Does this scheme make sense to everyone, or am I missing something
obvious?
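
The two cases above reduce to a small decision rule; a toy sketch follows
(not the actual 4.3 ip_forward() patch; the errno and ICMP code values
follow 4.3BSD's headers, and the subnet flag stands in for a real
routing-table lookup):

```c
#define ENETDOWN          50   /* values as in 4.3BSD <errno.h> */
#define ENETUNREACH       51
#define ICMP_UNREACH_NET   0   /* ICMP code: network unreachable */
#define ICMP_UNREACH_HOST  1   /* ICMP code: host unreachable */

/* Choose the ICMP unreachable code when ip_output() refuses a packet.
 * dst_net_subnetted stands in for checking whether the destination's
 * network is known locally to be subnetted (case 1 above); otherwise
 * the whole network is treated as down (case 2). */
int unreach_code(int error, int dst_net_subnetted)
{
    if ((error == ENETUNREACH || error == ENETDOWN) && dst_net_subnetted)
        return ICMP_UNREACH_HOST;  /* don't leak false net-wide info */
    return ICMP_UNREACH_NET;
}
```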

	-Steve

-----------[000006][next][prev][last][first]----------------------------------------------------
Date:      Sun, 3-May-87 19:18:00 EDT
From:      rms@ACC-SB-UNIX.ARPA (Ron Stoughton)
To:        comp.protocols.tcp-ip
Subject:   re: tcp/ip/IBM/ProNET

I would like to comment briefly on David Young's suggestion and
John Shriver's reply regarding using ACS 9310's and a ProNET
gateway to interface an IBM system to ProNET.  The purpose here
is not to beat our chest, but to clarify what the 9310 can and
cannot do.

First, there was an inference that the ACS 9310 could run at the
full speed of the Ethernet.  As John correctly pointed out, this
is not quite true, or is at least misleading.  While it is true
that the channel interface will run at the full speed of the block
multiplexer channel, and the Ethernet interface will run at the
full 10Mbps speed of the Ethernet, the *sustained* throughput
capacity is somewhat less.  We have measured a sustained packet
rate of 3600+ packets/sec at the channel interface, and 2700+
packets/sec at the Ethernet interface.  However, the software
glue which binds these together drops the total capacity to 350
packets/sec (it's amazing what programmers can do).  One should
note that the main bottleneck is the memory-to-memory transfer
rate across an internal system bus which interconnects these two
subsystems.  Incorporating a hardware DMA would significantly
improve system throughput.

[I should point out that the 9310 does more than just simple
pass-through between the channel and the network.  For example,
all ARP processing is done in the interface, and for historical
reasons, IMP emulation is performed there as well.]

Second, whether or not the 9310 is faster than the DACU (it is)
should not be the only consideration.  In particular, the effi-
ciency of the channel protocol can significantly impact system
performance.  I am not intimately familiar with the DACU, but I
believe it emulates 3250 commands to transfer data.  Like other
IBM control units, this is half-duplex and requires attention
interrupts.  The 9310 uses two independent subchannels to provide
a full-duplex interface.  Someone in our office once looked at
some code which interfaced to a DACU and counted 10 channel inter-
actions to transfer some data and read a response.  Hopefully
the Wiscnet driver was not done in this manner, but nevertheless,
a half-duplex interface will certainly result in more interrupts.

Third, John's assertion that even if the hardware could run at the
full speed of the network that the host software could not, is
certainly true.  However, I do not agree that the 100Kbytes/sec
barrier has not been broken or threatened.  Lacking anything faster
than a VAX 750, we did some file transfers to ourselves (a 4341-2
running ACCES/MVS) by looping within the 9310.  We measured 50K
bytes/sec in each direction.  While this is not a 100Kbytes/sec
file transfer, it is exactly equivalent to two simultaneous 50K
bytes/second transfers, or an aggregate of 100Kbytes/sec through
TCP/IP and FTP.  It should also be pointed out that the 9310
averaged less than 40% busy which is consistent with its 350
packets/second capacity.  Extrapolating, and assuming no coalesc-
ing of TCP acks, it should be possible to transfer an aggregate
of 262Kbytes/sec.

In an attempt to find out why we could not drive the 9310 to
saturation we repeated the above tests, but looping at the EXCP
level instead of in the 9310.  In other words, output packets were
copied from the output buffer into an input buffer instead of
being transferred across the channel (to the 9310).  Even with the
additional buffer copy, we measured an aggregate of 260Kbytes/sec.
The difference (160Kbytes/sec) is apparently attributable to IOS
interrupt processing and MVS dispatching overhead.  This was
surprising to me, but not inconsistent with MVS's reputation of
being an I/O klutz, particularly on small mainframes.  Additional
interrupt processing because of half-duplex handshaking would only
make matters worse.

I tend to agree with John that we should avoid benchmark wars since
the numbers are often meaningless without proper context.  Also,
such tests are often conducted under optimum conditions which may
not apply to the garden variety file transfer.  For example, rates
are usually quoted for binary transfers, whereas most file transfers
are in ASCII.

In regard to using a 9310 + ProNET gateway to interface a VM system
to ProNET-80, our sales people are certainly not going to refuse to
take your money.  However, I am not sure the performance benefits
are necessarily achievable at this time.  For example, what is the
performance threshold of the Wiscnet software, and what is at the
other end of the network connection (you can't send data faster than
the other end is willing to receive it)?  The deciding factor may be
the amount of CPU resources a user is willing to expend to achieve a
given level of throughput.

Some more definitive information may be forthcoming.  We are having a
new driver developed for Wiscnet which should be much more efficient
than the driver which comes with 5798-DRG (or now 5798-FAL).  As part
of this effort some benchmarking will be done.  Anyone interested in
the results should contact me and I will send you the information.

If I have offended anyone's sensibilities by appearing commercial, I
apologize.  This was not the intent.

Ron Stoughton
ACC

-----------[000007][next][prev][last][first]----------------------------------------------------
Date:      Mon, 4-May-87 10:58:00 EDT
From:      dms@HERMES.AI.MIT.EDU (David M. Siegel)
To:        comp.protocols.tcp-ip
Subject:   Ethernet Suffering

Here at the MIT AI Lab we have found that our diskless Sun workstations
put a much heavier load on an Ethernet than Hedrick and Johnson noted.
Using a network analyzer, the 18 diskless Suns we have on one ether
can run the cable at 50 percent of capacity for extended periods of
time.  Peak 5-second usage often jumps to 70 percent.  All our machines
have 8 Meg of RAM, though some of them run Sun Common Lisp.  Much of
the traffic is ND packets.  Based on this, we are planning on having no
more than 12-15 Suns on one ethernet.  Each server will have its own
"client" subnet.

-----------[000008][next][prev][last][first]----------------------------------------------------
Date:      Mon, 4-May-87 16:50:06 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   An old ticker fades away

Folks,

It has become necessary to discontinue TCP/TIME service in the various
fuzzballs scattered over the known universe. The UDP-based services
UDP/NTP (see RFC-958) and UDP/TIME will continue ticking indefinitely.
As mentioned several times to this list over the last few years, TCP/TIME
happens to be rather resource-intensive in the fuzzball implementation
as compared with other services usually considered more vital. The UDP-based
services are much more accurate and less intrusive.

Dave

-----------[000009][next][prev][last][first]----------------------------------------------------
Date:      Mon, 4-May-87 18:09:49 EDT
From:      timk@babbage (Tim Krauskopf, NCSA)
To:        comp.protocols.tcp-ip
Subject:   Re: PC/IP problem

---------------------------------------------


From Tim Krauskopf, NCSA:

In response to the screen blank out problem with PC/IP.

I felt that I should comment on this because I am probably the only person
to ever patch the PC/IP program to take care of this problem without
recompiling -- it is possible to patch the .COM file.

The problem is that the PC/IP scrolling routines use the little-known
hardware scrolling feature of the first two video adapters available from
IBM.  If you poke the correct registers, you can change the memory byte
address which the adapter uses for the upper left corner of the screen.
At the end of the memory buffer, 4K for the monochrome, 16K for the color,
the video controller will "wrap-around" to the beginning of the RAM buffer
when it displays the screen.

The problems occur when using "Print-screen" and the newer video adapters.
Print-screen starts at the beginning of the buffer, no matter where the
video controller thinks the beginning of the screen is.  PC/IP added the
F10/F10 command to copy the contents of the video screen back to the
beginning of the buffer, for use with print-screen.  With the EGA and
other video adapters, there is more than 16K of memory, and the hardware
wrap-around feature does not exist, though the hardware scroll feature
still does.  When PC/IP reaches the end of the 16K of memory that it
thinks it is using, it wraps back to zero, but the EGA keeps going -- it
has 32K or more to use.  The result, a blank screen until F10/F10 restores
it.  The fix: patch out their scroll routines with the BIOS ones.

Sorry to bother those of you who don't care, but this fix is quick,
so . . . to fix this problem in the latest release (Mar 86):

Use debug on telnet.com, at offset AD15:
-a ad15
cs:AD15  MOV AX,0601
         XOR CX,CX
         MOV DX,174F
         MOV BH,07
         INT 10
         XOR BX,BX
         JMP AD3A

-a AD75
cs:AD75  MOV AX,0701
         XOR CX,CX
         MOV DX,174F
         MOV BH,07
         INT 10
         XOR BX,BX
         JMP AD9D
-w
-q

It worked for me.

Tim Krauskopf
National Center for Supercomputing Applications (NCSA)
University of Illinois

timk%newton@uxc.cso.uiuc.edu  (ARPA)
14013@ncsavmsa                (BITNET)

-----------[000010][next][prev][last][first]----------------------------------------------------
Date:      Mon, 4-May-87 23:45:47 EDT
From:      hedrick@TOPAZ.RUTGERS.EDU (Charles Hedrick)
To:        comp.protocols.tcp-ip
Subject:   Re:  Ethernet Suffering

Note that the 2-3 Mbits/sec of Ethernet traffic you report is with a
test program designed to test the network only.  However in actual
use, the majority of the high-speed Ethernet traffic is generated by
file serving.  In that case, it is limited by the speed of the disks, and
the amount of lookahead done by the protocols.  I would be extremely
surprised to see the current generation of Sun file server deliver
more than 1Mbit/sec of sustained throughput.  Much to my surprise, I
find that replacing Eagles with super-Eagles does not seem to increase
the throughput available in my tests noticeably. Note that these tests
involved a mix of operations, including file creating, reading,
removing, and renaming, and that the files were small or moderate in
size.  I.e.  we tried to duplicate the sorts of I/O that a typical
student mix would generate.  I have to believe that fast sequential
operations on large files would get more with a super-Eagle.  Some
other results:
  - one Eagle with one controller seemed to use about 2/3 of the CPU
	in a 3/180.
  - a second Eagle on the same controller added very little in throughput
  - a second Eagle on a second controller added about 50% in capacity.
	It seems that this was limited by CPU capacity
  - a 280 with super-Eagle did not have noticeably more performance than
	a 180 with Eagle.  However we assume that the 280 would
	be able to handle two disks and controllers without running
	out of steam.  (We were unable to test this because we didn't
	have the right hardware configuration.)  It's not clear whether
	this would be cost-effective, though, when compared against
	using one Sun 3/140S per disk [a configuration which however
	is not supported by Sun.  Indeed I'm not sure that the 140S
	is even on the price sheet.]

-----------[000011][next][prev][last][first]----------------------------------------------------
Date:      Tue, 5-May-87 03:44:51 EDT
From:      mike@BRL.ARPA (Mike Muuss)
To:        comp.protocols.tcp-ip
Subject:   Re:  Ethernet Suffering

I agree that the TTCP only measured memory-to-memory throughput. That
was the intent -- to see how much data could be shoveled. I did not
intend to suggest it was a generic benchmark. Note that TTCP was using
TCP, mind you, not NFS or ND.

In our environment, we do a lot of network-based 24-bit RGB graphics,
which means whacking .75Mbytes (lores) or 3 Mbytes (hires) for each
image.  Often they are computed and displayed without ever touching
a disk.  So the TTCP test was not uninteresting.

Our Gould 9000 fileserver, which serves the collection of Suns I
mentioned, can be seen at busy times handling 200 packets/second
in both the transmit and receive directions (peak).  Many of them
result in disk transactions, although the ratio can be deceptive.
E.g., 1 pkt arrives asking for 8 kbytes of data, which is read with
one disk I/O, and returned in 8 packets.  1 disk I/O, 9 packets.

Hope you find these random statistics of interest.
	Best,
	 -M

-----------[000012][next][prev][last][first]----------------------------------------------------
Date:      Tue, 5-May-87 21:15:08 EDT
From:      martillo@ATHENA.MIT.EDU
To:        comp.protocols.tcp-ip
Subject:   Ethernet Terminal Concentrators


Does anybody have figures for obtained packet rates for asynchronous
terminal concentrators over ethernet?

I mean the configuration where the host has an ethernet interface and
lives on the same LAN as the asynchronous terminal concentrator which
supports say 8-16 terminals.

I think DEC has a product of this type called LAT (LAN Asynchronous
Terminal?).  I would suspect that because protocol layers 3 and 4
would not need to be implemented that better performance would be
obtained than with telnet or rlogin based terminal concentrators
(although internetting would be impossible).  I would still be
interested in figures for telnet or rlogin style terminal
concentrators as well.  Also while the maximum packet rate at the
concentrator is of interest, I would really like to know what sort of
packet rates people consider good on the host side.


Yakim Martillo

-----------[000013][next][prev][last][first]----------------------------------------------------
Date:      Wed, 6-May-87 04:15:08 EDT
From:      root@TOPAZ.RUTGERS.EDU (Charles Hedrick)
To:        comp.protocols.tcp-ip
Subject:   Re:  Ethernet Terminal Concentrators

I just took a look at some statistics from three of our Cisco terminal
servers.  Since it's 3am, packet rates wouldn't show much, but I have
some other data which is probably in the long run more useful.  This
shows the total number of packets in and out for each session, the
total number of bytes, and the number of packets with data.  By
looking at the fraction of packets with data, you can get a feeling
for how much you are paying for TCP ack and window updates.  Note
however that even with LAT, there surely has to be some sort of
acknowledgements, so it can't be 100% efficient either.  The original
data contains a lot more info about the innards of the TCP protocol
handling, but I have omitted it as it didn't seem relevant to this
discussion.  It looks like well over half (typically over 2/3) of the
packets received have data (i.e. are not just acks), and the amount of
data per packet is fairly high.  It looks like around half, or maybe
slightly less of the packets sent have data, and as you would expect
it is about one char per packet.  (A couple of the connections have
been idle for long periods, so you see packets that aren't doing
much.)  It looks to me like this is about as efficient as one can hope
for.  It looks like acks for stuff we send typically get piggybacked
on the echo, but the stuff that comes back requires separate acks.
This is what you'd expect with a simple model of what is going on, and
is presumably going to be the same for any protocol that is designed
to use unreliable networks.  The number of characters per packet also
seems good.  Obviously I can't compare this with LAT without seeing
data on LAT, but there doesn't seem to be a lot of room for
improvement.

dialup to Sun (4.2 IP, 4.3 beta TCP), through one gateway
Rcvd: 4694 (out of order: 0), with data: 4374, total data bytes: 160181
Sent: 6456 (retransmit: 1), with data: 3988, total data bytes: 4220

dialup to Pyramid (4.3 TCP/IP, with telnetd in the kernel)
Rcvd: 1981 (out of order: 0), with data: 1634, total data bytes: 119314
Sent: 1626 (retransmit: 0), with data: 824, total data bytes: 900

hardwired to Sun, through one gateway
Rcvd: 2472 (out of order: 0), with data: 1265, total data bytes: 114771
Sent: 2489 (retransmit: 0), with data: 175, total data bytes: 191

hardwired to Pyramid
Rcvd: 12195 (out of order: 0), with data: 7116, total data bytes: 88935
Sent: 9044 (retransmit: 0), with data: 6957, total data bytes: 7206

hardwired to Sun, through one gateway
Rcvd: 671 (out of order: 0), with data: 406, total data bytes: 10684
Sent: 834 (retransmit: 0), with data: 340, total data bytes: 349

hardwired to DEC-20
Rcvd: 14579 (out of order: 0), with data: 410, total data bytes: 54371
Sent: 492 (retransmit: 0), with data: 207, total data bytes: 249
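
The data-bearing fractions quoted above come straight from these counters;
a trivial helper (name hypothetical) shows the arithmetic:

```c
/* Percentage of packets that carried data, computed from the
 * per-session counters above (integer division, rounds down). */
int data_pct(int with_data, int total)
{
    return 100 * with_data / total;
}
```

For the first dialup session, data_pct(4374, 4694) is 93 on the receive
side and data_pct(3988, 6456) is 61 on the send side.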

-----------[000014][next][prev][last][first]----------------------------------------------------
Date:      Wed, 6 May 87 10:54:11 PDT
From:      nagle@cdrsun.stanford.edu (John B. Nagle)
To:        TCP-IP@NIC.SRI.COM
Subject:   Remote printer server for IBM VM systems?

       Does there exist software for IBM VM systems that will allow such
systems running the Wisconsin TCP/IP software to act as remote printers for
other systems?  If you've got something, please let me know.

					John Nagle
					Center for Design Research, Stanford
-----------[000015][next][prev][last][first]----------------------------------------------------
Date:      6 May 87 08:35:00 EDT
From:      "MAARTEN NEDERLOF" <salomon@wharton-10.arpa>
To:        "tcp-ip" <tcp-ip@sri-nic.arpa>
Subject:   Ethernet Remote Bridge query...
I've been noticing that messages of late seem to be dealing with the topic
of Ethernet performance under a large number of diskless workstations, and
the like, and it prodded me to ask my question here, even though my needs
are not specifically TCP-IP.

We have the need for a remote bridge that can channel as much of the full
bandwidth of a baseband Ethernet as possible.  We have seen remote microwave
bit repeaters, but the distance we are considering is about 9 miles, and 
regardless of the communications medium, light isn't fast enough to allow
us to maintain one segment.

Does anyone in this forum have any information on a bridge that can use
a 10MB communications channel (carved from a T3 line) and still give us
minimal timing delay end to end across it?  We have an Ethernet based
image transfer system that is highly sensitive to timing delays on the
network, so relay speed is of the essence.

Any recommendations or comments would be appreciated.

Maarten Nederlof
Salomon Brothers Inc.
SALOMON@WHARTON-10.ARPA
------
-----------[000017][next][prev][last][first]----------------------------------------------------
Date:      Wed, 6-May-87 13:54:12 EDT
From:      hedrick@TOPAZ.RUTGERS.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   odd RIP packets

Our Arpanet gateway, 10.1.0.89, is receiving RIP (Unix routed) packets
from 26.12.0.122.  This seems to have started yesterday.  Initially,
the packets caused our routing tables to get into a loop.  However I
have now fixed things so that we ignore them.  But does anybody know
what is going on?  We are still receving them.  I sent something to
root at that site and haven't yet gotten an answer.  The site seems to
be using the Wollongong System V TCP/IP implementation.  It is
possible that the packets are arriving via either Arpanet or NSFnet.
However a packet watch on the NSFnet side suggests that they are not
coming that way.  (Unfortunately I have no way to do packet watches on
the Arpanet side.)

Normally RIP packets are broadcast on all connected Ethernet
interfaces.  I thought the Arpanet didn't do broadcasting, and that
broadcasts certainly wouldn't go through the "mail bridges" between 26
and 10.  Am I wrong?

-----------[000018][next][prev][last][first]----------------------------------------------------
Date:      Wed, 6-May-87 13:56:00 EDT
From:      CERF@A.ISI.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet Terminal Concentrators

Yakim,

LAT stands for the name of the protocol used by Digital to
support terminal to host access over an ethernet. Dunno what
rates you can expect, but we were supporting 35-50 concurrent
virtual  terminals from a microvax II across an ethernet to  
a bunch of hosts.

Oh, each VT was operating between 1200 and 2400
bps but the protocol to the microvax was line at a time X.25.

Vint

-----------[000020][next][prev][last][first]----------------------------------------------------
Date:      Wed, 06 May 87 16:54:53 +0100
From:      Jon Crowcroft <jon@Cs.Ucl.AC.UK>
To:        tcp-ip@sri-nic.arpa
Subject:   Ethernet Suffering

The figures here are approx:

Manchester University run a net of 60-odd Suns. They have 10
diskless 3/50s per 3/260 server with a 400 Mb eagle.

Each server-client set has its own thin ethernet.  All the servers
are backboned on an ethernet.  With 4 Meg on each diskless client and
8 Meg on each server, the servers and ethernet just about cope if no
more than 5 Meg of virtual memory is used in each client (i.e. 1 Meg
swapping).  I don't know whether the bottleneck is ethernet or server
cpu/disk speeds.

Most of the ether traffic is ND/NFS, which is much less a
respecter of bandwidth and delay than TCP traffic, and wreaks
havoc with bridges and gateways unless you wind down the
read/write transfer sizes by hand. Hence the client/server ratio
and the separate ethers.

Does anyone know of any affordable ethernet/ethernet IP
gateway/subnet router that can take 8 Kbytes worth of IP back to back
from several (~10) hosts at once?

Jon
-----------[000021][next][prev][last][first]----------------------------------------------------
Date:      Wed, 6-May-87 23:57:49 EDT
From:      karn@FLASH.BELLCORE.COM.UUCP
To:        comp.protocols.tcp-ip
Subject:   question about FTP data channels

Question: what assumptions may an FTP client make about the address and
port (i.e., socket) that will be used when the server opens a data connection?

When my implementation sends a RETRieve command, it posts a listen for
the data connection. Rather than accepting anyone who connects, it
listens specifically for a SYN carrying the IP address of the server,
and TCP port 20. This works fine except in the case of a multi-homed FTP
server. If I have established my control connection to the "far side"
IP address, many such hosts will use the IP address associated with the
"nearer" interface when initiating the data channel, and my TCP will
refuse it.

I guess I'm answering my own question here, namely that I have to be
prepared to accept any IP address in the incoming connection, since I have
no way of knowing if the server will use an address different than the
one I used on the control connection.  Comments?
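[A minimal sketch of the defensive approach Phil describes, in Python and purely illustrative -- the function names and the timeout value are my own, not from any real FTP implementation. The idea: accept the incoming data connection from any source address, since a multi-homed server may dial back from a different interface, but still insist that the source port is the server's data port (20).]

```python
import socket

def open_data_listener(local_ip, timeout=30):
    """Post a listen for the FTP data connection and return the
    listening socket plus the port to advertise in the PORT command."""
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.bind((local_ip, 0))        # let the system pick an ephemeral port
    lsock.listen(1)
    lsock.settimeout(timeout)
    return lsock, lsock.getsockname()[1]

def accept_data_connection(lsock, expect_port=20):
    """Accept the server's data connection.  Deliberately does NOT
    check the peer's IP address: a multi-homed server may connect
    from a different interface than the control connection uses.
    The source port is the one thing we can still insist on."""
    conn, (peer_ip, peer_port) = lsock.accept()
    if peer_port != expect_port:
        conn.close()
        raise IOError("data connection from unexpected port %d" % peer_port)
    return conn
```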

Phil

-----------[000023][next][prev][last][first]----------------------------------------------------
Date:      Thu, 7-May-87 08:42:00 EDT
From:      SYSTEM@CRNLNS.BITNET.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re:  Ethernet Terminal Concentrators

Charles,

Thanks for the Ethernet utilization statistics for your Cisco
terminal servers. They're consistent with what I've seen for
one host-based TELNET program.
Unfortunately, though, they're not the whole story.

The problem that we've encountered is the high
CPU overhead used by at least one TELNET implementation under VMS.
I am enclosing a copy of a brief test that I did recently.
It's certainly not definitive, but it gives one food for thought.

Selden E. Ball, Jr.
(Wilson Lab's network and system manager)

Cornell University                 NYNEX: +1-607-255-0688
Laboratory of Nuclear Studies     BITNET: SYSTEM@CRNLNS
Wilson Synchrotron Lab              ARPA: SYSTEM%CRNLNS.BITNET@WISCVM.WISC.EDU
Judd Falls & Dryden Road          PHYSnet/HEPnet/SPAN:
Ithaca, NY, USA  14853             LNS61::SYSTEM = 44283::SYSTEM (node 43.251)

P.S. The Ethernet utilization figures were obtained by running
a patched version of VMS Monitor with the command
 $ MONITOR ETHERNET
I can send you a copy of the patch if you don't have it.
S.
-------------------------------------------------------------------
The following is a report of a test done using CMU/TEK TCP/IP
             by S.Ball on April 23rd and 24th, 1987

Executive summary:
=================
If we run TCP/IP for production use, we will need a front end.

CMU/TEK TCP/IP software uses an excessive amount of cpu resources
for terminal support both outbound, when accessing another system,
and inbound, when the local system is hosting a session.

Environment:
============
2 VAX-11/750s (LNS53 and CLE750) with FPA and 5 Megabytes of memory,
running VMS 4.4 and connected with DEUNA Ethernet interfaces.
The CMU TCP/IP package being tested consisted of
FINGER V2.4, SMAIL V2.5, TELNET V3.0, and IP/ACP V6.0.
Only TELNET and IP/ACP were actually involved in this test.

Each of the tests was run for only about a minute, so the percentages
aren't accurate to better than about 5%, and possibly worse.
Unfortunately, the effects are so large that errors of that size don't matter.

TELNET i/o test
---------------
I used a 9600 baud terminal connected to a DEC LAT-11 terminal server
on Ethernet.  Past studies have shown the LAT protocol to be
comparable to DMF-32 connections in terms of its CPU use.

First I logged into LNS53 (3 others were logged in doing nothing),
and then did a TELNET to CLE750 (where 1 other was logged in doing nothing)
and gave the command "TYPE DOC:*.*;*". Our DOC: directory contains
many text files of various sizes.

results:
--------
(the actual numbers fluctuated +/- 5% or so, presumably due to disk
file open overhead)

The transfer used 100% of the cpu on (remote) CLE750
                  ====
(20% kernel, 80% user, <5% interrupt)

User mode programs on CLE750 were the TELNET server using about 50%,
IP_ACP using about 15%, and TYPE using about 15%.

It used 50% of the cpu on (local) LNS53 (15% kernel, 35% user, <5% interrupt)
        ===
User mode programs on LNS53 were TELNET and IP_ACP, using approximately
equal fractions of the cpu, but with large fluctuations.

Ethernet use went from 10Kbytes/sec to about 15Kbytes/sec.

The Ethernet packet size averaged about 100 bytes,
presumably 1 per record of terminal output.
But if we assume half of the i/o increase was LAT from LNS53 to the
LAT-11, and half was TELNET from CLE750 to LNS53, then, since the
terminal i/o was < 1 Kbyte/sec x 2 = < 2 Kbytes/sec, there was
> 3 Kbytes/sec of overhead somewhere. Some of the excess may
have been due to other systems doing Ethernet i/o at the same time.
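[A quick back-of-the-envelope check of that estimate, as a sketch; the figures are taken straight from the report above.]

```python
# Back-of-the-envelope check of the overhead estimate above.
# All rates in Kbytes/sec, taken from the report's figures.
before, during = 10.0, 15.0     # Ethernet utilization before/during the test
increase = during - before      # traffic added by the test: 5 KB/s

payload_bound = 1.0 * 2         # terminal output was < 1 KB/s, and it
                                # crosses the wire twice (LAT leg + TELNET leg)

overhead = increase - payload_bound   # everything that isn't terminal data
assert overhead >= 3.0          # hence "> 3 Kbytes/sec of overhead somewhere"
```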

For comparison:
==============

Using DECnet SET HOST
---------------------
I used the same 9600 baud terminal connected to a DEC LAT-11
terminal server on Ethernet.

I logged into LNS53 (1 other user was running a cpu bound job),
I did a SET HOST to CLE750 (where 1 other was logged in doing nothing),
and used the command "TYPE DOC:*.*;*"

On LNS53, there was no observable degradation in my terminal output
due to the other job, but the other job averaged > 75% of the cpu.

In contrast to TELNET use, CLE750 averaged  > 85% idle.
Kernel and Interrupt modes fluctuated from 2% to 10% each,
apparently dominated by disk file open operations.

Unfortunately, the increased load on Ethernet wasn't observable:
it was already fluctuating between 35 and 45 Kbytes/sec.

Using a direct LAT connection
-----------------------------
Again I used the 9600 baud terminal connected to a DEC LAT-11 terminal
server on Ethernet.

I logged into CLE750 (there was 1 other user logged in doing nothing),
and gave the command "TYPE DOC:*.*;*"

CLE750 averaged > 85% idle.
Kernel and Interrupt modes fluctuated from 2% to 10% each,
apparently dominated by disk file open operations.

Ethernet use went from about 11 Kbytes/sec to maybe 12.5 Kbytes/sec.

-----------[000025][next][prev][last][first]----------------------------------------------------
Date:      Thu, 7-May-87 09:28:24 EDT
From:      swb@DEVVAX.TN.CORNELL.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   off RIP packet

When we first put together the gatedaemon we discovered people sending
RIP packets point-to-point over Arpanet and Milnet.  I believe they
were EGPing with a core gateway or two, and *in addition* sending
point-to-point RIP packets to all of the thus-discovered external
gateways, just to be sure packets to them would not go through an
extra hop.
						Scott

-----------[000026][next][prev][last][first]----------------------------------------------------
Date:      7 May 87 17:41:00 PST
From:      <art@acc.arpa>
To:        "tcp-ip" <tcp-ip@sri-nic.arpa>
Cc:        ineng-tf@gateway.mitre.org
Subject:   More on TCP timers

With the recent discussion of TCP retransmission timers, I thought that
I would add my two cents worth.

I have been looking at the behavior of TCP on low speed (9600 baud),
low delay (IMP-and-back or point-to-point) links.

The usual timing algorithms:

	srtt = alpha*srtt + (1-alpha)*rtt	[alpha approx. 0.9]
	rx_timeout = beta*srtt			[beta approx. 2]

seem to be oriented to an environment where the round trip delay is
smoothly changing and has LOW VARIANCE.  I believe that neither of
these points is really true in the internet.  Also, most TCP implementations
assume that the measurement of rtt is INDEPENDENT of MESSAGE SIZE.
This is definitely NOT true in the situations I have been looking at.
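[As a point of reference, the usual discipline quoted above fits in a few lines. A sketch: the constants are the common defaults, and the demo numbers are mine, chosen to match the 9600 baud example later in this message.]

```python
# Sketch of the classical retransmission timer described above.
ALPHA = 0.9   # RTT smoothing gain (the usual default)
BETA = 2.0    # timeout fudge factor

def update_timer(srtt, rtt_sample):
    """Fold one round-trip measurement into the smoothed RTT and
    return (new_srtt, retransmission_timeout)."""
    srtt = ALPHA * srtt + (1.0 - ALPHA) * rtt_sample
    return srtt, BETA * srtt

# A run of small packets pins srtt near their (short) round trip:
srtt = 0.1
for _ in range(20):
    srtt, rto = update_timer(srtt, 0.1)   # ~100 ms samples
# rto is now ~0.2 s -- far less than the ~0.8 s a full-size
# segment needs just to clock out of a 9600 baud link.
```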

My model of network delay has three components:
1) Inherent or Minimum Delay (speed of light, packet processing times)
2) Queueing or Waiting Times (at link level or in gateways)
3) Transmission Times (how long it takes to send at link level)

Part 1 should be relatively constant over a path between two end points.
Part 2 is where congestion shows up.  Part 3 is DIRECTLY RELATED to PACKET
SIZE.  I believe that Part 1 is generally insignificant except for satellite
hops.  Part 2 is related to congestion and is usually affected by Part 3.
Part 3 is insignificant for high speed LANs, BUT NOT WHEN LOW SPEED LINKS
ARE TRAVERSED.  In fact, even in the absence of congestion, Part 3 can cause
considerable variation in round trip times.

Let's examine a pair of hosts talking to the same IMP over 9.6KB links.
Packets must first traverse the Host1-IMP link then the Host2-IMP link.
The ACKs must follow the reverse path.  Without congestion, the time taken
is mostly frame transmission times.  When the TCP connection is first
established, the packets will tend to be small (approx. 40 bytes) and the
transmission times small.  The measured rtt will be very small (probably
bounded by the resolution of the timer).  If a sequence of full packets
(approx. 1000 bytes) is now sent, the retransmission timeout will be based
on the srtt derived from the small packets, but the transmission time will
have increased by a FACTOR of 25!  TCP will tend to retransmit before
the frame has a chance to get to the other end.  Depending on timer resolution
and bounds, the first segment (hopefully only the first) can be retransmitted
several times.  Waiting until the packet has been transmitted by lower layers
(not all implementations can do it) will help some for hosts directly connected
to slow links, but consider a LAN-connected host going to a gateway which
exits via a low speed link.  This problem will persist until srtt is adjusted
enough to allow sufficient time (some implementations ignore rtts when
retransmitting, thus delaying updating srtt).  Unfortunately, a burst of short
packets can drive srtt back down very quickly (accumulating N samples using long
packets takes much longer than using short packets).  And retransmitting long
segments is the worst thing to do when congestion is because of a low speed
exit link.
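[The factor-of-25 claim is easy to verify; this sketch is just the arithmetic from the example above.]

```python
# Transmission time is proportional to packet size: at 9600 bps the
# serialization delay dominates, and it scales linearly with length.
LINK_BPS = 9600.0

def xmit_time(nbytes):
    """Seconds to clock a frame of nbytes out a LINK_BPS line."""
    return nbytes * 8 / LINK_BPS

small = xmit_time(40)     # SYN/ACK-sized segment: ~33 ms
full = xmit_time(1000)    # full data segment: ~833 ms
assert abs(full / small - 25.0) < 1e-9   # the "FACTOR of 25" above
```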

The retransmission timer is usually set to beta*srtt (within bounds), where
beta (typically approx. 2) is supposed to accommodate any variance.  But the
variance can be much larger than typical betas.  I propose that a possible
solution is to accumulate a smoothed variance estimate which changes much
more slowly than srtt: svar=gamma*svar+|(1-gamma)*(srtt-rtt)| (gamma >> alpha).
The smoothed variance would then be used to adjust the retransmission timer
to accommodate the observed variance on the path.  One could just boost the
beta value to a much higher value, but that would be inefficient when packets
are lost on a low variance path.
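[A sketch of the proposed estimator. The gains and the way the variance feeds the timer are my own illustrative choices, not values from the message.]

```python
# Sketch of the smoothed-variance timer proposed above.
ALPHA = 0.9    # srtt gain, as in the usual algorithm
GAMMA = 0.99   # svar gain: gamma >> alpha, so svar changes much more slowly
K = 4.0        # illustrative headroom multiplier (not from the message)

def update(srtt, svar, rtt_sample):
    """Update smoothed RTT and smoothed variance from one measurement;
    return (srtt, svar, retransmission_timeout)."""
    svar = GAMMA * svar + abs((1.0 - GAMMA) * (srtt - rtt_sample))
    srtt = ALPHA * srtt + (1.0 - ALPHA) * rtt_sample
    # One plausible way to fold the variance in: base the timeout on
    # srtt plus a few "variances" of headroom.
    return srtt, svar, srtt + K * svar
```

On a jittery path svar grows and buys the timer headroom without inflating beta for everyone; on a quiet path svar decays toward zero and the timer stays tight.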

I believe that congestion tends to build up quickly and dissipate slowly.
Therefore it seems desirable for srtt to increase quickly but decrease more
slowly.  Maybe if rtt < srtt then alpha=large (0.9?) else if rtt > srtt then
alpha=small (0.5?).

Also, I wonder if it makes sense to have retransmission timers which are
much less than IP Time-to-Live (some TCPs pass TTL values to IP).  Of
course TTL is really used more as a hop count than a real time value.

My view of an ideal TCP would be one that could estimate average minimum
delay, average variance and average THROUGHPUT.  Segments would be metered
out at the rate that the network appears to be able to accept them (rather
than blasting a window's worth at a congested gateway) and the retransmission
timer could account for observed delay and transmission time (using segment
size and throughput).  Also, some function of delay and throughput may be
useful in dynamically adjusting segment size.

Any comments?

						Art Berggreen
						Art@ACC.ARPA

------
-----------[000027][next][prev][last][first]----------------------------------------------------
Date:      Thu, 7-May-87 16:47:17 EDT
From:      PADLIPSKY@A.ISI.EDU (Michael Padlipsky)
To:        comp.protocols.tcp-ip
Subject:   Re: question about FTP data channels

Phil--

You haven't answered your own question if you believe, as I do,
that it's a bug for a Host to use different Internet Addresses
during the same "conversation".

cheers, map
-------

-----------[000029][next][prev][last][first]----------------------------------------------------
Date:      Thu, 7-May-87 18:33:00 EDT
From:      SYSTEM@CRNLNS.BITNET
To:        comp.protocols.tcp-ip
Subject:   Re:  Ethernet Terminal Concentrators

Thanks for the information about WIN/VX performance.

My tests of the CMU/TEK TCP/IP package were using CMU's "shared DEUNA"
code.  DECnet and LAT were also using the same hardware.
CMU's software uses DEC's XE driver, creating additional logical devices.
Obviously, I'll have to try to do another test using a separate DEUNA.

Selden

-----------[000031][next][prev][last][first]----------------------------------------------------
Date:      Thu, 7-May-87 18:52:57 EDT
From:      ROODE@BIONET-20.ARPA (David Roode)
To:        comp.protocols.tcp-ip
Subject:   Re:  Ethernet Terminal Concentrators

The numbers you quote are extremely similar to those sent by David
Kashtan over a year ago concerning the effect of context switching on
performance of Telnet service.  Kashtan is the implementor of a
package which includes IP and TCP and TELNET and which has now become
available from SRI International.  This package has been in existence
for a very long time, and his previous message, if memory serves me,
concerned some experiments he did with a non-Kernel implementation of
the above protocols.  The problem is that the CPU can be consumed by the
context switching needed for character-at-a-time I/O, as is common in
Telnet.  Kashtan's experience was that a single 9600 baud Telnet
connection could consume nearly an entire 780.  Again, this is subject
to my recall.  However, his implementation handles things very
differently and runs in Kernel mode.  As a result it is much more
efficient.  We have it here on MicroVaxes and it is the primary means
for users to use the machine, i.e. there are no hardwire terminal
ports to speak of, so everyone comes in via cisco boxes.  We note no
problems with the Telnet connections to the microvax consuming excess
resources, although we typically have only 6-8 at a time at peak
usage.
-------

-----------[000032][next][prev][last][first]----------------------------------------------------
Date:      Thu, 7-May-87 21:00:00 EDT
From:      RCLee@HI-MULTICS.ARPA.UUCP
To:        comp.protocols.tcp-ip
Subject:   Query: gateways, routers/bridges, and VMS

Honeywell is planning on replacing its end-host Multics connection
(HI-Multics.ARPA) with a DDN/Ethernet gateway.

The primary candidates for the gateway system are:
    1. cisco Systems AGS-1A1E1M
    2. CMC DRN-3200

While both look good to me, I am leaning towards the AGS due to the initial
price of the 1822L setup (cheaper), the ease of converting to DDN-X.25 at a
later date (cheaper, plug-and-play vs. unit exchange), and the expansion
capability.

For connecting our remote Ethernets (approx. 6), I am considering:
    1. Bridge Communications GS/3-TCP router with conversion to the IB/3 IP
       bridge when it gets released later this year.
    2. cisco Systems AGS-1E3S2T1M TCP router

While initially I was leaning to cisco, Bridge is currently in the lead.
The main reasons are that all of the remote Ethernets are Bridge LANs and
Honeywell has an excellent working relationship with Bridge.

Now for the Questions  (You knew there were going to be questions, didn't
you?):

1.  For people who are gateway administrators, how much time do you spend
taking care of the gateway?  1/4 time? 1/2 time?  Full time??!!!!!

2.  Assuming that upgrade to DDN-X.25 wasn't an issue, which box would you
purchase?  What other stand-alone 1822L/Ethernets gateways would you
consider?

3.  Given the choice between a router and a bridge which would you choose?
To me, the protocol independence of the bridge offers a real selling point
since our LANs contain TCP-based hosts, XNS-based terminal servers, and
DECnet'ed VMS systems.  However, I have seen messages on this list about
problems with both bridges and routers.  Assuming that we don't mix
metaphors, i.e., connecting 2 Ethernet segments with both a bridge and a
router/gateway, can packet looping problems occur?

4.  Does any one know of nameserver/domain software for VAX/VMS?
Conferencing systems that allow transactions via SMTP based mail?

--Randy Lee
  rclee@HI-Multics.ARPA
  ...!umn-cs!hi-csc!rclee

-----------[000033][next][prev][last][first]----------------------------------------------------
Date:      Thu, 7-May-87 21:11:03 EDT
From:      melohn@SUN.COM.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re:  Ethernet Terminal Concentrators

LAT (Local Area Transport) is a DEC proprietary protocol.

Hypothetically, if you were to combine the I/O from several terminal
sessions into a single message between each server and host pair, you
might very well exceed the figures Charles is seeing with Cisco
terminal concentrators, especially when multiple sessions are
communicating with the same host. If you limited the frequency of how
often you sent messages to each host to some value (like 80ms), and
required the host to in most cases send a message only in reply to a
message from the server, the amount of traffic would be further
reduced.

-----------[000034][next][prev][last][first]----------------------------------------------------
Date:      Thu, 7-May-87 21:21:49 EDT
From:      karels%okeeffe@UCBVAX.BERKELEY.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: off RIP packet

Your friend at 26.12.0.122 has added you to his list of external routed
gateways.  This is obviously nonsensical as you don't share a net.
I can't understand how this got your routing into a loop, as any routes
derived from him should be rejected (that address isn't reachable as
a next-hop gateway).

		Mike

-----------[000035][next][prev][last][first]----------------------------------------------------
Date:      Thu, 7-May-87 22:10:25 EDT
From:      leres@ucbarpa.Berkeley.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet Terminal Concentrators


I'm not familiar with the cmu/tek vms tcp/ip. Does it use the Salkind
epacket/elink code? The (low) performance you quote in your posting
leads me to believe so.

Salkind's code was the first to share the deuna with decnet. Basically,
a process (elink) hangs reads on the deuna and on a raw internet
socket. When a packet comes in off the wire, elink gets it and then
writes it onto the raw socket. It goes through the internet code, comes
out via the indriver, and arrives in the users buffer. An out bound
packet goes the other way; from user process to internet kernel to
elink process to deuna driver. At the time it was first implemented,
elink was great; you could have decnet and internet on your machine
without buying two ethernet interfaces. But obviously, having data pass
through the elink process isn't very efficient.

Decnet uses an (undocumented) internal interface to the ethernet driver
called the alt start facility. This facility allows kernel use of the
ethernet driver. I recently rewrote epacket for the Wollongong Group's
WIN/VX product. Now, packets go directly from the ethernet driver to
the internet code, so things run faster and use less cpu time. If I
login to a 9600 baud hardwired terminal on a 780 and telnet to a 750
and do "help /nopage * * *", the 780 is 95% idle and the 750 is 60%
idle. On occasion, I have achieved 85 Kbytes/sec between the VMS 780
and a 4.3 Unix 780.

I've heard that Kashtan rewrote epacket to use the internal "ffi"
interface to the ethernet driver for SRI's Multinet product. I'd expect
it to also be much faster than an elink process implementation.

I don't sell or distribute these products. If you are interested in
them, please contact TWG or SRI.

		Craig

-----------[000036][next][prev][last][first]----------------------------------------------------
Date:      7 May 87 18:30:29 GMT
From:      nbires!opus!atkins@ucbvax.Berkeley.EDU  (Brian Atkins)
To:        tcp-ip@sri-nic.arpa
Subject:   NetBIOS over TCP/IP, Internetworking
I would like info on any solutions to the Internetworking problem with
NetBIOS over TCP/IP.  I have RFC.100[12], but am looking for a simpler
solution, involving broadcast ("B") nodes only.

Thanks

Brian Atkins		atkins@nbires.UUCP or atkins@nbires.NBI.COM
NBI Inc., P.O. Box 9001, Boulder CO 80301	(303) 938-2986
-----------[000037][next][prev][last][first]----------------------------------------------------
Date:      Thu, 7-May-87 22:53:00 EDT
From:      WANCHO@SIMTEL20.ARPA
To:        comp.protocols.tcp-ip
Subject:   off RIP packet

Chuck,

The route daemon, routed, was experimentally turned on, eventually
discovered to be a mistake, and since turned off.  They were coming
from an Ethernet host on the other side of an Ethernet-to-X.25 IP
router connected to 26.12.0.122.  The host, an AT&T 3B5, does indeed
run AT&T SYS V Release 2.0.1 with the Wollongong TCP/IP.

The reason you did not receive a direct reply is the reason that
routed was turned on in the first place.  It was thought that routed
would solve the startup overhead of having to process route add
commands for every gateway in the world.  There obviously must be a
better way - it's only because there are no definitive instructions that
trial-and-error was used.  If you know the correct way to solve the
problem, please let me know, and I'll pass the word.

--Frank

-----------[000038][next][prev][last][first]----------------------------------------------------
Date:      Fri, 8 May 87 09:44:30 pdt
From:      imagen!apolling!geof@decwrl.DEC.COM (Geof Cooper)
To:        tcp-ip@sri-nic.ARPA
Subject:   Ringnets vs. Ethernets
From my (albeit quick) scanning of the messages about Suns on an
Ethernet, it seems that accepted practice is to put 10-20 Suns
on a single Ethernet and gateway from there.  Larger numbers of
diskless Suns burden the network excessively.

It is interesting to note that we at IMAGEN have about 50 diskless
apollos (I lost count somewhere around 50) and 11 or 12 500MB disk
servers.  All these reside on a single 12 Mb/s token ring.  We
have seen the servers get overworked during the peak hours (yes,
Virginia, our WORKSTATIONS slow down a bit from 2-4 every afternoon),
but we haven't noticed the network getting congested.  Oh yes, of the
50 Apollos, most are 1.5 MB systems -- so they spend a LOT of time
swapping.

The merits of Apollo and Sun systems (and Apollo's ringnet, which
is far less reliable than our local Ethernet) notwithstanding, it
is instructive to note that the theoretical justifications for
choosing token contention over aloha contention seem to work in
practice in this case.

- Geof
-----------[000039][next][prev][last][first]----------------------------------------------------
Date:      Fri, 08 May 87 09:44:47 -0400
From:      Mike Brescia <brescia@CCV.BBN.COM>
To:        WANCHO@SIMTEL20.ARPA
Cc:        Charles Hedrick <hedrick@TOPAZ.RUTGERS.EDU>, tcp-ip@SRI-NIC.ARPA, admin@HUACHUCA-EM.ARPA, brescia@CCV.BBN.COM
Subject:   starting up a system (was: off RIP packet)

     the startup overhead of having to process route add
     commands for every gateway in the world.

For a first cut, I thought that on your local net you could do a 'route add'
of your own gateway as the default, and on milnet or arpanet, do a 'route add' of
your two assigned arpanet-milnet gateways (primary and secondary, as in DDN
mgt bull 23 or its successor).

You want more efficient routes?  That's a second level issue.

    Mike
-----------[000040][next][prev][last][first]----------------------------------------------------
Date:      8 May 87 07:08 EDT
From:      ARPANETMGR @ DDN1.arpa
To:        tcp-ip @ sri-nic.arpa
Cc:        BSTeele @ bbn.com, FTardo @ bbn.com
Subject:   Arpanet Ops Center host line down temporarily


THE ARPANET OPS CENTER 800 # WILL BE DOWN A FEW HOURS SOMETIME ON SATURDAY
9 MAY (MOST LIKELY THE AFTERNOON).  SINCE WE DO NOT KNOW THE EXACT
DOWN TIME WE RECOMMEND THE FOLLOWING PROCEDURE SHOULD YOU NEED TO CALL
ARPANET OPERATIONS:

    FIRST CALL THE 800 492-4992 NUMBER.  IF THERE IS NO ANSWER OR IT
    IS CONTINUOUSLY BUSY --

    CALL 617 497-3070.

  THIS IS A TEMPORARY REQUIREMENT AND THE 800 NUMBER SHOULD BE BACK
  IN SERVICE SUNDAY, 10 MAY.

-----------[000041][next][prev][last][first]----------------------------------------------------
Date:      Fri, 08 May 87 10:13:55 -0400
From:      Mike Brescia <brescia@CCV.BBN.COM>
To:        Mike Karels <karels%okeeffe@UCBVAX.Berkeley.EDU>
Cc:        Charles Hedrick <hedrick@TOPAZ.RUTGERS.EDU>, tcp-ip@SRI-NIC.ARPA, brescia@CCV.BBN.COM
Subject:   Re: off RIP packet

>>     Your friend at 26.12.0.122 ...

I have spoken on the telephone with the host administrator there.  He is
apprised of the fact that he needs to redo the RIP configuration, and also
acquire EGP and properly configure it.

    Mike
-----------[000042][next][prev][last][first]----------------------------------------------------
Date:      Fri, 8 May 87 09:43:22 MDT
From:      Frank J. Wancho <WANCHO@SIMTEL20.ARPA>
To:        brescia@CCV.BBN.COM
Cc:        hedrick@TOPAZ.RUTGERS.EDU, tcp-ip@SRI-NIC.ARPA, admin@HUACHUCA-EM.ARPA, WANCHO@SIMTEL20.ARPA
Subject:   Re: starting up a system (was: off RIP packet)
Mike,

I should clarify the nature of the Ethernet-to-PS connection
as a "port extender," which, on a Class A PS, interprets the
otherwise unused third octet as host addresses on the Ethernet.
In other words, hosts on the Ethernet appear to be directly
connected hosts.

I have already done the 'route add' commands for the two assigned
arpanet-milnet gateways with no change in connectability.  We still
see 'network unreachable' in trying to connect to hosts on any
other network.  This implies that ICMP is not implemented in the
code we have.  That is an issue we will take up with the vendors
involved and drop any further discussion here.

Our thanks to all who replied.

--Frank
-------
-----------[000043][next][prev][last][first]----------------------------------------------------
Date:      Fri, 8 May 87 11:34 PDT
From:      Kevin Carosso <KVC@ENGVAX.SCG.HAC.COM>
To:        tcp-ip@sri-nic.ARPA
Subject:   Re: Ethernet Terminal Concentrators
> Thanks for the information about WIN/VX performance.
>
> My tests of the CMU/TEK TCP/IP package were using CMU's "shared DEUNA"
> code.  DECnet and LAT were also using the same hardware.
> CMU's software uses DEC's XE driver, creating additional logical devices.
> Obviously, I'll have to try to do another test using a separate DEUNA.

No, no, no!  The performance problem with the CMU code is NOT due to the
existence of some mythical "shared DEUNA code".  Under VMS the
DEUNA/DELUA/DEQNA/DEBNT/etc drivers always present the ethernet controller as a
shared device. I wish people would quit referring to the "shared DEUNA" as if it
were something special. I also wish other people would quit selling, at
additional cost mind you, the "option" of sharing your DEUNA!  If (under
VAX/VMS using a DEC supported ethernet device) it ain't shared then it ain't
done right!  Please note, this does NOT mean that if it IS shared it IS
done right.  The Wollongong "shared DEUNA option" as outlined in the
Product Profile for part number A-204-071 is a prime example of that.
DECnet and LAT both use the same "shared DEUNA" and exhibit perfectly
acceptable performance.

The performance problems we see with the CMU code are due to the fact that the
code runs in USER mode as an ACP (separate process).  In addition, the TELNET
server runs in USER mode as a separate process.  There is a lot of context
switching and buffer copying going on.  Similarly, the Wollongong shared DEUNA
option uses a separate process between the network code and the DEUNA device,
introducing context switches and buffer copies.

If the Wollongong code were to use the FFI or alternate start IO entry point to
the DEUNA driver directly from the kernel, they could share the controller and
buy back the performance edge that their kernel code has.  From the followup
note to TCP-IP it sounds like they are indeed doing something like this.

        /Kevin Carosso                 kvc@engvax.scg.hac.com
         Hughes Aircraft Co.           kvc%engvax@oberon.usc.edu

ps.  Ned Freed and I wrote the original DEUNA and DEQNA module for
     the Tektronix TCP/IP which CMU picked up and reworked.
-----------[000044][next][prev][last][first]----------------------------------------------------
Date:      8 May 1987 09:17-EDT
From:      CLYNN@G.BBN.COM
To:        karn@FLASH.BELLCORE.COM
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: question about FTP data channels
Phil,
	The server FTP is broken; it is required to use as its address
the address which the client used when establishing the control
connection.  Please report it to the FTP vendor so that they can
fix their code.

Charlie
-----------[000045][next][prev][last][first]----------------------------------------------------
Date:      Fri, 8-May-87 10:49:00 EDT
From:      mishkin@apollo.uucp (Nathaniel Mishkin)
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet Suffering

I found all this discussion about loaded ethernets pretty interesting.
Having used Apollos (both in and out of Apollo Computer Inc.) for the
last ~5 years, I've become pretty familiar with the vices and virtues
(much of the former) of token ring networks and often wondered why we
wouldn't just be better off with ethernet.  I think the recent discussion
in this group highlights some of the virtues of token ring networks.
I was fairly astonished to read that one basically can run no more than
(based on the various estimates) 8-15 diskless workstations (of some
manufacture) on a single ether.  I shudder to think of the cost (in money
and performance) of *requiring* routers/bridges and internetwork topology
for a relatively small "work group".  You just don't have these problems
in a token ring.  Token rings guarantee fair access to the medium and
as a result can run successfully with consistently higher average loads.

And forget diskless workstations for a minute.  How about doing file system
backups over the net?  There's a fine bit of load; and it's not bursty
like diskless workstations.  In our multi-hundreds of gigabyte environment,
backups (like love) are forever.

I also thought the comment about how improved caching would help matters
was interesting.  Of course, proper caching requires correct cache
validation to ensure that you're reading valid data.  Not all distributed
file systems implement such correctness guarantees.  For example, Apollo's
distributed file system does, but NFS doesn't.
-- 
                    -- Nat Mishkin
                       Apollo Computer Inc.
                       Chelmsford, MA
                       {wanginst,yale,mit-eddie}!apollo!mishkin

-----------[000046][next][prev][last][first]----------------------------------------------------
Date:      Fri, 8-May-87 12:35:05 EDT
From:      lamaster@ames.UUCP (Hugh LaMaster)
To:        comp.protocols.tcp-ip
Subject:   Re:  Ethernet Terminal Concentrators


In article ... SYSTEM@CRNLNS.BITNET writes:

>
>The problem that we've encountered is the high
>CPU overhead used by at least one TELNET implementation under VMS.
>The following is a report of a test done using CMU/TEK TCP/IP
 
:
 
>CMU/TEK TCP/IP software uses an excessive amount of cpu resources
 
>2 VAX-11/750s (LNS53 and CLE750) with FPA and 5 Megabytes of memory,
>running VMS 4.4 and connected with DEUNA Ethernet interfaces.
 
>The transfer used 100% of the cpu on (remote) CLE750
>                  ====
>(20% kernel, 80% user, <5% interrupt)
>
>User mode programs on CLE750 were the TELNET server using about 50%,
>IP_ACP using about 15%, and TYPE using about 15%.



One part of the results surprised me, based on my experiments.  I have
performed essentially the same experiments, using Wollongong TCP/IP on VMS.
A terminal uses about 5-6% in user state.  However, during the same period,
total system usage (user + interrupt + kernel) was about 15%.  This was true
with both telnet and a direct wired terminal.  Two points:  1)  Wollongong
TCP/IP is clearly more efficient than the TCP/IP you used; and 2)  I saw
higher overhead from direct wired terminals than you did (I am wondering why
right now :-)  ).  Anyway, I was concerned about telnet overhead myself, but
found no observable difference between network and direct wired.  This
surprised me, I admit; Wollongong has improved a great deal in the last
releases.

An interesting part to me was discovering just how expensive terminal I/O is
on VMS.  Seven 9600-baud terminals at full output saturated a 785...
I think it is definitely a question which should be addressed more
carefully by the terminal server vendors.  Some implementations work
fine, others seem to be a problem.  Most of the vendors don't seem to
have looked at this as carefully as I would have expected.  

-----------[000047][next][prev][last][first]----------------------------------------------------
Date:      Fri, 8 May 87 13:32:41 EDT
From:      hedrick@topaz.rutgers.edu (Charles Hedrick)
To:        RCLee@hi-multics.arpa
Cc:        tcp-ip@sri-nic.arpa
Subject:   Re: Query: gateways, routers/bridges, and VMS
For an Arpanet gateway and a single gateway connecting several networks,
I would think you would spend a few days setting each of them up,
a few days every few weeks changing something, and a few hours a week
looking into problems.  This will make you the de facto network
administrator, since any time anyone has a network problem, they will
claim it is something your gateway did.  So it depends upon the size
and complexity of your network.  But I'd bet that it would be full time
for somewhere between a week and a month, and then 1/4 time.  But the
1/4 time will include lots of firefighting, which will require you to
drop everything and fix a critical problem, and will always happen at
the worst possible time for you.
-----------[000048][next][prev][last][first]----------------------------------------------------
Date:      8 May 87 17:00:00 EST
From:      "NRL::HERMAN" <herman%nrl.decnet@nrl.arpa>
To:        "tcp-ip" <tcp-ip@sri-nic.arpa>
Subject:   Binary file transfer
Hello out there in netland,

I want to send a binary file over the net, however the person to whom I
want to send the file does not have access to FTP.  Is it possible to do
it?

I access arpanet using MAILER in the VMS mail utility, and the destination
node is @relay.cs.net.

Thanks in advance for any information.

					Charles Herman
------
-----------[000049][next][prev][last][first]----------------------------------------------------
Date:      Fri, 8-May-87 18:19:15 EDT
From:      mark@mimsy.UUCP (Mark Weiser)
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet Suffering

In article <34bd5209.c366@apollo.uucp> mishkin@apollo.UUCP (Nathaniel Mishkin) writes:
>...I think the recent discussion
>in this group highlights some of the virtues of token ring networks.
>I was fairly astonished to read that one basically can run no more than
>(based on the various estimates) 8-15 diskless workstations (of some
>manufacture) on a single ether. 

I think this is a misinterpretation of the comments.  I have seen
Apollo networks exhibiting extremely poor performance when too many
diskless nodes were accessing a single server.  (Too many did not
seem to be all that many--I saw this at the Brown demonstration
classroom, when all the diskless clients were trying to start at
once.)  I think that the question is:  what does it mean to 'run
no more than...'.  Sure you can run more than 8-15, but the
performance will look worse.  If you are used to a local disk, then
you can 'feel'  the decrement with more than 8-15 diskless workstations
on the ethernet.  On the other hand, if you are willing to accept
low-performance transients (as the Brown folks evidently were on their
Apollos during startup), then you can do more.

Another angle: there are lots of reasons why performance could be different
between these two systems.  It is premature to point the finger at the
0/1 networking levels without more information.

-mark
-- 
Spoken: Mark Weiser 	ARPA:	mark@mimsy.umd.edu	Phone: +1-301-454-7817
After May 15, 1987: weiser@parcvax.xerox.com

-----------[000050][next][prev][last][first]----------------------------------------------------
Date:      Fri, 08 May 87 15:28:42 +0100
From:      Jon Crowcroft <jon@Cs.Ucl.AC.UK>
To:        tcp-ip@sri-nic.arpa
Subject:   tcp on OSx 3.1

We have a Pyramid 98X running release 3.1 OSx (4.2BSD + Sys V
derived).

TCP between it and other machines (Sun 2s, 3s, PDP 11/44s vaxes
microvaxes, lrts, HLH Orion, PCs running PC/IP etc etc) works
fineish over a single Ethernet.

When we interpose a Cisco IP Router between two segments of our
ethernet (subnetting), the TCP breaks. It breaks spectacularly
in that any bulk transfers (ftp/rcp/etc) lose connections after
a few kbytes transfer. Note that the Cisco has 3Com ethernet
interfaces, and does not cope well with back to back packets,
especially large ones. It forwards packets as it can receive
them, but the later ones in a burst get dropped.

Suns et al (mostly standard Berkeley TCPs) all work in this
situation, with a certain amount of window juggling  and
retransmissions. The Pyramid TCP seems to retransmit everything
whenever a packet is dropped, and also seems to advertise a
receive window of 80kbytes unvaryingly.

Anyone seen similar behaviour, and got any fixes or
suggestions?
-----------[000051][next][prev][last][first]----------------------------------------------------
Date:      Fri, 8-May-87 22:13:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: question about FTP data channels

Phil,

don't you have to accommodate the case of third party transfers,
so that the control address is distinct from sender and receiver
of the file transfer? If that's so, then you can't very well bind
to the source address of the control process.

Vint

-----------[000052][next][prev][last][first]----------------------------------------------------
Date:      Sat, 9-May-87 08:27:17 EDT
From:      kluger@roundhouse.UUCP (Larry Kluger Sun Europe)
To:        comp.protocols.tcp-ip
Subject:   Re: Dec's LAT protocol

Speaking of LAT, has anyone seen a spec for it?

(I'm only interested in public documentation. A dec part number,
article reference/citation, etc.)

I've heard conflicting rumours that LAT is and is not a secret.

Thanks,

Larry Kluger
Sun Microsystems Europe

ps. There is sunshine in England today! This is news!!

-----------[000053][next][prev][last][first]----------------------------------------------------
Date:      Sun, 10-May-87 10:56:22 EDT
From:      merlin@hqda-ai.UUCP (David S. Hayes)
To:        comp.mail.misc,comp.protocols.tcp-ip
Subject:   Dial-up TCP/IP (was interactive SMTP over phone lines)

In article <16608@amdcad.AMD.COM>, bandy@amdcad.AMD.COM (Andy Beals) writes:

> I don't know about you, but the last time I dialed-up a
> computer, I got line-noise.  Interactive smtp is nice but you
> need to have an error-free datastream between them.  So, anyone
> for writing a point-to-point dialup tcp/ip?

     This discussion seems headed to the general possibility of
doing TCP/IP over a dialup phone line, so I'm moving it to
comp.protocols.tcp-ip.

     Ideally, we'd like to be able to write a daemon that can
listen for non-local IP requests, check against a list of dial-up
machines and their addresses, dial the phone, and then act as a
pass-through.  All this should be done without any kernel
modifications.

     I don't know if this can be done with current 4.x BSD
kernels.  Used to be that all BSD sites had source, but with the
rise of the workstation vendors, that's not true anymore.

     Perhaps raw sockets would provide a means to do this, but
it's been quite a while since I did anything with raw sockets.
Anyone else have any ideas?
-- 
David S. Hayes, The Merlin of Avalon	PhoneNet:  (202) 694-6900
UUCP:  *!seismo!sundc!hqda-ai!merlin	ARPA:  merlin%hqda-ai.uucp@brl.arpa

-----------[000054][next][prev][last][first]----------------------------------------------------
Date:      Sun, 10-May-87 18:36:49 EDT
From:      connery@bnrmtv.UUCP (Glenn Connery)
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet Suffering

In article <34bd5209.c366@apollo.uucp>, mishkin@apollo.uucp (Nathaniel Mishkin) writes:
> I was fairly astonished to read that one basically can run no more than
> (based on the various estimates) 8-15 diskless workstations (of some
> manufacture) on a single ether...  You just don't have these problems
> in a token ring...

Since you are not comparing equivalent systems this kind of interpretation
of the results seems rather unwarranted.  The discussion to date has
pointed out that the Suns are doing paging of the virtual memory over the
Ethernet.  Depending upon the way things are set up this could be a huge
load for the network to handle, regardless of the efficiency of the
access protocol.
-- 

Glenn Connery, Bell Northern Research, Mountain View, CA
{hplabs,amdahl,3comvax}!bnrmtv!connery

-----------[000055][next][prev][last][first]----------------------------------------------------
Date:      Mon, 11-May-87 10:00:00 EDT
From:      mishkin@apollo.uucp (Nathaniel Mishkin)
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet Suffering

In article <6603@mimsy.UUCP> mark@mimsy.UUCP (Mark Weiser) writes:
>In article <34bd5209.c366@apollo.uucp> mishkin@apollo.UUCP (Nathaniel Mishkin) writes:
>>...I think the recent discussion
>>in this group highlights some of the virtues of token ring networks.
>>I was fairly astonished to read that one basically can run no more than
>>(based on the various estimates) 8-15 diskless workstations (of some
>>manufacture) on a single ether. 
>
>I think this is a misinterpretation of the comments.  I have seen
>Apollo networks exhibiting extremely poor performance when too many
>diskless nodes were accessing a single server.

I think there's some confusion here:  *I* was not talking about the number
of diskless workstations that could be booted off a single server.  Maybe
other people were.  It seemed that people were talking about the number
of diskless workstations that could be on a single local network (e.g.
ether or ring).

Further, let me make it clear that when I gave the range "8-15"
I was merely quoting the numbers that had appeared in the earlier articles
to which I was following up.  (I.e. I should not be considered an authority
on the performance characteristics of other manufacturers' workstations :)
Unless I was misreading, these quotes were from articles that seemed
to be discussing the number of diskless workstations per ether, not per
disked server.  I'll leave it to the real authorities to clear things up.

>Another angle: there are lots of reasons why performance could be different
>between these two systems.  It is premature to point the finger at the
>0/1 networking levels without more information.

Fair enough.  I was just trying to provide some more information that I thought
was relevant.
-- 
                    -- Nat Mishkin
                       Apollo Computer Inc.
                       Chelmsford, MA
                       {wanginst,yale,mit-eddie}!apollo!mishkin

-----------[000056][next][prev][last][first]----------------------------------------------------
Date:      Mon, 11-May-87 10:35:00 EDT
From:      mishkin@apollo.uucp (Nathaniel Mishkin)
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet Suffering

In an earlier posting of mine, I unjustly sullied the capabilities of
the NFS protocol in the area of caching.  My cursory reading of the
NFS Protocol Spec (which doesn't explicitly discuss caching issues) failed
to catch the frequent "attributes" return parameters that one is, I take
it, to use in cache management if one is to have an efficient NFS
implementation.

Open mouth; extract foot.
-- 
                    -- Nat Mishkin
                       Apollo Computer Inc.
                       Chelmsford, MA
                       {wanginst,yale,mit-eddie}!apollo!mishkin

-----------[000057][next][prev][last][first]----------------------------------------------------
Date:      Mon, 11-May-87 11:51:45 EDT
From:      kent@DECWRL.DEC.COM
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)

This topic seems to come up every year or so. I have some old messages
(from 83) where a few of us discussed this topic. The raw sockets sort
of approach seems good, as far as it goes.

The real problem seems to be that the IP protocol family (especially
TCP) believes in short packet lifetimes (30-60 seconds for TCP open
timeout, ~255 seconds total packet lifetime), and this is hard to
achieve in a dialnet environment, where it can take 30 seconds to
convince a dialer to dial a number and connect the call, much less get
logged in, switch the dial port to be an IP link, etc. On top of that,
protocols like SMTP count on being able to reach the destination host
directly; there's no concept of uucp-ish store and forward in the
protocols. This makes opens take even longer; several hosts in
succession may have to dial and open connections, depending on the
diameter of the net.

There are even desirable configurations where it is impossible to have
everyone connected at the same time -- say neither hosts A nor B have
dialers, but are both periodically polled by C. A and B can never be
connected at the same time, thus can't exchange IP packets.

That seems to be where the idea of dialup is incompatible with the IP
networking model...

Cheers,
chris

-----------[000058][next][prev][last][first]----------------------------------------------------
Date:      Mon, 11-May-87 12:25:26 EDT
From:      jas@MONK.PROTEON.COM (John A. Shriver)
To:        comp.protocols.tcp-ip
Subject:   Ethernet Suffering

We are looking at several effects here.  One is server saturation
proper-how fast its disks and protocols can run.  The next is
saturation of the server interface.  The third is saturation of the
LAN itself.  All three are sensitive to the LAN technology.

Server protocol performance can be affected relatively easily by LAN
packet size.  If you've got big packets (4K instead of 1.5K), you'll
take less interrupts and context switches.

Saturation of the server interface is to a great degree a matter of
good design.  Having enough buffering, a clean programming interface,
and an ability to pipeline can definitely help receive/transmit more
data.

However, having any level of data link flow or congestion control can
really help.  Most CSMA networks have no way to know if a packet was
really received at the server, or was dropped for lack of a buffer.
Some CSMA networks (DEC's CI) do this, and it helps a lot.  (Ethernet
does not.)  All of the Token-Ring networks (IBM's, our ProNET, ANSI's
FDDI standard) have this, in the "frame copied" bit that comes back
around from the recipient.  This makes the possibility of lost packets
due to server congestion dramatically lower, which really speeds
things up.  The data link can implement flow control & retransmission
much faster than the transport code.

The LAN itself can have dramatically different total capacity, which
matters when you want 3 servers on one LAN, not just one.  On 10
megabit networks, you can get more total data through, with less
delay, on a Token-Ring than a CSMA/CD network.  While vendors will
disagree on where CSMA/CD congests terminally (somewhere between 4 and
7 megabits/second), it is true that Token-Ring can really deliver all
10 megabits/second.

Moreover, at speeds beyond 10 megabits/second, CSMA/CD does not scale,
and you almost have to go Token-Ring.  (You can go CSMA/CA, but it can
degenerate into a Token-Bus.)  The FDDI standard is a Token-Ring, as
is the ProNET-80 product.

-----------[000059][next][prev][last][first]----------------------------------------------------
Date:      Mon, 11-May-87 12:38:00 EDT
From:      CERF@A.ISI.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)

Chris,

How about running full TCP/IP on the dial-up link to deal with
the problem of store/forward - this limits the application of IP
to pt-pt applications such as mail, where the store/forward is 
outside the context of the initial TCP connection.

Vint

-----------[000061][next][prev][last][first]----------------------------------------------------
Date:      Mon, 11-May-87 13:19:26 EDT
From:      kent@DECWRL.DEC.COM
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)

Vint,

That was the plan -- to run all the protocols intact. The original
impetus for the idea was to punt explicit store-and-forward mail
systems (like uucp). I'm not in favor of introducing any new mechanisms
into the Internet that require explicit routing by the user.

We already have people doing dial-up IP from home machines, and Dave
Mills has had fuzzies doing it for much longer than that (I booted my
first fuzzball long distance at 1200 baud in 1983, and it was old hat
then). The problem seems to be making multi-hop configurations work
with existing TCP/SMTP implementations.

chris

-----------[000062][next][prev][last][first]----------------------------------------------------
Date:      Mon, 11-May-87 13:22:19 EDT
From:      minshall@OPAL.BERKELEY.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)

How I would do TCP/IP from a micro over a phone line:

    1.  The user dials up their favorite IP host (TIP?).
    2.  The user logs in.
    3.  The user invokes, simultaneously on both sides, programs
	which place a remote procedure call interface over the
	wire, with reliability.  The program on the IP host is,
	basically, just a user application.
    4.  Then, the remote procedure call interface "gives" the
	user programs on his micro, the ability to access the
	host's standard TCP/IP services.

Talking to a 4.2 system, if after establishing the dial-up
logon, I have a program which says:

    {
	...
	s = socket(....);
	if (s == -1) {
	    ...
	}
	if (connect(s, ...) == -1) {
	    ...
	}
	if (recv(s, ...) == -1) {
	    ...
	}
    }

all of the calls "socket", "connect", "recv", etc., would be performed via
remote procedure call to the IP host.

(Better, possibly, would be to implement a standard set of TCP-over-dial
remote calls (like LU6.2 verbs) that everyone would recognize once
the system dependent call setup/login/invocation was performed.)

Note that in this way, we get out of having to assign a new IP
address to the client machine, and the user gets to move his/her
TCP/whatever programs down to his/her micro.

-----------[000063][next][prev][last][first]----------------------------------------------------
Date:      Mon, 11-May-87 13:29:00 EDT
From:      CERF@A.ISI.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)

Chris,

If TCP is configured to be pretty persistent, and if there is only the
one dial-up, low speed, circuit switched hop, I would think one can
get the IP to work - assuming you can avoid dial-up for each packet!

On the other hand, to introduce dial-up links virtually anywhere in
the topology of the Internet, and low speed, to boot, without a reliable
link protocol [forcing recovery to be via TCP] seems pretty hard to
achieve.

Vint

-----------[000065][next][prev][last][first]----------------------------------------------------
Date:      Mon, 11-May-87 13:36:54 EDT
From:      elvy%mbcrr@HARVARD.HARVARD.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   dialup tcp

We have had TCP/IP running over telephone lines for some time, now, but
it doesn't exactly conform to your specs.  A server runs on each end of
the telephone line, alternately (randomly) waiting for a call or attempting
to make one.  Once a connection is established, another process is run.
When that process finishes, or when the connection is lost, the server
hangs up the phone and attempts to re-establish the connection.

As the "process", we use a program that keeps the line open (the server
sets stdin and stdout to be the telephone line before execing the new process)
and keeps it in SERIAL TCP line discipline (see kernel mods I sent to
this mailing list in January 1984).  In addition, we ifconfig the serial
interface and route packets over it.

Crude in concept, perhaps, but we have had a reliable Internet connection
running more-or-less all the time for the past year over a distance that
precludes normal cable connections and for a fraction of the cost of
leased lines.

Marc (elvy@mbcrr.harvard.edu)

-----------[000067][next][prev][last][first]----------------------------------------------------
Date:      Mon, 11-May-87 15:11:00 EDT
From:      mishkin@apollo.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Ethernet Suffering

[[This is a reposting of my response to Mark Weiser's article.  This
  one is slightly different from my earlier one.  If I spend any more time
  trying to figure out how to (successfully) cancel a previously posted
  article, I think I'll go insane.  Sorry for the noise.  --mishkin]]

In article <6603@mimsy.UUCP> mark@mimsy.UUCP (Mark Weiser) writes:
>I think this is a misinterpretation of the comments.  I have seen
>Apollo networks exhibiting extremely poor performance when too many
>diskless nodes were accessing a single server.

I think there's some confusion here.  *I* was talking about the number
of diskless workstations per ethernet, not per server.  I thought that's
what other people were talking about too.

Further, I want to make a clarification:  When I referred to 8-15 as being
the maximum number of diskless workstations (per ethernet), I was *merely*
quoting the numbers that appeared in the articles to which I was following
up.  (I.e. I don't claim to be an expert on the performance characteristics
of other manufacturers' equipment.)  I'll let the real experts clear things
up.

>Another angle: there are lots of reasons why performance could be different
>between these two systems.  It is premature to point the finger at the
>0/1 networking levels without more information.

Climbing further out of the hole which I seem to have been digging myself
into:  I agree with you.  I was not trying to make a definitive comparison
between rings and ethers.  I was simply trying to add some more information
to the discussion.  A number of people here (at Apollo) have said to
me "Come on, this really can't be an ethernet saturation problem."  Others
have extolled ring networks in still other ways that I can barely
understand.

Finally, before I shrink away, I feel obliged to point out, lest
anyone get the wrong impression, that Apollo believes both ring and ether
networks are fine ideas.  These days, one can buy Apollo's DN3000s with
either or both of a ring or ethernet controller and all your Apollo
workstations can communicate (and share files) over complex
ring/ether/whatever internetwork topologies.

-- 
                    -- Nat Mishkin
                       Apollo Computer Inc.
                       Chelmsford, MA
                       {wanginst,yale,mit-eddie}!apollo!mishkin

-----------[000068][next][prev][last][first]----------------------------------------------------
Date:      11 MAY 87 20:00-PDT
From:      Iglesias%UCIVMSA.BITNET@wiscvm.wisc.edu
To:        TCP-IP@SRI-NIC.ARPA
Subject:   Wollongong TCP/IP and subnets
Received: from ORION by UCICP6 with PMDFs; 11 May 1987 19:49:16
Received: from localhost by orion.uci.edu id a016906; 11 May 87 19:46 PDT
Date: Mon, 11 May 87 19:46:43 -0700
From: Mike Iglesias <iglesias@orion.uci.edu>

Does anyone know if Wollongong's TCP/IP will support a subnet mask of
something other than 8 bits?  It appears to be a fixed mask from
reading the documentation.

Also, is there any way to set the broadcast address?


Thanks,

Mike Iglesias
University of California, Irvine
-----------[000069][next][prev][last][first]----------------------------------------------------
Date:      Mon, 11-May-87 19:50:00 EDT
From:      dplatt@teknowledge-vaxc.ARPA (Dave Platt)
To:        comp.unix.wizards,comp.protocols.tcp-ip
Subject:   IP fragmentation, and how to avoid it


I've run into some problems on a Sun 3/52 workstation running SunOS
3.2 that I've been told may involve IP packet fragmentation.  The
primary symptom is that SMTP mail deliveries "hang up" and abort with
a read timeout.

Background: my Sun is sitting on a 10 Mbit Ethernet with the default
ifconfig for the Ethernet board;  the MTU for the Ethernet interface
is 1500 bytes.  The system is configured so that packets destined for
IP addresses not on our net are sent to our Vax 8650 (Ultrix 1.2),
which ipforwards them to the Internet TIP.  The MTU for the Vax's
"imp0" interface is 1006 bytes.

Problem: if a process on the Sun establishes a TCP connection with a
peer running on a host somewhere on the Internet (e.g. an SMTP
server), and then sends a large burst of data, the Sun will typically
queue up about 4k of data in the TCP buffers at one time.  This
apparently results in the sending of an IP packet that approaches the
Sun's 1500-byte MTU; when the packet passes through the Vax on its way
to the IMP, it is apparently fragmented.  Some system or gateway seems
to drop the fragmented IP packet on the floor.  The Sun's TCP never
receives an acknowledgement for the TCP segment, retries the
transmission periodically, and eventually aborts the connection.

The problem typically occurs in the later stages of an SMTP session.
The Sun's SMTP mailer is able to connect with its peer on another
Internet host, go through the "MAIL FROM" and "RCPT TO" steps, and
receive permission to send the message body.  If the message is short
(< 1k bytes), everything works fine;  if it's too long, then the
timeout occurs.

This problem appears to occur only when the host I'm trying to connect
with lies on a local-area net... and not all LANs are affected.  I've
been told that certain gateways are incapable of reassembling
fragmented IP packets;  other gateways seem to work just fine.

Question for the gurus:  is there any way to reconfigure my Sun's le0
interface so that its MTU doesn't exceed that of the 8650?  If so, how
do I do it?  Or, is there a better solution to the problem?  Or,
finally, have I totally misunderstood the problem?

advTHANKSance,

                Dave Platt

Internet:  dplatt@teknowledge-vaxc.arpa
Usenet: {hplabs|sun|ucbvax|seismo|uw-beaver|decwrl}!teknowledge-vaxc.arpa!dplatt
Voice: (415) 424-0500

-----------[000070][next][prev][last][first]----------------------------------------------------
Date:      Tue, 12-May-87 00:16:00 EDT
From:      WANCHO@SIMTEL20.ARPA.UUCP
To:        comp.protocols.tcp-ip
Subject:   route add (was RIP)

Thanks to all who responded, and to Mike Muuss with the "correct"
answer: to insert the two lines for the default gateways (nominally
the "mailbridge homing" gateway hosts) with the destination field of
"0".  This was NOT mentioned in *any* documentation we have.

This class of missing information is exactly the sort of thing I would
like to see published in an online document available to any newly
ordained system administrator and network liaison.  Let's call it the
Internet Systems Administrator's Handbook: a collection of hints, tips,
and common-sense security items (yours and the network community's) for
operating in the Internet environment.  If such a document existed, we
would have known not to run routed and *why*.  If you already have
such a document in your local community, or have material to submit
toward the compilation of such a document, maybe the NIC would be
interested in collecting and organizing it...

--Frank

-----------[000071][next][prev][last][first]----------------------------------------------------
Date:      Tue, 12-May-87 11:10:33 EDT
From:      dms@HERMES.AI.MIT.EDU (David M. Siegel)
To:        comp.protocols.tcp-ip
Subject:   Dial-up TCP/IP (was interactive SMTP over phone lines)

Speaking of dial-up IP... I am looking into hooking up a home Sun-3 to
IP via a modem. I was planning on using Serial Line IP (slip) for this
purpose, but came up with a few problems:

1) The SLIP package doesn't support logging in to the IP connection,
so anyone could dial up to the modem and connect to the network. 

2) The SLIP host that's on the Ethernet end needs to know the IP
address of the dialup host, so if different hosts can dialup to the IP
connection, something must be done to handshake down the dialup host's
Ethernet address.

Question: has anyone done something like this, and is SLIP the right
way to go?

-Dave

-----------[000072][next][prev][last][first]----------------------------------------------------
Date:      Tue, 12-May-87 13:40:51 EDT
From:      minshall@OPAL.BERKELEY.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)

Chris,
	Yes, there are two issues here.  Circuit-switched IP is one.

	The other is, for me, how does one extend IP service to a
large number of micros, many of which are sitting in someone's home,
and connected via a dial-up phone line.

	My bias is that these do not need to have IP addresses assigned,
and so can piggy-back off some existing host.  Still, I think these
machines need various TCP/IP *client* services (FTP, telnet, SMTP maybe),
and I think the remote procedure call is valuable there.  Yes, it needs
a reliable data link layer, flow control, in-sequence delivery, etc.

	The user sitting at home would like to move files back and forth
between the home system and hosts on the IP network, share file systems
with other hosts, login over the network, etc.  The RPC mechanism seems,
to me, an interesting way of accomplishing that.

Greg Minshall

-----------[000073][next][prev][last][first]----------------------------------------------------
Date:      Tue, 12-May-87 16:54:27 EDT
From:      minshall@OPAL.BERKELEY.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)

Chris,
	Currently, a user logged in to one of our IP hosts can now
run any/all of the client protocols they would like.  When they
do this, they run the protocol using the IP host's CPU cycles,
and things they do that reference files (ftp, "screen capture" on
telnet) they reference files located on the IP host system.

	What I would like is for any user with an account on
any of our IP hosts to be able to dial in (from home), or otherwise
connect, and run those same clients from their micro (thus affecting
the files in the micro, and using, mostly, CPU cycles from the
micro).  One of the advantages here is that the user need not submit
a "request for an IP address" form to the local administration.
Nor do they need to know what IP address they are talking from.

	I am the first to admit that the RPC mechanism solves only the
problem I am posing.  Still, I think it is an interesting problem.

	As to your point about validation schemes, I tend to differ.
I agree that the .rhosts scheme has limited usability (and can be
something of a security hole).  However, I think that the most
interesting schemes will be those relying on authentication, keys,
etc. (and NOT on knowledge of the remote host name).

	Again, my purpose is to allow micros to become clients.  This
is also my bias.  Not to prompt any heated discussion, but I am dubious
of the desirability of running SMTP servers on large numbers of micros.
Here I think the POP-like protocols are more interesting.  In those
(relatively fewer) cases where it is necessary to run a machine with
servers, then a separate IP address (and separate protocols) would be
the way to go.

	I should make clear that I am not even seriously proposing
any work in this area at this time.  It is just an idea I've been
kicking around for a while, and the recent discussion was enough
to finally prod me into airing it a bit.

Greg

-----------[000074][next][prev][last][first]----------------------------------------------------
Date:      Tue, 12-May-87 17:16:34 EDT
From:      kent@DECWRL.DEC.COM
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)

Greg,

Every time this issue comes up, I wonder why people don't want to
assign addresses to micros. Is it just a problem of scale? I realize
that most of the micros won't be hooked up at the same time. But, if
you're going to have server SMTP service on the micro, that probably
means that each micro needs a unique name. If you don't assign it a
permanent number, then you have a real headache at startup, interacting
with the nameserver database, etc.

If you're just talking about having client services on the micros, then
you can get away with random names/numbers. But home micros are getting
powerful enough that people might want to have server SMTP sitting there.

Another authorization problem arises for home micros that want to be
remote file system clients. It seems very difficult to use the flexible
protection/authentication mechanisms that are appearing if you don't
know exactly which host is the client -- the 4.2 .rhosts mechanism
breaks down pretty quickly. The contents of .rhosts either become all
possible dialup hostnames, or are essentially useless.

chris

-----------[000075][next][prev][last][first]----------------------------------------------------
Date:      Tue, 12-May-87 17:18:16 EDT
From:      ROODE@BIONET-20.ARPA (David Roode)
To:        comp.protocols.tcp-ip
Subject:   Re: dialup tcp

There must be something else to your story, or else the tariffs differ
greatly from what I am used to.  In California, a switched
telephone line costs ~$15 per month with the FCC surcharge,
but keeping it connected 24 hours a day would cost an additional
$300 per month in usage charges.   Leased lines
can be had for $40 per month over distances up to
5 miles or so at least.  Leased lines are 4-wire circuits
and as a result I'd say the modems tend to be slightly more
economical than those suitable for the dial network.

What makes your situation different?
-------

-----------[000076][next][prev][last][first]----------------------------------------------------
Date:      Tue, 12-May-87 17:26:29 EDT
From:      kent@DECWRL.DEC.COM
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)

The question isn't really how to do TCP on a phone line to allow a
micro in, but rather how to build something like a circuit-switched IP
network. As evidenced by the mail on the list, lots of people have had
user-initiated dialup IP for some time. Most implementations just run
IP over the wire (with something like SLIP or compressed SLIP as the
data link layer), with acceptable performance. The remote procedure
interface is interesting, but you need to build some sort of reliable
data link layer under it ... I wonder if it really buys you anything
in performance?

chris

-----------[000077][next][prev][last][first]----------------------------------------------------
Date:      Tue, 12-May-87 17:38:03 EDT
From:      ROODE@BIONET-20.ARPA (David Roode)
To:        comp.protocols.tcp-ip
Subject:   mx-existence and header etiquette

Phase-in timetables for de facto standards are generally informal.
What's the feeling about the use of host names that are only valid
where there is support for both a host name resolver and MX name
server domain entries?  It seems reasonable for forwarding hosts to
show up in the From: header as a courtesy to those hosts who do not
yet support MX-existence.  Is this support a "required" or an
"optional" part of name resolvers?  At least considering that on some
of the networks composing the internet name server use is optional,
some period of visibility for forwarding makes sense.

Apparently RELAY.CS.NET does follow this principle, but
not all the hosts relaying UUCP hosts' mail to the Internet
do.
-------

-----------[000078][next][prev][last][first]----------------------------------------------------
Date:      Tue, 12-May-87 17:39:27 EDT
From:      kent@DECWRL.DEC.COM
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)

We use SLIP (with some header compression) for dialups from some home
machines. Basically, the user logs in and then runs a shell script that
does ifconfig and slipconf on the dialup line, and does a similar thing
on their home machine. This supplies the IP address and avoids the
problem of having an IP port hanging out for anyone to dial into.

chris

-----------[000080][next][prev][last][first]----------------------------------------------------
Date:      Tue, 12-May-87 19:00:59 EDT
From:      sato@SCDSW1.UCAR.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Announcement of a Summer Supercomputing Institute


      Announcing the 1987 NCAR Summer Supercomputing Institute
		10-21 August 1987, Boulder, Colorado
		   Scientific Computing Division
	       National Center for Atmospheric Research

     The University Corporation for Atmospheric Research (UCAR) is pleased
to announce that a Supercomputing Institute will be held this summer at the
National Center for Atmospheric Research (NCAR) located in Boulder, Colorado.
The Institute is being sponsored by the National Science Foundation's
Division of Advanced Scientific Computing (NSF/DASC) with assistance from
the Physical Oceanography Program of NSF.  The Institute will be
managed and coordinated by NCAR's Scientific Computing Division (SCD).


BACKGROUND

     The Institute is a two-week intensive training experience designed
to provide an understanding of how supercomputing capabilities can augment
scientific research.  Learning how to apply supercomputing methods to a
variety of investigations that require large-scale computation is a
key objective of the Institute.  To meet this objective, the Institute's
curriculum has been carefully arranged to give each participant an
opportunity to explore new approaches to using supercomputing technology.
Lectures by internationally recognized experts in the field of supercomputing
and laboratory sessions are geared to maximize the learning experience by
providing real-world applications based upon current research efforts
employing supercomputers.
     A maximum of 25 senior graduate students, post-doctoral fellows, and
junior faculty will be selected from national, accredited universities and
research institutions that confer advanced degrees in the atmospheric and
physical oceanographic sciences, solar physics, and related disciplines.
Applications from individuals attending any institution meeting this
requirement will be considered.

INSTITUTE CURRICULUM

     The 1987 Supercomputing Institute will cover the following topics:
	-  Operating Systems and Machine Configurations
	-  Vectorization and Optimization Techniques
	-  Parallel Processing
	-  Numerical Techniques
	-  Software Availability and Quality
	-  Communications and Networking
	-  Graphics

INSTITUTE BENEFITS

     Successful candidates will have all travel, per diem, and accommodation
expenses paid for by the Institute.  In addition, computing time will be
made available to all participants on the CRAY X-MP/48 supercomputer at NCAR.
Support services consistent with all institute-related computing will also
be provided.

APPLICATION REQUIREMENTS

     To be considered for admission to the 1987 Supercomputing Institute,
an applicant should have a minimum graduate GPA of 3.5 (postdocs and junior
faculty excepted), provide two letters of recommendation indicating the
applicant's research capabilities and the potential for their research to
advance disciplinary knowledge, and an abstract of not more than 250 words
relating how the applicant's research endeavors (either planned or underway)
would benefit from applying supercomputing technology to their investigations.
Successful candidates should have knowledge of FORTRAN 77 and be working on
research projects that already use supercomputers or anticipate using them in
the near future.
     Applicants will be notified of their acceptance into the Institute
by 1 July 1987.

APPLICATION DEADLINE

     Individuals who meet the above requirements are encouraged to apply
for admission to the 1987 Summer Supercomputing Institute.  Please
complete the attached application form and mail it, along with all
supporting materials to:

     Richard K. Sato
     1987 NCAR Summer Supercomputing Institute
     Scientific Computing Division
     National Center for Atmospheric Research
     PO Box 3000
     Boulder, CO  80307

IMPORTANT NOTE:  All materials must be received by 15 June 1987.


			    APPLICATION FORM

Name:________________________________________________________________________

Address:_______________________________City:_____________State:______Zip:____

University/Institution Name:__________________________________________________

Department:__________________________________________________________________

Telephone (Home):_____________________________ Work:_________________________

I am a ____ Graduate Student  ____ Postdoctoral Fellow ___ Junior Faculty
       ___ Other:_____________________

I am a U.S. citizen: _____Yes   _____ No, I am a citizen of:_________________

Please include two letters of recommendation, an abstract describing your
research, and your current GPA (if applicable) with this application.

      THIS APPLICATION MUST BE RECEIVED NO LATER THAN 15 JUNE 1987

-----------[000081][next][prev][last][first]----------------------------------------------------
Date:      Tue, 12-May-87 21:01:00 EDT
From:      CERF@A.ISI.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)

Chris,

One problem with assigning numbers or names to micros is that they
don't bind to any telephone or other addresses - since in principle
a micro is mobile. So you need some kind of authentication to assure
that the caller is who he says he is. This isn't insurmountable, but it
is like the X.32 (dial-up X.25 ) problem.

Vint

-----------[000083][next][prev][last][first]----------------------------------------------------
Date:      Tue, 12-May-87 23:48:10 EDT
From:      Mills@UDEL.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re:  Dial-up TCP/IP (was interactive SMTP over phone lines)

Greg (?),

You are making assumptions that a willing 4.x is available to front the
application and that a front-end protocol will be designed for it (them).
I prefer not to need that horsepower and use IP directly over the
micro-IMP (or whatever) link, even if encapsulated in SLIP, hello or
other suitable envelope. Address management and authentication is an
interesting issue - see for example the schemes used by the fuzzballs
(RFC-891) and (a different one) the MIT PC/IP gateway. For authentication
I offer a variant of the Needham-Schroeder scheme (e.g. RFC-1004). My
point is that we shouldn't have to climb all the way up the protocol stack
to connect a PC to the Internet.

Dave

-----------[000084][next][prev][last][first]----------------------------------------------------
Date:      Wed, 13-May-87 00:05:47 EDT
From:      Mills@UDEL.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re:  Dial-up TCP/IP (was interactive SMTP over phone lines)

Vint,

I can't resist observing that, in the nine years or so I have been running
dial-up 1200-bps IP links (mostly over metropolitan areas), the incidence of
packets lost to transmission errors has been much less than those due to
the circuit simply going on-hook. The nice thing about TCP and the latter
is that you can redial during the user-timeout interval, while TCP is
retransmitting, and things just go on as before. Now consider the amateur
TCP/IP packet-radio experiments being conducted by Phill Karn and several
others (including myself), where the packet loss can be as high as one in
three on the radio channel. I would agree from experience that the channel
protocol needs to be reworked entirely before expecting TCP retransmissions
to save the day.

Obviously, Phill and friends are leapfrogging the dial-in issue. You want
ubiquitous access? Look ma, no telephones or wires even.

Dave

-----------[000085][next][prev][last][first]----------------------------------------------------
Date:      Wed, 13-May-87 04:14:40 EDT
From:      hedrick@topaz.RUTGERS.EDU (Charles Hedrick)
To:        comp.unix.wizards,comp.protocols.tcp-ip
Subject:   Re: IP fragmentation, and how to avoid it

Now and then we run into machines that can't reassemble.  Note that
the 1006 limit on imp0 isn't a problem with the VAX.  It is the
limit allowed by the Arpanet.  There are more elegant solutions,
but if you don't have source, here is a program that will let you
change the MTU on the fly.  We have used it on both Pyramid and
Sun, changing only the name of the kernel variable.  I.e.
the string "_il_softc", which is the name appropriate for il0 on
the Pyramid.  I just checked and it looks like _le_softc will work
for a Sun 3/50.  At least this will let you see whether your problem
is really a reassembly problem.  You should try "mtu 1006" or
maybe some slightly smaller number.  (We typically use 900 for
testing.)

#include <sys/types.h>
#include <sys/stat.h>
#include <a.out.h>
#include <stdio.h>

struct nlist nl[2];

short mtu;
int kmem;
struct stat statblock;
char *kernelfile;

main(argc,argv)
int argc;
char *argv[];
{
	if (argc < 2) {
		fprintf(stderr,"usage: mtu <n> {<kernelfile>}\n");
		exit(2);
	}

	if ((kmem = open("/dev/kmem",2))<0) {
		perror("open /dev/kmem");
		exit(1);
	}
	if (argc > 2) {
		kernelfile = argv[2];
	} else {
		kernelfile = "/vmunix";
	}
	if (stat(kernelfile,&statblock)) {
		fprintf(stderr,"%s not found.\n",kernelfile);
		exit(1);
	}
	initnlistvars(atoi(argv[1]));
	exit(0);
}

initnlistvars(on)
register int on;
{
	nl[0].n_un.n_name = "_il_softc";
	nl[1].n_un.n_name = "";
	nlist(kernelfile,nl);
	if (nl[0].n_type == 0) {
		fprintf(stderr, "%s: No namelist\n", kernelfile);
		exit(4);
	}
	(void) lseek(kmem,(nl[0].n_value)+6,0);
	if (read(kmem,&mtu,2) != 2) {
		perror("read kmem");
		exit(5);
	}
	fprintf(stderr,"mtu was: %d is now: %d\n",mtu,on);
	(void) lseek(kmem,(nl[0].n_value)+6,0);
	mtu = on;
	if (write(kmem,&mtu,2) != 2) {
		perror("write kmem");
		exit(6);
	}
}

-----------[000086][next][prev][last][first]----------------------------------------------------
Date:      Wed 13 May 87 09:26:03-PDT
From:      JERRY@STAR.STANFORD.EDU
To:        iglesias%UCIVMSA.BITNET@WISCVM.WISC.EDU, tcp-ip@SRI-NIC.ARPA
Subject:   RE: Wollongong TCP/IP and subnets
The current release of Wollongong's software allows only fixed
8-bit subnets.  These are enabled with the ifconfig "local" argument.
To enable the standard IP broadcast address, the one compatible with
4.3, use the ifconfig "ipbrdcst" argument.

The next release of Wollongong's software will be May 27 and will be
4.3 compatible so that full subnet support as well as the ability
to set the IP broadcast address will be available.

Jerry
-----------[000087][next][prev][last][first]----------------------------------------------------
Date:      Wed, 13 May 87 10:26:41 mdt
From:      Dick Sato <sato%scdpyr.UCAR.EDU%ncar.csnet@RELAY.CS.NET>
To:        tcp-ip%sri-nic.arpa@RELAY.CS.NET
Cc:        sato%scdpyr.UCAR.EDU%ncar.csnet@RELAY.CS.NET
Subject:   Announcement of a Summer Supercomputing Institute
      Announcing the 1987 NCAR Summer Supercomputing Institute
		10-21 August 1987, Boulder, Colorado
		   Scientific Computing Division
	       National Center for Atmospheric Research

     The University Corporation for Atmospheric Research (UCAR) is pleased
to announce that a Supercomputing Institute will be held this summer at the
National Center for Atmospheric Research (NCAR) located in Boulder, Colorado.
The Institute is being sponsored by the National Science Foundation's
Division of Advanced Scientific Computing (NSF/DASC) with assistance from
the Physical Oceanography Program of NSF.  The Institute will be
managed and coordinated by NCAR's Scientific Computing Division (SCD).


BACKGROUND

     The Institute is a two-week intensive training experience designed
to provide an understanding of how supercomputing capabilities can augment
scientific research.  Learning how to apply supercomputing methods to a
variety of investigations that require large-scale computation is a
key objective of the Institute.  To meet this objective, the Institute's
curriculum has been carefully arranged to give each participant an
opportunity to explore new approaches to using supercomputing technology.
Lectures by internationally recognized experts in the field of supercomputing
and laboratory sessions are geared to maximize the learning experience by
providing real-world applications based upon current research efforts
employing supercomputers.
     A maximum of 25 senior graduate students, post-doctoral fellows, and
junior faculty will be selected from national, accredited universities and
research institutions that confer advanced degrees in the atmospheric and
physical oceanographic sciences, solar physics, and related disciplines.
Applications from individuals attending any institution meeting this
requirement will be considered.

INSTITUTE CURRICULUM

     The 1987 Supercomputing Institute will cover the following topics:
	-  Operating Systems and Machine Configurations
	-  Vectorization and Optimization Techniques
	-  Parallel Processing
	-  Numerical Techniques
	-  Software Availability and Quality
	-  Communications and Networking
	-  Graphics

INSTITUTE BENEFITS

     Successful candidates will have all travel, per diem, and accommodation
expenses paid for by the Institute.  In addition, computing time will be
made available to all participants on the CRAY X-MP/48 supercomputer at NCAR.
Support services consistent with all institute-related computing will also
be provided.

APPLICATION REQUIREMENTS

     To be considered for admission to the 1987 Supercomputing Institute,
an applicant should have a minimum graduate GPA of 3.5 (postdocs and junior
faculty excepted), provide two letters of recommendation indicating the
applicant's research capabilities and the potential for their research to
advance disciplinary knowledge, and an abstract of not more than 250 words
relating how the applicant's research endeavors (either planned or underway)
would benefit from applying supercomputing technology to their investigations.
Successful candidates should have knowledge of FORTRAN 77 and be working on
research projects that already use supercomputers or anticipate using them in
the near future.
     Applicants will be notified of their acceptance into the Institute
by 1 July 1987.

APPLICATION DEADLINE

     Individuals who meet the above requirements are encouraged to apply
for admission to the 1987 Summer Supercomputing Institute.  Please
complete the attached application form and mail it, along with all
supporting materials to:

     Richard K. Sato
     1987 NCAR Summer Supercomputing Institute
     Scientific Computing Division
     National Center for Atmospheric Research
     PO Box 3000
     Boulder, CO  80307

IMPORTANT NOTE:  All materials must be received by 15 June 1987.


			    APPLICATION FORM

Name:________________________________________________________________________

Address:_______________________________City:_____________State:______Zip:____

University/Institution Name:__________________________________________________

Department:__________________________________________________________________

Telephone (Home):_____________________________ Work:_________________________

I am a ____ Graduate Student  ____ Postdoctoral Fellow ___ Junior Faculty
       ___ Other:_____________________

I am a U.S. citizen: _____Yes   _____ No, I am a citizen of:_________________

Please include two letters of recommendation, an abstract describing your
research, and your current GPA (if applicable) with this application.

      THIS APPLICATION MUST BE RECEIVED NO LATER THAN 15 JUNE 1987


-----------[000088][next][prev][last][first]----------------------------------------------------
Date:      Wed, 13-May-87 12:26:03 EDT
From:      JERRY@STAR.STANFORD.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   RE: Wollongong TCP/IP and subnets

Your current release of Wollongong's software only allows fixed
8-bit subnets.  These are enabled with the ifconfig "local" argument.
To enable the IP broadcast address that is compatible with 4.3 and
is the standard, use the ifconfig "ipbrdcst" argument.

The next release of Wollongong's software is due May 27 and will be
4.3 compatible, so that full subnet support as well as the ability
to set the IP broadcast address will be available.

Jerry

-----------[000089][next][prev][last][first]----------------------------------------------------
Date:      Wed, 13-May-87 12:52:16 EDT
From:      matt@oddjob.UChicago.EDU (Matt Crawford)
To:        comp.protocols.tcp-ip,comp.dcom.lans,comp.sources.wanted
Subject:   IP to DECNET translation ????

Help!  We want to connect our campus IP networks to SPAN, a wide
area decnet network run by NASA.  At the moment our sole SPAN
node is a PDP-11 with a 9600 baud leased line.  We will shell
out for a microvax if we can put some sort of software on it
that will provide access from TCP as transparently as possible.

Is there anything which will give us better access than just
logging in to the span node?  Some people will want to transfer
very large data sets and buying enough scratch disk for them
would be an obstacle.  Most users don't want to learn VMS (and
DECNET/Ultrix only supports ethernet as an interface).

What we wish for is a protocol translator.  Is there one?
________________________________________________________
Matt	     University		matt@oddjob.uchicago.edu
Crawford     of Chicago     {astrovax,ihnp4}!oddjob!matt

-----------[000090][next][prev][last][first]----------------------------------------------------
Date:      Wed, 13-May-87 14:08:33 EDT
From:      whna@cgcha.UUCP
To:        comp.sys.att,comp.protocols.tcp-ip
Subject:   Connecting an AT&T 3B2/400 to an Internet

How can an AT&T 3B2/400 UN*X system be connected to

	a) an Ethernet/Cheapernet LAN
	b) an X.25 network
	c) a serial line carrying SLIP or internetwork routing protocol

for the purpose of TCP/IP internetworking (telnet, ftp, smtp, etc.)?

Any solutions, hints and tips for each of the cases listed above are welcome.

Please send e-mail to
Heinz Naef, c/o CIBA-GEIGY AG, R-1032.5.62, P.O.Box, CH-4002 Basel, Switzerland
UUCP: whna@cgcha.UUCP - BITNET: whna%cgcha.UUCP@cernvax.BITNET

-----------[000091][next][prev][last][first]----------------------------------------------------
Date:      Wed, 13-May-87 17:27:51 EDT
From:      jonab@CAM.UNISYS.COM (Jonathan P. Biggar)
To:        comp.protocols.tcp-ip
Subject:   Re: IP fragmentation, and how to avoid it

Don't change the MTU on your network interface.  What you want to do
is change tcp to never send segments that are larger than the mtu of
the Arpanet.  If you change the MTU on your interface, you will mess up
any ND or NFS access you may have.

Jon Biggar
jonab@cam.unisys.com

-----------[000092][next][prev][last][first]----------------------------------------------------
Date:      Wed, 13-May-87 17:51:12 EDT
From:      jerry@oliveb.UUCP (Jerry F Aguirre)
To:        comp.protocols.tcp-ip
Subject:   subnet and supernet?

4.3BSD provides for a "subnet" mask to make the network part of an
address larger, stealing bits from the host part.  Useful if you have
a type A or B address and wish to have several separate networks.  I am
faced with the opposite problem: several type C addresses merging into
one large network.

Does anyone know if it works to use the subnet mask to make the host part
of the address larger?  (Supernetting?)  Could I use a type C address and
a netmask of:

	0xFFFF0000	(255.255.0.0)

or even:

	0xFF000000	(255.0.0.0)

so that the routing would assume that the other hosts shared the same
network?  Has anyone tried this?  Any reason it shouldn't be done?
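A quick arithmetic check of the idea (a modern Python sketch with made-up
addresses, purely to illustrate the masking; nothing here ran on a 1987 box):

```python
import ipaddress

def same_net(a, b, mask):
    # Routing treats two addresses as "on the same network" when the
    # masked (network) portions of the addresses are equal.
    m = int(ipaddress.ip_address(mask))
    return (int(ipaddress.ip_address(a)) & m) == (int(ipaddress.ip_address(b)) & m)

# Hosts on two adjacent (hypothetical) type C networks:
print(same_net("192.9.200.5", "192.9.201.7", "255.255.255.0"))  # False: separate nets
print(same_net("192.9.200.5", "192.9.201.7", "255.255.0.0"))    # True: one "supernet"
```

Whether a given implementation honors a mask shorter than the natural
class boundary is exactly the open question above.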

-----------[000093][next][prev][last][first]----------------------------------------------------
Date:      Wed, 13-May-87 20:47:54 EDT
From:      ejnorman@uwmacc.UUCP (Eric Norman)
To:        comp.protocols.tcp-ip
Subject:   Re: Wollongong TCP/IP and subnets

In article <8705122226.AA12601@ucbvax.Berkeley.EDU>,
Iglesias@UCIVMSA.BITNET asks:

> Does anyone know if Wollongong's TCP/IP will support a subnet mask of
> something other than 8 bits.  It appears to be a fixed mask from
> 
> Also, is there any way to set the broadcast address?

Wollongong's WIN/VX 3.0 will be available in a few weeks (so they imply).
It is based on 4.3BSD.  We had a Beta release.  I know for a fact that
the broadcast address can be made all zeroes instead of the default all
ones since I did it.  Although I didn't do any testing of subnet masking,
there is no reason for me to believe it differs from 4.3BSD.
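For what it's worth, the difference between the two broadcast conventions
is just the host part of the address (a hypothetical class B network below;
modern Python, purely for the arithmetic):

```python
import ipaddress

net = ipaddress.ip_network("128.135.0.0/16")  # hypothetical class B net
# 4.3BSD's standard broadcast sets the host part to all ones;
# the older 4.2BSD convention set it to all zeros.
print(net.broadcast_address)  # 128.135.255.255
print(net.network_address)    # 128.135.0.0
```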

Eric Norman
Internet:     ejnorman@unix.macc.wisc.edu
UUCP:         ...{allegra,ihnp4,seismo}!uwvax!uwmacc!ejnorman
Life:         Detroit!Alexandria!Omaha!Indianapolis!Madison!Hyde
  
"Remember; this is just an exhibition; please, no wagering."
		-- David Letterman
--

-----------[000094][next][prev][last][first]----------------------------------------------------
Date:      Wed, 13-May-87 22:00:18 EDT
From:      ejnorman@uwmacc.UUCP (Eric Norman)
To:        comp.protocols.tcp-ip,comp.dcom.lans
Subject:   Re: IP to DECNET translation ????

In article <3770@oddjob.UChicago.EDU>,
matt@oddjob.uchicago.edu (Matt Crawford) pleads for mercy with:

> Help!  We want to connect our campus IP networks to SPAN, a wide
> 
> Is there anything which will give us better access than just
> logging in to the span node?  Some people will want to transfer

I just copied a file from a DECnet host to an IP host using
a MicroVAX running Ultrix as an intermediary with

  rsh ultrix-host dcat decnet-node::vms-file > unix-file

All I needed was an .rhosts file on the Ultrix beast.  A proxy
account on the DECnet node does not seem to allow referencing a
file relative to your home directory; i.e., the following failed:

  rsh ultrix-host dcat decnet-node::login.com

However, this does work after an rlogin to the Ultrix host:

  dcat decnet-node::login.com | rsh unix-host put vms-login-file

where my .cshrc on unix-host aliases "put" to "cat - >".

Now, methinks you would want to write some simple shell scripts
to do all that, but I reckon you would want to do that anyway
so that folks don't have to learn VMS.

> What we wish for is a protocol translator.  Is there one?

Heffalumps.  See RFC875; such a critter operating at the network
or transport layer would probably be considered miraculous.
However, up at the presentation or application layer it seems at
least partly possible.  Gatewaying mail from DECnet squawkers to
IP barkers isn't difficult, albeit without complete protocol
translation.  The above would be another example of the possibilities.

I think one of the nice things about the .rhosts or proxy account
approach is that they allow you to hide the fact that you're rising
higher up the protocol stack.

Eric Norman
Internet:     ejnorman@unix.macc.wisc.edu
UUCP:         ...{allegra,ihnp4,seismo}!uwvax!uwmacc!ejnorman
Life:         Detroit!Alexandria!Omaha!Indianapolis!Madison!Hyde
  
"And on the eighth day He said, 'Oops'."	-- me
--

-----------[000095][next][prev][last][first]----------------------------------------------------
Date:      Thu, 14-May-87 04:58:32 EDT
From:      CCECCG@NUSVM.BITNET (Chew Chye Guan)
To:        comp.protocols.tcp-ip
Subject:   NEW ARPANET SITE LIST

I was told that as of April 1 a lot of the sites have been renamed.  Can I
have the latest copy of the ARPANET site list?  If you don't have it, can
you refer me to the correct contact on the ARPANET?  Thanks.

-----------[000096][next][prev][last][first]----------------------------------------------------
Date:      Thu, 14-May-87 05:22:45 EDT
From:      henry@utzoo.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Dec's LAT protocol

> Speaking of LAT, has anyone seen a spec for it?
> ...
> I've heard conflicting rumours that LAT is and is not a secret.

I asked this one a few months ago in comp.dcom.lans.  The answer, from
people at DEC, was that LAT is company confidential and there is no
public spec for it.  A stupid, shortsighted policy, but that's DEC for you.

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

-----------[000097][next][prev][last][first]----------------------------------------------------
Date:      Thu, 14-May-87 06:45:04 EDT
From:      andy@cheviot.UUCP
To:        comp.dcom.lans,comp.protocols.tcp-ip,comp.unix.questions
Subject:   TCP/IP connections for UTS

I'm trying to find out about hardware/software products that provide
ethernet TCP/IP access to Amdahl's UTS running on an Amdahl 5860,
together with the application-level stuff like FTP and Telnet.  Public
domain preferred obviously, but all solutions considered.  Please mail
me and I will summarise to the net.
Thanks.



-- 
SENDER 	: Andy Linton 			PHONE	: +44 91 232 9233
ARPA	: andy%cheviot.newcastle.ac.uk@cs.ucl.ac.uk
JANET	: andy@uk.ac.newcastle.cheviot
UUCP	: andy@cheviot.UUCP

-----------[000098][next][prev][last][first]----------------------------------------------------
Date:      Thu 14 May 87 11:31:19-PDT
From:      JERRY@STAR.STANFORD.EDU
To:        matt@ODDJOB.UCHICAGO.EDU, tcp-ip@SRI-NIC.ARPA
Subject:   RE: IP to DECNET translation ????
I don't know if this will help, but Van Jacobson at LBL developed an
interface called DBRIDGE that allows IP packets to be sent over DECNET.
The way it works is the two hosts on either side of a DECNET link
install this software that uses DECNET mailboxes to communicate between
them.  The packets that they are transferring are IP packets.  Then
a dummy interface is put into the kernel that knows to give IP
packets to the dbridge process for transmission over DECNET.

Both Wollongong and SRI products make this software available.

Jerry
-----------[000099][next][prev][last][first]----------------------------------------------------
Date:      Thu, 14-May-87 10:33:24 EDT
From:      jas@MONK.PROTEON.COM (John A. Shriver)
To:        comp.protocols.tcp-ip
Subject:   IP fragmentation, and how to avoid it

The SunOS TCP will choose to put 1024 bytes of data in each packet
unless the socket receive high water mark is lower (so_rcv.sb_hiwat).
This is straight out of the 4.2BSD VAX code, without any change.  (At
least as of SunOS 3.2.)  Indeed, this will result in the IP packets
being fragmented on the ARPANET, which is a lose.  IP fragment
reassembly is far less robust than TCP reassembly.

This code is fixed in 4.3BSD, where it sends large packets only to
hosts on the same net (LAN), and otherwise limits itself to 576 byte
packets.  The same code also allows the data to open up beyond 1024
bytes if you have a LAN with large MTU.  This can dramatically
increase local TCP performance.

Bother your Sun technical support contact to encourage them to fix
this.  It involves adding one subroutine (tcp_mss()), and tweaking
tcp_output().

As for tweaking the MTU, I don't think that it will hurt NFS, as it is
already sending 8192 byte UDP packets that are being fragmented by the
IP layer.  I have no idea what effect it will have on ND, since ND is
proprietary.  However, better to fix the problem (TCP) than to have to
crock around it (MTU).
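To see why 1024-byte segments lose on the ARPANET, count fragments.  A
sketch (assuming the commonly cited ARPANET MTU of 1006 bytes and a
20-byte IP header; the numbers are illustrative):

```python
import math

def fragments(datagram_len, mtu, ip_hdr=20):
    # Number of IP fragments for one datagram over a link with the
    # given MTU.  Non-final fragment data lengths must be multiples
    # of 8 bytes, so round the per-fragment payload down.
    per_frag = (mtu - ip_hdr) // 8 * 8
    return math.ceil(datagram_len / per_frag)

# 1024 data bytes + 20-byte TCP header over a 1006-byte MTU:
print(fragments(1024 + 20, 1006))  # 2 fragments per segment
# A 536-byte segment fits in one:
print(fragments(536 + 20, 1006))   # 1
```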

-----------[000101][next][prev][last][first]----------------------------------------------------
Date:      Thu, 14-May-87 17:53:24 EDT
From:      randy@DBNET.CS.WASHINGTON.EDU (William Randy Day)
To:        comp.protocols.tcp-ip
Subject:   Using SLIP to link two ethernets?


Imagine two ethernets separated by several miles.  The hosts on both subnets
are speaking TCP/IP.  Has anyone given any thought to using SLIP and a
dedicated phone line to link the two subnets? How easy would this be
to cobble together?

Randy Day.
ARPA: randy@dbnet.cs.washington.edu
UUCP: {decvax|ihnp4}!uw-beaver!uw-june!randy
CSNET: randy%washington@relay.cs.net

-----------[000102][next][prev][last][first]----------------------------------------------------
Date:      Thu, 14-May-87 20:55:36 EDT
From:      roy@phri.UUCP
To:        comp.protocols.tcp-ip,comp.protocols.appletalk
Subject:   Opinions wanted on Kinetics Fastpath and Mac-IP


	We're considering buying a Kinetics Fastpath ethernet-appletalk box
so we can share our LaserWriters between our Macs and our BSD Unix machines
(Vax and Suns).  Any opinions, good or bad, on the Fastpath would be
appreciated.

	What about their IP implementation on the Mac?  Supposedly they
have telnet running on the Mac, which means they have TCP/IP running.  How
complete is this?  Does it do the whole schmear (ICMP, named, etc) or is it
just enough to get telnet working?  If it doesn't run named, how does it
do hostname->[w.x.y.z] mapping?  Are you supposed to maintain a host table
on your Mac?

	It seems to me that the real cheap way to get our Macs talking to
our Unix machines would be to implement SLIP on the Mac and plug the modem
port into the back of a Sun which also talks SLIP (right now we run kermit
over such a link; workable, but ugh!).  Then, all you gotta do is port
telnet (apparently already done) and lpr/lpd to the Mac and you have bare
bones terminal emulation and line printer access for free (no hardware
costs anyway).  Am I crazy for thinking this might work?

--
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

-----------[000103][next][prev][last][first]----------------------------------------------------
Date:      Thu, 14-May-87 21:39:25 EDT
From:      montnaro@sprite.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)


I'm sure this is naive of me, but what's a fuzzie (or fuzzball) as referred
to recently in the dial-up SMTP discussion? Replies by mail please.

-- 
         Skip|  ARPA:      montanaro@ge-crd.arpa
    Montanaro|  UUCP:      montanaro@desdemona.steinmetz.ge.com
(518)387-7312|  GE DECnet: advax::"montanaro@desdemona.steinmetz.ge.com"

-----------[000105][next][prev][last][first]----------------------------------------------------
Date:      Thu, 14-May-87 23:02:35 EDT
From:      rick@SEISMO.CSS.GOV (Rick Adams)
To:        comp.protocols.tcp-ip
Subject:   Re:  IP fragmentation, and how to avoid it

I can provide you with the source to the 4.3BSD tcp as hacked to run with
the Sun 4.2 IP. It makes a tremendous difference in performance.
It often is the difference between making a connection or not being able to 
connect at all.

Based on the following, I am assuming that you don't even need a source
license. (Right Mike?)

---rick

	From: karels@monet.berkeley.edu (Mike Karels)
	Message-Id: <8605142343.AA09396@monet.Berkeley.EDU>
	To: CERF@usc-isi.arpa
	Cc: tcp-ip@sri-nic.arpa
	Subject: Re: C implementations of TCP/IP 
	In-Reply-To: Your message of 13 May 86 22:13:00 EDT.
	Date: Wed, 14 May 86 16:43:02 PDT

	The Berkeley 4.2/4.3BSD TCP/IP code is written in C.  It's not quite
	public domain (it is copyright by the university), but the only
	restriction on its use is that the University of California be
	credited.

			Mike

-----------[000106][next][prev][last][first]----------------------------------------------------
Date:      Fri, 15-May-87 07:34:07 EDT
From:      DYOUNG@A.ISI.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   tcp/ip for HP 9000/500

Can anybody point me to a tcp/ip implementation for the HP 9000/500
series?  If not, is there some public domain source code that would be
suitable to port to this machine?

David Young
-------

-----------[000108][next][prev][last][first]----------------------------------------------------
Date:      Fri, 15-May-87 08:35:31 EDT
From:      brady@MACOM4.ARPA (Sean Brady)
To:        comp.protocols.tcp-ip
Subject:   Re:  IP fragmentation, and how to avoid it

>I can provide you with the source the the 4.3BSD tcp as hacked to run with
>the Sun 4.2 IP. It makes a tremendous difference in performance.
>It often is the difference between making a connection or not being able to 
>connect at all.

If you do have the source, would you be so kind as to allow me to use it? 
I am currently in need of doing some tcp work on a 4.2 Sun, and I am having
the usual difficulties. A copy of an improved tcp would be most appreciated.

					Sean

-----------[000109][next][prev][last][first]----------------------------------------------------
Date:      Fri, 15-May-87 11:21:28 EDT
From:      jeff@UMBC3.UMD.EDU (Jeffrey Burgan)
To:        comp.protocols.tcp-ip
Subject:   Re:  Wollongong TCP/IP and subnets

In the current version of Wollongong's IP/TCP product (2.3), they only
support 8-bit subnets.  On May 27th, or thereabouts, they will release
3.0, which is 4.3 compatible (i.e. supports the netmask and broadcast
addresses).  It appears to work pretty well.

Jeffrey Burgan
Univ. of Maryland, Balto. County
Systems Staff

-----------[000110][next][prev][last][first]----------------------------------------------------
Date:      Fri, 15-May-87 13:08:19 EDT
From:      dunigan@ORNL-MSR.ARPA (Tom Dunigan 576-2522)
To:        comp.protocols.tcp-ip
Subject:   seeking ttcp.c for VMS/TWG

Has anyone modified ttcp.c to run on DEC VMS with TWG's tcp/ip?
thanks
  tom
    dunigan@ornl-msr.arpa

-----------[000111][next][prev][last][first]----------------------------------------------------
Date:      Fri, 15-May-87 14:54:30 EDT
From:      sra@MITRE-BEDFORD.ARPA (Stan Ames)
To:        comp.protocols.tcp-ip
Subject:   ULANA RFP is OUT!

The ULANA RFP has been released.  The final specification
can be found on the host ULANA.arpa (192.12.120.30)

To get the spec ftp to ulana, user guest, password anonymous
the file is called ulana.spec

Stan Ames

-----------[000112][next][prev][last][first]----------------------------------------------------
Date:      Fri, 15-May-87 17:11:21 EDT
From:      gary@ACC-SB-UNIX.ARPA.UUCP
To:        comp.protocols.tcp-ip
Subject:   3B2 Networking


Heinz,

For connecting 3B2/400's to either a serial line or an X.25
network using TCP/IP protocols, you can purchase a board
developed by ACC for AT&T which provides both the capability
of supporting point-to-point X.25 as well as access to an
X.25-style network.

The board, termed the ACP 2250, is based on a 10 Mhz
CMOS 68000 controller and uses the Motorola 68605 X.25
Level 2 chip to support data rates in excess of 64Kbps.
The board interfaces directly to the 3B2 Input/Output bus
and as such will work with all 3B processors.

Additional capabilities include support for 64 SVCs,
DTE/DCE programmability, internal/external programmable clocks
and baud rates, and electrical interface support for RS232 or
RS422/449.

The ACP 2250 has been integrated with AT&T's Streams-based
TCP/IP product, WIN/3B (Release 2.0) under Unix System V
Release 3.

The board is an exclusive product from AT&T and is available
from them; however you may contact me directly for detailed
technical information.

In terms of Ethernet, AT&T has a product called 3Bnet.  The
hardware portion of the product, i.e. the hardware interface to
the Ethernet, has been integrated into the aforementioned
TCP/IP software package.

Finally, AT&T's "Cheapernet" is called Starlan.  As of this
writing I am not aware of any way to support TCP/IP
over that net.  What AT&T recommends is using their
ISN as a bridge from a Starlan network to an Ethernet running
TCP/IP, and then out an X.25 line via the 2250.

I hope that helps.

Gary

-----------[000113][next][prev][last][first]----------------------------------------------------
Date:      Sat, 16 May 87 17:22:13 PDT
From:      melohn@Sun.COM (Bill Melohn)
To:        comp.protocols.tcp-ip,comp.dcom.lans,tcp-ip@sri-nic.ARPA
Subject:   Re: IP to DECNET translation ????
In article <1505@uwmacc.UUCP> ejnorman@unix.macc.wisc.edu.UUCP (Eric Norman) writes:
>
>I just copied a file from a DECnet host to an IP host using
>a MicroVAX running Ultrix as an intermediary with
>
>  rsh ultrix-host dcat decnet-node::vms-file > unix-file
>

We do this all the time with our Sunlink DNA gateway. I define the following:

alias sethost  on -i dna-gateway dnalogin !$

in my .cshrc, and then can remotely login from any Unix node in our internet
to any DECnet node accessible via the Sunlink DNA DECnet gateway "dna-gateway"
with the following command:

sethost decnetnode

Similar aliases can be set up for file transfer, the "on" utility
exporting your filesystem environment to the gateway machine via NFS, which
is handy when you are transferring multiple files.

-----------[000115][next][prev][last][first]----------------------------------------------------
Date:      Sat, 16-May-87 21:29:40 EDT
From:      PAP4@AI.AI.MIT.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Using SLIP to link two ethernets?

I was wondering: why use SLIP?  There are superior protocols that have
a very useable number of implementations.  Yes, SLIP does run on just
about any asynchronous port, but if you are really serious about this
connection, there is hardware that is better suited for passing packets.

I suggest a board that has BISYNC or SDLC/HDLC.  The advantages are:

	You don't have to worry about escaping framing characters
	(the hardware does this for you)

	The CPU gets an interrupt per packet, instead of per character
	(big win on a machine with poor interrupt latency, like a VAX)

	You can run at higher speeds with synchronous modems than you
	can with asynchronous modems given the same phone line

	Packets are framed and checksummed; added reliability

It depends on your host, of course.  But most minis and mainframes
have this sort of serial interface option.  Even PC's have boards
that can do this...

With appropriate software, you can run IP encapsulated in X.25 on
an HDLC interface as well, though not everyone considers this a win.
I believe the IMPs (excuse me, PSNs) can use HDLC for HOST/PSN
communication too, when they are geographically separated.

Oh well, just thinking out loud...  Hope this helps.

-Philip
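The interrupt-rate point can be put in rough numbers (back-of-envelope
arithmetic with illustrative figures only; real costs depend on the
hardware):

```python
# At 9600 bps async, each character costs 10 bit times (start + 8 data
# + stop) and one interrupt; a per-frame (SDLC/HDLC) interface
# interrupts roughly once per packet instead.
baud = 9600
char_ints = baud // 10               # ~960 interrupts/second, async
pkt_bytes = 576                      # a typical IP datagram
sync_ints = (baud / 8) / pkt_bytes   # ~2 interrupts/second, synchronous
print(char_ints, round(sync_ints, 1))
```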

-----------[000116][next][prev][last][first]----------------------------------------------------
Date:      Sun, 17-May-87 00:54:45 EDT
From:      rick@SEISMO.CSS.GOV.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Using SLIP to link two ethernets?

The reason SLIP is popular is that it doesn't cost anything
(except maybe CPU cycles). You don't have to buy an external
board, you don't have to buy the software.

In most cases it is almost as good as a dedicated board.

---rick

-----------[000117][next][prev][last][first]----------------------------------------------------
Date:      Sun, 17-May-87 05:00:55 EDT
From:      henry@utzoo.UUCP
To:        comp.protocols.tcp-ip
Subject:   compressed SLIP

> ... Most implementations just run
> IP over the wire (with something like SLIP or compressed SLIP as the
> data link layer), with acceptable performance...

This brings to mind a question I've had for some time:  is there a formal
standard for compressed SLIP?  I assume this refers to crunching down the
IP headers on a link you know is point-to-point; an obvious thing to do,
and it really doesn't matter how you do it provided both ends agree, but
I may have a use for it and I'd like to know if there is a standard (or at
least a consensus).

(For that matter, is there a formal standard for SLIP?)

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry
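As far as I know there is no formal SLIP standard; the de facto framing
the common implementations share is plain byte stuffing around an END
delimiter.  A sketch in modern Python (the byte values are the ones the
BSD slip code uses; treat them as an assumption):

```python
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_frame(packet: bytes) -> bytes:
    # Escape any END/ESC bytes in the datagram, then mark the frame end.
    out = bytearray()
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

print(slip_frame(b"\x45\xc0\xdb").hex())  # 45dbdcdbddc0
```

Compressed SLIP would additionally crunch the IP/TCP headers before
framing; both ends have to agree on that scheme, as Henry says.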

-----------[000118][next][prev][last][first]----------------------------------------------------
Date:      Sun, 17-May-87 06:28:37 EDT
From:      swb@DEVVAX.TN.CORNELL.EDU (Scott Brim)
To:        comp.protocols.tcp-ip
Subject:   IP fragmentation, and how to avoid it

There's one other thing to check, which is rather simple.  What you
describe sounds exactly like the symptoms we used to get with hosts
trying to send IP trailers through gateways.  Be sure you have
"-trailers" in your ifconfig.
							Scott

-----------[000119][next][prev][last][first]----------------------------------------------------
Date:      Sun, 17-May-87 13:32:10 EDT
From:      budd@BU-CS.BU.EDU (Philip Budne)
To:        comp.protocols.tcp-ip
Subject:   compressed SLIP


Take a look at RFC 914; it proposes several methods
of sending IP datagrams over "thinwire" links.

-----------[000120][next][prev][last][first]----------------------------------------------------
Date:      Sun, 17-May-87 14:04:44 EDT
From:      bzs@BU-CS.BU.EDU (Barry Shein)
To:        comp.protocols.tcp-ip
Subject:   compressed SLIP


There is the Thinwire Protocol, RFC 914, which I believe deals with
exactly what you suggest.

	-Barry Shein, Boston University

-----------[000121][next][prev][last][first]----------------------------------------------------
Date:      Mon, 18-May-87 14:07:55 EDT
From:      roy@phri.UUCP (Roy Smith)
To:        comp.protocols.tcp-ip
Subject:   Re: Using SLIP to link two ethernets?

In article <201108.870516.PAP4@AI.AI.MIT.EDU> PAP4@AI.AI.MIT.EDU ("Philip
A. Prindeville") writes:
> I was wondering: why use SLIP?

	We are considering setting up a SLIP connection.  There is a spare
9600 bps STAT-MUX channel on a 56 kbps leased line connecting the two
locations we want to link that we can get access to for free.  We would
love to run sync IP on something like DMR's but even if we could convince
the bean counters to buy the DMR's and lease a private line, we can't run
sync through the MUX.  It's either take the free asynch channel or nothing.
Nowitz and Lesk had the right idea so many years ago when they got UUCP
going -- it's a lot easier to start a network if it doesn't involve
up-front expenditures for hardware.  SLIP may not be as good as "real" IP
on a synch line with smart DMA controllers, but it's a lot better than
dial-up UUCP, and the hardware cost is a lot closer to the latter than the
former.
-- 
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

-----------[000122][next][prev][last][first]----------------------------------------------------
Date:      Mon, 18-May-87 17:41:19 EDT
From:      glee@cognos.uucp (Godfrey Lee)
To:        comp.protocols.tcp-ip
Subject:   Re: Dial-up TCP/IP (was interactive SMTP over phone lines)

In article <8705121532.AA19034@armagnac.DEC.COM> "Christopher A. Kent" <kent@sonora.dec.com> writes:
>We use SLIP

Is SLIP available publicly?

-- 
-----------------------------------------------------------------------------
Godfrey Lee, Cognos Incorporated, 3755 Riverside Drive,
Ottawa, Ontario, CANADA  K1G 3N3
(613) 738-1440		decvax!utzoo!dciem!nrcaer!cognos!glee

-----------[000123][next][prev][last][first]----------------------------------------------------
Date:      19 May 87 08:21:00 CDT
From:      "RENE" <rene@navresfor.arpa>
To:        "tcp-ip" <tcp-ip@sri-nic.ARPA>
Subject:   DDN BULLETIN BOARD
 
DEAR SRI-NIC,

    COULD YOU PLEASE ADD THE FOLLOWING PEOPLE TO YOUR MAILING LIST FOR THE
DDN BULLETIN BOARD:
                     RENE@NAVRESFOR.ARPA
                     YATES@NAVRESFOR.ARPA
                     SNAVELY@NAVRESFOR.ARPA
 
SINCERELY,

RENE RODRIGUE
------
-----------[000125][next][prev][last][first]----------------------------------------------------
Date:      Tue, 19-May-87 14:44:25 EDT
From:      jas@MONK.PROTEON.COM (John A. Shriver)
To:        comp.protocols.tcp-ip
Subject:   Re: IP fragmentation and Suns

It looks like I'm encouraging unnecessary calls to Sun technical support
about the TCP/IP packet size code.  They have incorporated the 4.3BSD
changes in selecting packet size offered in the TCP Maximum Segment
Size option.  It will now only offer 536 bytes across routers, and the
packet size of the network rounded down to a multiple of 1024 for
local connections.  This comes out in SunOS 3.4, available reasonably
soon from Sun.

Other TCP/IP fixes of notable import in earlier releases:
	3.2: UDP checksums
	3.3: IP Subnets

I'm glad that some vendors are hearing the pleas for aggressive TCP/IP
support from the user community.
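My reading of the packet-size rule described above, as a sketch (not Sun
source; the 536/rounding behavior is paraphrased from this message):

```python
def choose_mss(same_net: bool, if_mtu: int) -> int:
    # Offer 536 data bytes across routers; on the local net, offer the
    # interface MTU less 40 header bytes, rounded down to a multiple
    # of 1024.
    if not same_net:
        return 536
    return (if_mtu - 40) // 1024 * 1024

print(choose_mss(False, 1500))  # 536 across routers
print(choose_mss(True, 1500))   # 1024 on an Ethernet (MTU 1500)
```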

-----------[000126][next][prev][last][first]----------------------------------------------------
Date:      19 May 87 17:13:02 GMT
From:      sundc!hadron!inco!holt@seismo.css.gov  (Mark Holt)
To:        tcp-ip@sri-nic.arpa
Subject:   looking for a PC comm board with TCP/IP

We are interested in *any* information we can get for the
following:

TCP/IP on a single-slot add-in communications board for PC clones
running MS/DOS (or concurrent DOS).  The board must be tempest'd and
proven to work with other TCP/IP processors on an Ethernet.

Thanx in advance,
Mark L Holt
MD-INCO

If I get this wrong, it's 'cause it's my first time!
-----------[000127][next][prev][last][first]----------------------------------------------------
Date:      Wed, 20-May-87 17:42:56 EDT
From:      emosel@NOTE.NSF.GOV (Eric Mosel)
To:        comp.protocols.tcp-ip
Subject:   A novice's coming out party


... hmmmm ! Yes well , as a novice I need all the advice I can
get about networking... the "foist" question is .......

  What is THE definitive book on networking that you (the networking
gurus) suggest I pick up?  E-Mail replies, please.

		Thanx for all the fish..
		Eric Mosel
		emose@note.nsf.gov

-----------[000128][next][prev][last][first]----------------------------------------------------
Date:      Wed, 20-May-87 23:23:56 EDT
From:      Mills@UDEL.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re:  Using SLIP to link two ethernets?

Roy,

Some years ago Ford Scientific Research Labs had a 14-kbps stat mux running
between Dearborn and London, with a DDCMP link between two TCP/IP-speakers
embedded in the trash. Be advised the stupendous delay dispersion introduced
by the stat-mux protocol does nasty things to the TCP retransmission-timeout
estimation algorithm. There is in fact a data point described in RFC-889
involving that link. A skeptic might conclude it can't be done, then might
be pleasantly surprised when it almost works, and come away with a new
respect for TCP robustness.
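
For reference, the retransmission-timeout estimator of that era (the
example algorithm in the TCP specification, RFC 793) can be sketched as
below; a large delay dispersion in the round-trip samples drags the
smoothed estimate around, which is exactly the trouble described above.
The parameter values are the RFC's suggested ones; the function name is
illustrative:

```python
def update_rto(srtt, rtt_sample, alpha=0.9, beta=2.0, lbound=1.0, ubound=60.0):
    # RFC 793 smoothed round-trip-time estimator (sketch).
    # srtt is the running estimate; rtt_sample the latest measurement.
    srtt = alpha * srtt + (1.0 - alpha) * rtt_sample
    rto = min(ubound, max(lbound, beta * srtt))
    return srtt, rto
```

A single 31-second sample against a 1-second estimate quadruples the
timeout; a stat mux can produce exactly such outliers.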

Dave

-----------[000129][next][prev][last][first]----------------------------------------------------
Date:      21 May 1987 02:42:47 PDT
From:      SHAPE-TC@ADA20.ISI.EDU
To:        tcp-ip@SRI-NIC.ARPA
Cc:        shape-tc@ADA20.ISI.EDU
Subject:   Info on IP tunnel for SUN 2
At SHAPE Technical Centre, The Hague, we are setting up a SUN 2/120
as our Internet host, which will talk to RSRE's gateway via X.25.
To minimize re-invention, I would appreciate hearing directly from
anyone who has experience of creating an IP "tunnel" between SUN IP
and (preferably) Morning Star Technology 1980 X.25.  Any other pointers
to similar solutions would be useful.

Thanks in advance.  Jon Wilkes,  SHAPE Technical Centre,  The Hague.
(It's lonely out here in Europe ...)
-------
-----------[000130][next][prev][last][first]----------------------------------------------------
Date:      Thu, 21-May-87 05:42:47 EDT
From:      SHAPE-TC@ADA20.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Info on IP tunnel for SUN 2

At SHAPE Technical Centre, The Hague, we are setting up a SUN 2/120
as our Internet host, which will talk to RSRE's gateway via X.25.
To minimize re-invention, I would appreciate hearing directly from
anyone who has experience of creating an IP "tunnel" between SUN IP
and (preferably) Morning Star Technology 1980 X.25.  Any other pointers
to similar solutions would be useful.

Thanks in advance.  Jon Wilkes,  SHAPE Technical Centre,  The Hague.
(It's lonely out here in Europe ...)
-------

-----------[000131][next][prev][last][first]----------------------------------------------------
Date:      Thu, 21 May 87 09:17:40 -0400
From:      Andy Malis <malis@CC5.BBN.COM>
To:        Eric Mosel <emosel@NOTE.NSF.GOV>
Cc:        tcp-ip@SRI-NIC.ARPA, malis@CC5.BBN.COM
Subject:   Re: A novice's coming out party
Eric,

I like two:

William Stallings, Data and Computer Communications, Macmillan,
1985.

Andrew Tanenbaum, Computer Networks, Prentice-Hall, 1981.

Regards,
Andy
-----------[000132][next][prev][last][first]----------------------------------------------------
Date:      21 May 1987 10:31-PDT
From:      Mike StJohns <StJohns@SRI-NIC.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   tn3270 availability
1) Is there a protocol document for it?
2) Public domain software?  Where is it on the net?
3) Types of machines its available for?
4) Works with what flavors of IBM mainframes? (MVS VM??)

Mike
-----------[000133][next][prev][last][first]----------------------------------------------------
Date:      Thu, 21-May-87 09:37:11 EDT
From:      malis@CC5.BBN.COM.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: A novice's coming out party

Eric,

I like two:

William Stallings, Data and Computer Communications, Macmillan,
1985.

Andrew Tanenbaum, Computer Networks, Prentice-Hall, 1981.

Regards,
Andy

-----------[000134][next][prev][last][first]----------------------------------------------------
Date:      Thu, 21-May-87 12:34:32 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  Info on IP tunnel for SUN 2

Jon,

Congratulations on rejoining the Internet. Be advised the Dutch PTT requires
you insert a prefix code to get out of the country. Yes, that means 15
semi-octets, not 14, and is sure to break at least something at either end
of the circuit. It may be even lonelier in Europe than you think.

Dave

-----------[000135][next][prev][last][first]----------------------------------------------------
Date:      Thu, 21-May-87 13:07:26 EDT
From:      dlee@SRI-LEWIS.ARPA (Danny Lee)
To:        comp.protocols.tcp-ip
Subject:   TCP-IP mailing list


Hello,

Can you add the following subscriber to your mailing list?

 	"tcpgrp@sri-lewis.istc.sri.com"

We have established a mail re-distribution point for our staff members 
who currently subscribe to your mailings; in an effort to reduce the 
load on the internet.  I will inform you of the people to remove from 
your list when we receive mail through this new distribution point.

Thanks,

Danny Lee
SRI International
Fort Lewis Project office.

-----------[000136][next][prev][last][first]----------------------------------------------------
Date:      Thu, 21-May-87 13:31:00 EDT
From:      StJohns@SRI-NIC.ARPA.UUCP
To:        comp.protocols.tcp-ip
Subject:   tn3270 availability

1) Is there a protocol document for it?
2) Public domain software?  Where is it on the net?
3) Types of machines its available for?
4) Works with what flavors of IBM mainframes? (MVS VM??)

Mike

-----------[000137][next][prev][last][first]----------------------------------------------------
Date:      Thu, 21 May 87 16:33:49 PDT
From:      Van Jacobson <van@lbl-csam.arpa>
To:        tcp-ip@sri-nic.arpa, Sun-Spots@rice.edu
Subject:   new version of tcpdump available
There's a new version of tcpdump (a Sun-3 ethernet monitor
program) available for anonymous ftp from host lbl-rtsg.arpa
(file tcpdump.tar or tcpdump.tar.Z).  This is version 1.11 and
contains no major changes but lots of minor bug fixes.  The file
REVHIST in the tar archive describes what changed.

[What follows is a probably-ill-considered flame at Sun.
You're already past the portion of this message with any
information content; feel free to skip the rest.]

For people interested in the source to tcpdump: In January, I
asked SMI for a letter saying I could distribute the source
(since the program was loosely based on Sun's etherfind).  They
said that should be no problem and would try to send the letter
as soon as they had had a chance to look over the program.  I
gave them a copy of the source (and I understand parts of tcpdump
will be incorporated in a future SMI release) but no letter has
shown up. 

If anyone out there has a channel to Sun and is interested in
having the tcpdump source available, I would appreciate your
banging on Sun.  You can point out that source availability can
only help sell Sun systems: the program requires Sun's NIT and,
therefore, won't run on anyone else's Unix box.  You might
also point out that the "products" that created SMI came about
through the free exchange of ideas *and source* throughout the
academic and research community.

This may seem like a lot of stew over one oyster but we've had a
lot of trouble with SMI lately over various source matters.  I
assume this is happening because they're getting big and
management responsibility is shifting from technical people to
lawyers.  I think these policy changes amount to cutting their
own throats.  After all, our whole community is tied together
with various networks so it's easy to send things to one another;
Unix has given us a "common language" so programs we write are at
least potentially useful to a large portion of the community; and
a decade of things like the BSD distribution, TeX, X Windows,
GNUmacs and net.sources have shown us how much faster we can
progress if we pass source around.  Who wants to do business with
a vendor too stupid, too venal or just too out-of-touch to
understand this? 

 - Van Jacobson

 disclaimer: The preceding was solely opinion and the opinion
	     was solely mine.
-----------[000138][next][prev][last][first]----------------------------------------------------
Date:      Thu, 21-May-87 18:51:51 EDT
From:      wild@xanath.odu.EDU
To:        comp.protocols.tcp-ip
Subject:   Defense Dept. Enlists DEC in OSI


                                  DIGITAL REVIEW
                                   MAY 18, 1987
    
                    Defense Dept. Enlists DEC In OSI Initiative
                                 By Michael Vizard
    
    LITTLETON, MASS. - Scoring a major coup in its bid to control the 
    industry's move towards open standards, DEC last week was selected by the 
    Department of Defense to participate in a project designed to build 
    gateways between networks using the Transmission Control Protocol/Internet 
    Protocol (TCP/IP) that the Defense Department sponsored and the developing 
    Open Systems Interconnect (OSI) protocol that is being advanced by the 
    International Standards Organization (ISO).  
    
    DEC's role in the project, which is being supervised by the National 
    Bureau of Standards (NBS), involves supplying its ISO-compliant networking 
    software, All-In-1 Office and Information Systems, VMS operating system 
    and a MicroVAX II to the NBS, while a library of TCP/IP networking 
    software is being provided by Network Research Corp. (NRC) of Oxnard, 
    Calif. 
    
    The Payoff
    
    "It looks like it's paid off for DEC to get all those OSI products out 
    early," said George Newman, an analyst with the Framingham, Mass., 
    research firm International Data Corp.  "It's a coup for DEC, because now 
    they have an opportunity to mold and shape OSI."
    
    Meanwhile, James Hunter, president of NRC, noted, "You can expect that DEC 
    and NRC products will be compliant with the new OSI gateways.  It's hard 
    to find true charity."
    
    For the Defense Department, the development of TCP/IP-to-OSI gateways will 
    ease its transition from the multivendor TCP/IP networking standard it 
    championed in the early 1970s to the higher-performance OSI standard. 
    
    By migrating to an OSI standard sponsored by the computer industry, the 
    Defense Department is reducing its costs associated with the design, 
    testing, maintenance and purchase of TCP/IP.
    
    The OSI standard, however, is not fully developed, and by commissioning 
    DEC and NRC to develop these gateways, the Defense Department is creating 
    a migration path from TCP/IP to OSI before the OSI standards are 
    completed.  
    
    "The OSI standards will not be ratified until late in 1988 at the 
    earliest," NRC's Hunter said, "but the government has to worry about the 
    problem of tying TCP/IP to OSI today."
    
    As a result, the NBS will first write an application that bridges OSI's 
    Message Handling Facility/X.400 (MHF) protocol to the Simple Mail Transfer 
    Protocol (SMTP) used in TCP/IP.  Later, the NBS will bridge OSI's File 
    Transfer, Access and Management Protocol (FTAM) to TCP/IP's File Transfer 
    Protocol (FTP).  
    
    "You could see a gateway as early as the summer of 1988," Hunter said. 
    
    Meanwhile, the NBS will make its specifications public so that other 
    companies will be able to build TCP/IP-to-OSI gateways. 
    
    The new gateway project, said William Johnson, president of DEC's 
    Distributed Systems group, will provide both industry and the government 
    with a clear migration path to OSI. 

-----------[000139][next][prev][last][first]----------------------------------------------------
Date:      Thu, 21-May-87 22:54:08 EDT
From:      PAP4@AI.AI.MIT.EDU ("Philip A. Prindeville")
To:        comp.protocols.tcp-ip
Subject:   Looking for Mr. BOOTP

I remember a few postings on this list about public domain implementations
of BOOTP clients for UNIX.  Didn't want to FTP the tcp-ip archives just to
grep for BOOTP related messages (might take days!!!), so I thought I would
ask the list for info on where to FTP to...

Reply to me directly.  Thanks in advance,

-Philip

-----------[000140][next][prev][last][first]----------------------------------------------------
Date:      Fri, 22-May-87 04:08:23 EDT
From:      gnu@hoptoad.UUCP
To:        comp.protocols.tcp-ip
Subject:   Attention: WWV, WWVH, WWVB, and GEOS Users

[Copied from the Usenet for all you Arpanauts.    -- gnu]

From: cgs@umd5.umd.edu (Chris Sylvain)
Newsgroups: rec.ham-radio,sci.electronics,sci.space
Subject: Attention: WWV, WWVH, WWVB, and GEOS Users
Keywords: survey of all users by NBS
Message-ID: <1692@umd5.umd.edu>
Date: 21 May 87 16:01:40 GMT

[From articles appearing in the May issue of RF Design and June's Ham Radio]
  The last user survey was in 1975, and over 10,000 responses were received.
According to the National Bureau of Standards (NBS), the responses were
"invaluable" in carrying out the mission of the NBS since 1975.

The NBS is conducting a survey of users of all its time and frequency services,
such as WWV and WWVH shortwave broadcasts, WWVB 60 kHz broadcasts, GOES satel-
lite broadcasts, and telephone time-of-day service. NBS requests users to par-
ticipate in the _Time and Frequency Services Users Survey_.
   The survey results will help NBS provide the best mix of services and levels
of service to the broad spectrum of users who depend on them. Feedback from all
kinds of users is needed to assure that the Bureau's finite resources for these
services are allocated in the most effective way.
   The Survey form is available from:   Time & Frequency Survey, Div. 524.00,
National Bureau of Standards, Boulder, CO 80303, or call (303) 497-3294 between
8 AM and 5 PM MDT to request a copy.
-- 
--==---==---==--
.. One, two! One, two! And through and through ..
   ARPA: cgs@umd5.UMD.EDU     BITNET: cgs%umd5@umd2
   UUCP: ..!seismo!umd5.umd.edu!cgs
-- 
Copyright 1987 John Gilmore; you may redistribute only if your recipients may.
(This is an effort to bend Stargate to work with Usenet, not against it.)
{sun,ptsfa,lll-crg,ihnp4,ucbvax}!hoptoad!gnu	       gnu@ingres.berkeley.edu

-----------[000141][next][prev][last][first]----------------------------------------------------
Date:      Fri, 22-May-87 08:53:02 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Radio-clockwatchers update

Folks,

There seem to be lots of clockwatchers out there who are chiming on the wrong
host for radio-synchronized time. Last October erstwhile radio-clock host
dcn1.arpa (aka dcn-gateway.arpa) at 10.0.0.111/128.4.0.1, was moved to
University of Delaware along with its dcnet (128.4) cousins; however, the WWVB
radio clock was left behind and connected to host macom1.arpa (aka
linkabit-gw.arpa) at 10.0.0.111/192.5.8.1. This was announced on at least two
occasions to this list.

The following hosts are still chiming with dcn1.arpa at its dcnet address
128.4.0.1. This host happens to be synchronized via UDP/NTP to
radio-clock hosts macom1.arpa and umd1.umd.edu, but with somewhat reduced
accuracy and reliability. The data were extracted from the log files and cover
the last week or so. These hosts should switch their chimes to
macom1.arpa 192.5.8.1 or umd1.umd.edu 128.8.10.1, which is also connected to a
WWVB radio clock. In particular, MILNET hosts should chime with umd1.umd.edu
in any case, since that host is reachable directly via a MILNET gateway
without the rough ride via the overloaded ARPANET/MILNET gateways.

Host			Chimes
------------------------------
[10.0.0.15]		1
[10.0.0.51]		28
[10.1.0.82]		8
[10.2.0.62]		13
[10.4.0.14]		7
[10.4.0.96]		4
[10.8.0.20]		1
[128.11.1.2]		2
[128.110.16.82]		7
[128.110.16.83]		21
[128.110.16.94]		1
[128.110.4.81]		2
[128.116.64.3]		1
[128.125.1.14]		28
[128.153.4.2]		126
[128.185.127.13]	10
[128.185.129.58]	2
[128.2.253.37]		2
[128.42.1.4]		1
[128.54.2.49]		18
[128.81.43.64]		2
[128.84.253.35]		2
[128.84.254.1]		7
[128.84.254.29]		6
[128.84.254.35]		6
[128.84.254.40]		7
[128.84.254.41]		7
[128.84.254.43]		6
[128.84.254.5]		6
[128.89.0.100]		8
[128.89.0.122]		1
[128.89.0.148]		3
[128.89.0.79]		3
[128.89.0.83]		1
[128.89.0.86]		5
[128.89.0.88]		4
[128.89.0.89]		7
[128.89.0.98]		10
[128.89.1.126]		1
[128.89.1.199]		1
[128.95.1.14]		7
[128.95.1.16]		5
[128.95.1.20]		5
[128.95.1.21]		4
[18.26.0.19]		2
[18.26.0.92]		13
[18.62.0.50]		3
[18.62.0.52]		3
[18.62.0.55]		2
[18.72.0.228]		4
[18.80.0.183]		10
[18.86.0.110]		1
[18.86.0.119]		6
[18.86.0.123]		4
[18.86.0.124]		1
[18.86.0.127]		1
[18.86.0.136]		1
[18.86.0.137]		1
[18.86.0.17]		1
[18.86.0.20]		5
[18.86.0.60]		5
[18.86.0.61]		4
[18.86.0.62]		7
[18.86.0.63]		8
[18.86.0.65]		8
[18.86.0.74]		3
[18.86.0.77]		2
[18.86.0.89]		5
[18.86.0.94]		1
[18.88.0.52]		5
[18.88.0.54]		12
[18.9.0.5]		1
[192.10.41.166]		4
[192.12.33.2]		15
[192.5.11.5]		7
[192.5.146.20]		3
[192.5.19.11]		5
[192.5.19.12]		2
[192.5.19.6]		2
[192.5.19.7]		3
[192.5.19.8]		1
[192.5.39.131]		2
[192.5.39.139]		8
[192.5.53.210]		2
[192.5.53.211]		9
[192.5.53.83]		10
[26.3.0.41]		7
[26.3.0.43]		12
[26.6.0.2]		6
[35.1.1.3]		1
[36.9.0.46]		1

The following hosts are chiming with dcnet host dcn6.arpa 128.4.0.6, which is
another 9600-bps hop from dcn1.arpa. Clearly, this is a bad choice, not only
because this host is farther away and less accurate, but also because the host
is frequently used for program development and may have buggy software
running. These clockwatchers should switch to one of the above solid players.

Host			Chimes
------------------------------
[10.0.0.121]		62
[10.0.0.51]		14
[10.1.0.121]		76
[128.125.1.15]		65
[128.125.1.16]		68
[128.2.253.37]		1
[128.54.0.10]		125
[128.54.2.49]		28
[128.59.16.101]		7
[128.81.43.64]		1
[128.95.1.21]		2
[192.10.41.166]		1
[192.12.33.2]		4
[192.5.39.92]		3
[26.0.0.61]		1
[26.3.0.43]		10
[26.5.0.65]		70
[26.7.0.65]		86

As you can see from the above, some of the players chime more often than
others. While UDP/TIME and UDP/NTP have relatively small impact on the server
resources, it probably serves no useful purpose to chime a particular host
with UDP/TIME more often than once or twice per day. For accuracies better
than UDP/TIME (in the order of a second or less), it is far better to use
UDP/NTP, which both macom1.arpa and umd1.umd.edu support. A 4.3bsd
daemon/synchronizer for UDP/NTP is available from Mike Petry
(petry@trantor.umd.edu).
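
For anyone wiring up a once-a-day chime, the UDP/TIME exchange (RFC 868)
is trivial: send an empty datagram to port 37 and read back a 32-bit
big-endian count of seconds since 1900-01-01 UTC. A minimal client
sketch (names are mine, not from any distributed implementation):

```python
import socket
import struct

def parse_time_reply(data):
    # RFC 868: reply is a 32-bit big-endian count of seconds
    # since 1900-01-01 00:00 UTC.
    (secs,) = struct.unpack('!I', data[:4])
    return secs

def udp_time(host, timeout=5.0):
    # Send an empty datagram to UDP port 37 and read the 4-byte reply.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(b'', (host, 37))
        data, _ = s.recvfrom(4)
        return parse_time_reply(data)
    finally:
        s.close()
```

UDP/NTP gives much better accuracy, but this is the whole of the cheaper
protocol.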

Dave

-----------[000142][next][prev][last][first]----------------------------------------------------
Date:      22 May 87 14:12:00 PST
From:      <art@acc.arpa>
To:        "tcp-ip" <tcp-ip@sri-nic.arpa>
Cc:        ietf@gateway.mitre.org
Subject:   IP Datagram sizes

I don't recall ever seeing a suggestion such as follows, so I thought
that I'd throw the idea out for comment.

BACKGROUND:

In the internet environment, two end hosts generally don't know what
the maximum IP datagram size that can traverse the network is.  The
current solutions seem to be:

	1) negotiate down to the minimum of the datagram limits of
	   the directly connected networks.

	2) if using a gateway, use guaranteed limit of 576.

	3) use a fixed TCP segment size and don't think about it.

Solution 1 can cause lots of unneeded fragmentation
(i.e. host-ethernet-arpanet-ethernet-host).
Solution 2 may be unnecessarily suboptimal for the path.
Solution 3 may perform as either 1 or 2.

PROPOSAL:

Add an entry to the IP routing table which gives maximum datagram size
for sending to this destination network.  This entry would be initialized
based on the directly attached network used to send to that destination.
Add a new ICMP message type which a gateway sends back to an originating
host when it fragments an IP datagram.  The ICMP message would identify
the destination network and specify what size it had to fragment the
datagram to.  The originating host would update the limits for that network
in its IP routing table.  The originating host should adjust its segment
size (either immediately or on new TCP connection) to optimize IP datagram
size.  If current implementations ignore the new ICMP message, then they
would continue to operate as always.
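
The host side of this proposal might be sketched as follows. This is
purely illustrative: the ICMP message type does not exist, and the data
structure and field names are hypothetical.

```python
DEFAULT_LIMIT = 576  # guaranteed minimum from the IP specification

# destination network -> current max datagram size for that route
route_limit = {}

def init_route(dest_net, first_hop_mtu):
    # Initialized from the directly attached network used to reach it.
    route_limit[dest_net] = first_hop_mtu

def on_fragmentation_report(dest_net, fragment_size):
    # Hypothetical ICMP message from a gateway that had to fragment:
    # lower the recorded limit for that destination network.
    if fragment_size < route_limit.get(dest_net, DEFAULT_LIMIT):
        route_limit[dest_net] = fragment_size

def datagram_size(dest_net):
    # Hosts that never hear the new ICMP type just keep today's behavior.
    return route_limit.get(dest_net, DEFAULT_LIMIT)
```

The originating TCP would consult datagram_size() when choosing a
segment size, immediately or at the next connection.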

Any Comments?
						Art Berggreen
						art@acc.arpa

------
-----------[000143][next][prev][last][first]----------------------------------------------------
Date:      Fri, 22-May-87 12:16:41 EDT
From:      minshall@OPAL.BERKELEY.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: tn3270 availability

Mike,

>   1) Is there a protocol document for it?

No.  It should be written down, and I should probably be part of the
writing, and I know some questions that need to be answered, and
I know where some of the answers are, but I haven't had time to do it.

>   2) Public domain software?  Where is it on the net?

Yes.  On 'arpa.berkeley.edu' you may anonymously ftp 'pub/tn3270tar'
(in binary mode).

>   3) Types of machines its available for?

It works on 'all' 4.2/4.3 Unices (though there is a bug, simple to fix,
which makes it VERRRY slow on 4.2 machines).  I am working on a version
for MSDOS (really PCDOS), and may be done fairly soon.  In addition,
FTP Software has a beta test version for MSDOS.  In addition, IBM announced
a version (derived from a different version) for PC's.  In addition,
Ungermann-Bass has some interest in this area (as may other vendors).

I don't know of anyone who has ported it to VMS, though we may be
doing that at Berkeley soon (using a newly restructured tn3270
we are using locally).

>   4) Works with what flavors of IBM mainframes? (MVS VM??)

I think it works with both of the VM OS's (IBM/Wisconsin's and
the Spartacus/Fibronics).  In the MVS case things are a bit less clear.
In general it seems to function (at some level) with code derived
from the UCLA base.  I'm not sure about the other products (many of
them do '3270 - to - host over the network').

Greg

-----------[000144][next][prev][last][first]----------------------------------------------------
Date:      Fri, 22-May-87 13:00:16 EDT
From:      rms@ACC-SB-UNIX.ARPA.UUCP
To:        comp.protocols.tcp-ip
Subject:   re: tn3270 availability

Mike,

TN3270 is not really a protocol, but rather a program to interpret raw IBM
3270 display protocol transmitted over a Telnet connection.  Other than
that, the TN3270 protocol deals with what Telnet negotiations are required,
when, and in what sequence.

I believe Greg Minshall at U. C. Berkeley is the originator of TN3270, which
was first implemented for Berkeley Unix.  It makes heavy use of Curses, and
also uses a TERMCAP-like file to map the user's ASCII keyboard into a 3270
EBCDIC keyboard.  TN3270 runs on 4.2/4.3 BSD Unix and its clones (Sun's,
etc.), and has also been ported (by Minshall) to run under MS-DOS with the
Ungermann Bass Personal-NIU Ethernet adapter.  CMU has ported the DOS version
to run with their enhanced version of the MIT PC/IP code, and Univ. of Md.
did the same for IBM.  FTP Software has a version in beta test which they are
about to release.  I am sure there are more, and there are certainly some in
the works.

We have been using TN3270 on our 4.3-based VAX, and it works well once you
get the keyboard set up in a reasonable fashion.  It is a bit of a CPU hog
though.  Also, I don't think FTP Software will mind me saying that we got
a beta release of their software and it's a dynamite product.  I can finally
trash my 3278 and VT100 and run Unix vi and IBM full-screen applications on
the same PC over a Telnet connection.  It would be nice to see the same
capability on a Macintosh.

The DOS-based version is public domain and can be acquired from Berkeley
by contacting Greg Minshall.  The Unix version is also public domain, but
you probably have to have a license to use the curses library.  It is
included with the 4.3BSD distribution.

TN3270 operates with ACCES/MVS from ACC, VM TCP/IP from IBM (both 5798-DRG
and 5798-FAL), Wiscnet, KNET from Fibronics, and the public-domain MVS code
from UCLA.  I don't know if it works with DDN/MVS from Network Solutions
since the presentations I have attended made no mention of full-screen
support across Telnet.  They prefer to use SimWare for this.

I should also mention that Univ. of Wisconsin has developed similar software
for running full-screen applications across Telnet.  I believe it also runs
on PC's under DOS.  Marvin Solomon is a point of contact for this.

The real authority for TN3270 is Greg Minshall.  He reads this mailing list,
so I am sure he will respond to your query.  He can probably add a lot to
what I have said.

Ron Stoughton
ACC

-----------[000145][next][prev][last][first]----------------------------------------------------
Date:      Fri, 22-May-87 18:12:00 EDT
From:      art@ACC.ARPA
To:        comp.protocols.tcp-ip
Subject:   IP Datagram sizes


I don't recall ever seeing a suggestion such as follows, so I thought
that I'd throw the idea out for comment.

BACKGROUND:

In the internet environment, two end hosts generally don't know what
the maximum IP datagram size that can traverse the network is.  The
current solutions seem to be:

	1) negotiate down to the minimum of the datagram limits of
	   the directly connected networks.

	2) if using a gateway, use guaranteed limit of 576.

	3) use a fixed TCP segment size and don't think about it.

Solution 1 can cause lots of unneeded fragmentation
(i.e. host-ethernet-arpanet-ethernet-host).
Solution 2 may be unnecessarily suboptimal for the path.
Solution 3 may perform as either 1 or 2.

PROPOSAL:

Add an entry to the IP routing table which gives maximum datagram size
for sending to this destination network.  This entry would be initialized
based on the directly attached network used to send to that destination.
Add a new ICMP message type which a gateway sends back to an originating
host when it fragments an IP datagram.  The ICMP message would identify
the destination network and specify what size it had to fragment the
datagram to.  The originating host would update the limits for that network
in its IP routing table.  The originating host should adjust its segment
size (either immediately or on new TCP connection) to optimize IP datagram
size.  If current implementations ignore the new ICMP message, then they
would continue to operate as always.

Any Comments?
						Art Berggreen
						art@acc.arpa

------

-----------[000146][next][prev][last][first]----------------------------------------------------
Date:      Fri, 22-May-87 18:49:58 EDT
From:      JNC@XX.LCS.MIT.EDU ("J. Noel Chiappa")
To:        comp.protocols.tcp-ip
Subject:   Re: IP Datagram sizes


	This or a close variant of it sounds like a good idea to me.
It's been clear for a time that the TCP MaxSegSize negotiation only
gives you part of what you want.

	I'd suggest two minor changes. First, the message gives 'the
maximum datagram size for sending to the destination *host + TOS*'.
(It has to work if the destination net is subnetted, we don't need two
messages, blah, blah, blah standard JNC flame.)
	It's also not clear whether you'd make it an ICMP message that
was returned every time a message was fragmented. (In any case, you
can simulate that using the existing Don't Fragment flag.) Such a
message makes using fragmentation for real almost impossible; the
extra network load every time a packet was fragmented would be
significant (like hosts that ignore Redirects). I think you'd want a
special mechanism which the user has to invoke, sort of like record
route, where it goes along the path; it is initialized to the MTU of
the outbound link from the host, and each node in the path resets the
value to the min of that and of the MTU on the next hop link. I don't
think you want a special ICMP type, since then all switches would
have to examine all packets going through to see if they were an ICMP
packet of that type; extra overhead. I think the right thing is an IP
option, 'record minimum MTU'.
	In general, I think this is a good idea though.
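
The per-hop processing of the suggested 'record minimum MTU' option
amounts to the following sketch (illustrative only; no such IP option
number has been assigned, and the names are mine):

```python
def record_min_mtu(option_value, outbound_mtu):
    # Each switch along the path lowers the carried value to the MTU
    # of its next-hop link, like a running minimum.
    return min(option_value, outbound_mtu)

def path_min_mtu(first_hop_mtu, hop_mtus):
    # The originating host initializes the option to its own link MTU;
    # the destination sees the minimum over the whole path.
    value = first_hop_mtu
    for mtu in hop_mtus:
        value = record_min_mtu(value, mtu)
    return value
```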

	Noel
-------

-----------[000147][next][prev][last][first]----------------------------------------------------
Date:      Fri, 22 May 87 20:43:00 EST
From:      (David Conrad) <davidc@terminus.umd.edu>
To:        rms@ACC-SB-UNIX.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: tn3270 availability
A clarification:

>                                             CMU has ported the DOS version
>to run with their enhanced version of the MIT PC/IP code, and Univ. of Md.
>did the same for IBM.

    The CMU version of tn3270 is based on MIT PC/IP telnet with the 3270
additions written by Jacob Rehkter of IBM Yorktown and merged by Drew
Perkins.  It wasn't a port of the Berkeley DOS code and I don't believe
it's publicly available outside of CMU.  The University of Maryland
version is based on the Yorktown/CMU version. 
  
    As for the protocol, I seem to remember some talk a while ago of an
RFC that would describe entering and exiting 3270 emulation mode (or some
such).  Has anything come of this?

>Ron Stoughton

-drc
-------------------------------------------------------------------------------
David R. Conrad      The University of Maryland       arpa: davidc@umd5.umd.edu
(301) 454-2946              PC/IP Group             bitnet: conradd@umdd.bitnet


-----------[000148][next][prev][last][first]----------------------------------------------------
Date:      Fri, 22-May-87 20:05:20 EDT
From:      STAHL@SRI-NIC.ARPA (Mary Stahl)
To:        comp.protocols.tcp-ip
Subject:   Mailbridge homing tables

The gateway homing tables for all ARPANET and MILNET nodes have been
updated and are now available online at the NIC.  Gateway assignments
will be effective 15 June 87, as announced in the just-released DDN
Management Bulletin #33.

The files to FTP from SRI-NIC.ARPA are:

   NETINFO:ARP-MAILBRIDGE-HOMINGS.TXT (for net 10 nodes)
    or
   NETINFO:MIL-MAILBRIDGE-HOMINGS.TXT (for net 26 nodes)

-----------[000149][next][prev][last][first]----------------------------------------------------
Date:      Fri, 22-May-87 20:11:00 EDT
From:      CLYNN@G.BBN.COM
To:        comp.protocols.tcp-ip
Subject:   Re: IP Datagram sizes

Art,
	I don't think that we NEED more message types (although I have
suggested some).  We have a Very Strong Hint already in place - all we
need to do is to have the IP reassembly code notice the size of the
First fragment of a fragmented datagram and pass it up to the higher
layers.  TCP could then send the appropriate max seg size option to the
other end; the routing table could record it for use in subsequent
connections (by the time a packet is fragmented, it MAY be too late
to help the current connection, depending on the packetization
algorithm being used).
	This assumes that the IP fragmentation algorithms split a
datagram so that the size of the first fragment is determined by the
MTU (and not, for example, into n equal pieces).  Are there any
implementations which do not make the first fragment as large as possible??
	Note that this is one of the things that a system may do without
the need for cooperation from other systems.  Note also that, since
the routes going and coming may not be the same, the size a system finds may
not be the best one for datagrams it sends.

Charlie

-----------[000150][next][prev][last][first]----------------------------------------------------
Date:      Fri, 22-May-87 21:43:00 EDT
From:      davidc@TERMINUS.UMD.EDU (David Conrad)
To:        comp.protocols.tcp-ip
Subject:   Re: tn3270 availability

A clarification:

>                                             CMU has ported the DOS version
>to run with their enhanced version of the MIT PC/IP code, and Univ. of Md.
>did the same for IBM.

    The CMU version of tn3270 is based on MIT PC/IP telnet with the 3270
additions written by Jacob Rehkter of IBM Yorktown and merged by Drew
Perkins.  It wasn't a port of the Berkeley DOS code and I don't believe
it's publicly available outside of CMU.  The University of Maryland
version is based on the Yorktown/CMU version. 
  
    As for the protocol, I seem to remember some talk awhile ago of an
RFC that would describe entering and exiting 3270 emulation mode (or some
such).  Has anything come of this?

>Ron Stoughton

-drc
-------------------------------------------------------------------------------
David R. Conrad      The University of Maryland       arpa: davidc@umd5.umd.edu
(301) 454-2946              PC/IP Group             bitnet: conradd@umdd.bitnet

-----------[000151][next][prev][last][first]----------------------------------------------------
Date:      Sat, 23-May-87 12:30:00 EDT
From:      Kodinsky@MIT-MULTICS.ARPA
To:        comp.protocols.tcp-ip
Subject:   Re: A novice's comming out party

Eric - I would suggest three sources of information that you could
consult in this, your first step on a long and interesting journey:

First - "Computer Networks" by Tanenbaum.  It
presents a good general overview of networking.  At Spartacus Computers
we strongly recommend it to all newcomers to networking.

Second - after reading Tanenbaum - get the DDN protocol handbook
(actually a three volume handbook, about 8 inches thick).  It will give
you all the gory details of the Arpa Net, TCP, IP, etc, etc.

Finally - Join the TCP/IP discussion list on the arpanet.  Even if you
have nothing to say, there is plenty to hear.

Good luck, and welcome

Frank Kastenholz

-----------[000152][next][prev][last][first]----------------------------------------------------
Date:      Sat, 23-May-87 15:19:46 EDT
From:      steve@BRL.ARPA (Stephen Wolff)
To:        comp.protocols.tcp-ip
Subject:   Re:  A novice's comming out party

Eric -

One thing no one has mentioned yet: Tanenbaum and Stallings are **old**
books - Tanenbaum, for example, has nothing on IP.  If you want to copy
a lovely little primer written by Dave Crocker (of MMDF "fame") from
Ungermann-Bass, that gives all the gory details on IP you're welcome.

Cheers,  -s

-----------[000153][next][prev][last][first]----------------------------------------------------
Date:      Sat, 23-May-87 18:44:44 EDT
From:      jas@MONK.PROTEON.COM (John A. Shriver)
To:        comp.protocols.tcp-ip
Subject:   IP Datagram sizes

Two problems with looking at incoming fragments.

1. It tells you the other guy is sending packets that are too large.

2. The TCP MSS option is only valid in SYN packets, which almost
always have no data.  You will find out too late.

Another interesting problem to think about is that the fragmentation
issue could shift dynamically as routes change.

I'd guess that the first tool would be an ICMP record MSS type, or
some IP option.  Of course, not many routers handle source routing
yet...

-----------[000154][next][prev][last][first]----------------------------------------------------
Date:      Sat, 23-May-87 23:02:00 EDT
From:      CERF@A.ISI.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: IP Datagram sizes

Charlie,

Variations in paths and the possibility of multiple fragmentation of
the first datagram fragment suggest that your "Strong Hint" may also
be very misleading.

Vint

-----------[000155][next][prev][last][first]----------------------------------------------------
Date:      Sun, 24-May-87 01:17:05 EDT
From:      geof@apolling.UUCP (Geof Cooper)
To:        comp.protocols.tcp-ip
Subject:   Re: IP Datagram sizes

I like the idea of an IP-level solution to the fragmentation problem
since it has application to UDP protocols (I know that none that exist
today could use it, but that's no excuse for ignoring UDP).

Isn't there a destination unreachable message with the reason being
"can't fragment and had to" (sorry my ICMP spec is at the office)?  If
not, we could certainly add one.

In that case, the idea is to always send TCP packets with the "don't
fragment" bit set.  Use the scheme suggested that keeps track of MTU's
in the routing cache.  Update the cache based on DU's received
(decrease the MTU a bit and try again) -- time out the entry on a long
timer to be able to detect new routes.

The obvious improvement is to have the ICMP message also include the
MTU restriction that is appropriate -- that requires changing ICMP, of
course, but it would probably be a good idea.
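
The don't-fragment probing loop described here can be sketched as follows (purely illustrative: the cache, the timer value, and the table of sizes to step down through are my inventions, since the ICMP message as specified carries no MTU value):

```python
import time

DEFAULT_MTU = 1500
AGE_OUT = 10 * 60                       # long timer: retry large sizes later
STEP_DOWN = [1500, 1006, 576, 296, 68]  # illustrative sizes to shrink through

class PathEntry:
    def __init__(self):
        self.mtu = DEFAULT_MTU
        self.stamp = time.time()

cache = {}                              # destination -> PathEntry

def mtu_for(dst):
    """MTU to use for packets (sent with Don't Fragment) to dst."""
    entry = cache.get(dst)
    if entry is None or time.time() - entry.stamp > AGE_OUT:
        cache[dst] = entry = PathEntry()   # aged out: probe large again
    return entry.mtu

def on_dest_unreachable_frag_needed(dst):
    """A Destination Unreachable ("fragmentation needed and DF set")
    came back: decrease the cached MTU a bit and try again."""
    entry = cache.setdefault(dst, PathEntry())
    smaller = [size for size in STEP_DOWN if size < entry.mtu]
    entry.mtu = smaller[0] if smaller else STEP_DOWN[-1]
    entry.stamp = time.time()
```

The improvement suggested in the last paragraph (the ICMP message reporting the allowable MTU directly) would replace the step-down guessing with a single assignment.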

-----------[000156][next][prev][last][first]----------------------------------------------------
Date:      Sun, 24-May-87 09:21:55 EDT
From:      hwb@MCR.UMICH.EDU (Hans-Werner Braun)
To:        comp.protocols.tcp-ip
Subject:   Re: IP Datagram sizes

I don't understand what this whole fuzz about messages to help negotiating
the MSS is all about.

First of all, the assumption that the paths are symmetric in both
directions is not valid any more, in particular with the NSFNET, which
has now been running for almost a year, and the upcoming networks from
other agencies (like NASA) and maybe even the frequently quoted
Interagency Research Internet. The previous tree structure of the
Internet has certainly been overtaken by events by now, or at least is
not guaranteed any more. All new schemes we come up with have to survive
in a real meshed net of networks. Most if not all I have heard here so
far assume that there are symmetric paths.

Second, and as someone else has pointed out before, we only have influence
on the MSS in the first packet exchange, i.e., as seen from the host. Any
extension to negotiating the MSS otherwise is therefore non-trivial and
needs to be well architected, not least because all the host
implementations will need to be changed.

Third, the Berkeley folks have changed their MSS attitude considerably with
the version 4.3bsd. The assumption now is to use local network sizes if
you can be reasonably sure that the packets stay on the local physical
network, and to use the only at least somewhat guaranteed maximum size
of 576 bytes otherwise. This strikes me as an excellent idea.

What are we really talking about? Most of what we are discussing implies
the difference between 576 bytes and 1500 bytes, i.e., the maximum record
size on an Ethernet. But 1500 bytes is less than three times 576 bytes.
In the longer run, i.e., a very few years, what we REALLY need are much
larger packets than 1500 bytes. This will become imperative with the
expected appearance of very high speed networks. I cannot help
thinking that a reasonable thing to do for today, supposing you want to
reach other than your local net, is to stick with the 576 byte
limit (a limit that is spelled out all over the place) and rather design
future networks which allow at least 20K or 40K packets on very high speed
networks which might run at multiple hundreds of megabits per second or
higher. Even if the local speeds are much lower than this, there could be
a higher speed piece in the middle. These short packets are in fact a real
problem already at much lower speeds, and they are killing the gateways
because of the overhead they impose.

	-- Hans-Werner

-----------[000157][next][prev][last][first]----------------------------------------------------
Date:      Sun, 24-May-87 11:01:33 EDT
From:      zwang@CS.UCL.AC.UK (Zheng Wang)
To:        comp.protocols.tcp-ip
Subject:   IEN documents

I am looking for the IEN 30, IEN 48 which are not on line now in our dept.
If anyone has got them on line, please send them to me.

Thank you in advance!

-----------[000158][next][prev][last][first]----------------------------------------------------
Date:      Sun, 24-May-87 16:16:57 EDT
From:      pdb@sei.cmu.edu (Patrick Barron)
To:        comp.protocols.tcp-ip
Subject:   Re: IEN documents

In article <8705241500.AA26059@ucbvax.Berkeley.EDU> zwang@CS.UCL.AC.UK (Zheng Wang) writes:
>I am looking for the IEN 30, IEN 48 which are not on line now in our dept.
>If anyone has got them on line, please send them to me.

All the IEN's are on line and available via anonymous FTP from SRI-NIC.ARPA.
You'd want to retrieve IEN:IEN-30.TXT and IEN:IEN-48.TXT.

--Pat.

-----------[000159][next][prev][last][first]----------------------------------------------------
Date:      Sun, 24 May 87 15:48:47 GMT-0:00
From:      Zheng Wang <zwang@Cs.Ucl.AC.UK>
To:        tcp-ip@sri-nic.arpa
Cc:        zwang@Cs.Ucl.AC.UK
Subject:   IEN documents
I am looking for the IEN 30, IEN 48 which are not on line now in our dept.
If anyone has got them on line, please send them to me.

Thank you in advance!
-----------[000160][next][prev][last][first]----------------------------------------------------
Date:      Mon, 25-May-87 07:50:00 EDT
From:      VTTTELTY@FINFUN.BITNET
To:        comp.protocols.tcp-ip
Subject:   Where to get TCP/IP-Netbioses and VT100-Telnet?


Where can I find the following products:

1. Netbios on TCP/IP for IBM PC/AT and 3Com Ethernet controllers.

2. VT100-Telnet (supporting function keys) for IBM PC/AT.

3. Netbios on TCP/IP for VAX/VMS.

Santtu Mäki
Technical Research Centre of Finland/Telecommunications lab.

-----------[000161][next][prev][last][first]----------------------------------------------------
Date:      Mon, 25-May-87 11:32:42 EDT
From:      geof@imagen.UUCP (Geoffrey Cooper)
To:        comp.protocols.tcp-ip
Subject:   Re: IP Datagram sizes

> I don't understand what this whole fuzz about messages to help negotiating
> the MSS is all about.
> will need to be changed.
> ...
> Third, the Berkeley folks have changed their MSS attitude considerably with
> the version 4.3bsd. The assumption now is to use local network sizes if
> you can be reasonably sure that the packets stay on the local physical
> network, and to use the only at least somewhat guaranteed maximum size
> of 576 bytes otherwise. This strikes me as an excellent idea.

What about subnets?  If I have a cluster of subnets, each of which has
an MTU of 1500 bytes, I really want the extra speed.  And today's
gateways, which generally don't keep up with the full LAN bandwidth,
make an excellent case for using large packets when sending to hosts
that are off the current network.

- Geof
---
{decwrl,sun,saber}!imagen!geof

-----------[000162][next][prev][last][first]----------------------------------------------------
Date:      Mon, 25-May-87 14:07:48 EDT
From:      hope@gatech.edu (Theodore Hope @ LEGOLAND)
To:        comp.protocols.tcp-ip
Subject:   Re:  A novice's comming out party

The following IEEE tutorial is a good overview of all kinds of stuff:

 Tutorial: Computer Communications, Architectures, Protocols, and Standards.
 William Stallings, Ed.
 IEEE Catalog Number EH0226-1
 IEEE Computer Society Order Number 604


Also, I'd recommend the following [text]book:

 Data and Computer Communications
 William Stallings
 Macmillan Publishing Co.
 ISBN 0-02-415440-7
-- 
Theodore Hope
School of Information & Computer Science, Georgia Tech, Atlanta GA 30332
CSNet: Hope@gatech		ARPA: Hope@Gatech.EDU
uucp:	...!{akgua,decvax,hplabs,ihnp4,linus,seismo,ulysses}!gatech!hope

-----------[000163][next][prev][last][first]----------------------------------------------------
Date:      Mon, 25 May 87 12:50 N
From:      <VTTTELTY%FINFUN.BITNET@wiscvm.wisc.edu>
To:        TCP-IP@SRI-NIC.ARPA
Subject:   Where to get TCP/IP-Netbioses and VT100-Telnet?

Where can I find the following products:

1. Netbios on TCP/IP for IBM PC/AT and 3Com Ethernet controllers.

2. VT100-Telnet (supporting function keys) for IBM PC/AT.

3. Netbios on TCP/IP for VAX/VMS.

Santtu Mäki
Technical Research Centre of Finland/Telecommunications lab.

-----------[000164][next][prev][last][first]----------------------------------------------------
Date:      Mon, 25-May-87 18:02:22 EDT
From:      hwb@MCR.UMICH.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: IP Datagram sizes

Put your gateways into promiscuous mode and pretend you have a flat space
in the hosts. That assumes of course that the ARP caches time out properly.
How about the 8K requests Sun uses for NFS, which appear as UDP fragments?

	-- Hans-Werner

-----------[000165][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 02:15:33 EDT
From:      elwell@osu-eddie.UUCP
To:        comp.protocols.tcp-ip
Subject:   Market Survey

I'm working up some artwork for a "Captain Ethernet" T-shirt, in honor of
our local networking hacker.  I thought that some of you might (a) be
interested in buying one too, and (b) have some interesting ideas for an
appropriately superhero-like logo.  The cost would probably run about $4.50
to $5.00 plus postage, but it all depends on how many shirts I can order at
once.  This is not a profit-making venture...

If you'd be interested in getting one, or if you have logo ideas, send
me some mail!


-=-


							Clayton Elwell
The meek are getting ready...			Elwell@Ohio-State.ARPA
					   ...!cbosgd!osu-eddie!elwell

-----------[000166][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 05:04:08 EDT
From:      gnu@hoptoad.UUCP
To:        comp.protocols.tcp-ip
Subject:   re: tn3270 availability

rms@ACC-SB-UNIX.ARPA (Ron Stoughton) writes:
> The DOS-based version is public domain and can be acquired from Berkeley
> by contacting Greg Minshall.  The Unix version is also public domain, but
> you probably have to have a license to use the curses library.  It is
> included with the 4.3BSD distribution.

The curses library can be gotten without license; I have a copy.
If you get it from a 4.3 tape, you signed a license saying you would
not distribute it; but I got it straight from the author.  It is
copyright by him (Ken Arnold) and the U. of California, but the
only restriction is that you leave in the copyright notices.

If anybody needs a copy of the real 4.3BSD curses library for unlicensed
use, let me know.
-- 
Copyright 1987 John Gilmore; you may redistribute only if your recipients may.
(This is an effort to bend Stargate to work with Usenet, not against it.)
{sun,ptsfa,lll-crg,ihnp4,ucbvax}!hoptoad!gnu	       gnu@ingres.berkeley.edu

-----------[000167][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 08:20:57 EDT
From:      sanand@radha.UUCP (Sanand Patel)
To:        comp.protocols.tcp-ip
Subject:   end-to-end delays

If anyone has pointers to papers that discuss end-to-end delays
(+throughput) measured at the (TCP) socket interface, I would appreciate
such info. Environments of interest are of the ethernet, proteon, token ring
type.

Thanks
Sanand Patel
---
--- seismo!mnetor!radha!sanand
--- utzoo!{lsuc,dciem}!radha!sanand
--- 416-293-9722

-----------[000168][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 08:45:44 EDT
From:      louden@GATEWAY.MITRE.ORG.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re:  IP Datagram sizes

Note that 576 is not the guaranteed limit for the networks but for the
reassembly buffers in the receiving host.  The networks can fragment
at smaller sizes and some do to get through sat-links and such.

-----------[000169][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 11:27:47 EDT
From:      dcrocker@ubvax.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re:  A novice's comming out party

In article <8705231519.aa20147@SMOKE.BRL.ARPA> steve@BRL.ARPA (Stephen Wolff) writes:
>... If you want to copy
>a lovely little primer written by Dave Crocker (of MMDF "fame") from
>Ungermann-Bass, that gives all the gory details on IP you're welcome.

Steve has done me an unexpected service.  The primer is not quite ready
for the printers, but is very close.  However, Ungermann-Bass has not
yet fully decided how to use this document.  It was written as a
moderately sanitary introduction to the topic of TCP/IP, with a
reduced (if not minimal) amount of vendor-specific content.  (You
may prefer the term 'hype'.)

You might try contacting your local UB sales offices, tho the folks
there will not yet have heard of the document.  This will create just
the field sales demand we need to get a large-scale printing.

-----------[000170][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 11:34:00 EDT
From:      weltyc@NIC.NYSER.NET (Christopher A. Welty)
To:        comp.protocols.tcp-ip
Subject:   .rev files and gethostbyaddr


	We are having a small problem with gethostbyaddr... It doesn't
work.  Currently most of our stuff is recompiled to work with the host
table, but for obvious reasons we would like to get this fixed.  
	The problem seems to be in the .rev files for named.  We have
quite a few subnets, and had one file for each, and it (gethostbyaddr)
was working fine but the configuration would crash the nameserver...
The documentation I have is of no use.  Does anyone have a WORKING set
of .rev files (and the boot file) for multiple subnets that I could see?

						-Chris Welty
						 weltyc@cs.rpi.edu
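
For the record, the general shape of a working multi-subnet reverse setup looks like the sketch below (illustrative only: hostnames, addresses, and file names are invented, and BIND details varied between releases). The pattern is one reverse zone per subnet under in-addr.arpa, octets reversed, plus a matching primary line in the boot file for each:

```
; named.boot (one primary line per subnet's reverse zone)
;   primary  <zone>                  <file>
primary   1.42.128.in-addr.arpa    db.128.42.1.rev
primary   2.42.128.in-addr.arpa    db.128.42.2.rev

; db.128.42.1.rev -- reverse zone for subnet 128.42.1.0
@   IN  SOA  ns.example.edu. hostmaster.ns.example.edu. (
             1       ; serial
             3600    ; refresh
             300     ; retry
             3600000 ; expire
             3600 )  ; minimum
    IN  NS   ns.example.edu.
5   IN  PTR  hosta.example.edu.
6   IN  PTR  hostb.example.edu.
```

With the origin set by the zone name, the relative name "5" above maps 128.42.1.5 back to hosta.example.edu.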

-----------[000171][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 11:55:33 EDT
From:      yamo@AMES-NAS.ARPA (Michael J. Yamasaki)
To:        comp.protocols.tcp-ip
Subject:   Re: Re: IP Datagram sizes

Greetings.

> From: Hans-Werner Braun <hwb@MCR.UMICH.EDU>
> 
> What are we really talking about? Most of what we are discussing implies
> the difference between 576 bytes and 1500 bytes, i.e., the maximum record
> size on an Ethernet. But 1500 bytes is less than three times 576 bytes.
> In the longer run, i.e., a very few years, what we REALLY need are much
> larger packets than 1500 bytes. This will become imperative with the
> expected appearance of very high speed networks. I cannot help
> thinking that a reasonable thing to do for today, supposing you want to
> reach other than your local net, is to stick with the 576 byte
> limit (a limit that is spelled out all over the place) and rather design
> future networks which allow at least 20K or 40K packets on very high speed
> networks which might run at multiple hundreds of megabits per second or
> higher. Even if the local speeds are much lower than this, there could be
> [...]

Uh, gee, I was really appreciative that this issue (IP MSS negotiation) was 
brought up because at this very moment I've been grappling with the problem 
of high speed file transfer over a NSC HYPERchannel network.  In the short 
term I developed a simple ACK-NAK protocol so that I could transfer in 56K 
blocks (Why 56K is a long story. Why "blocks" instead of "packets" is that in 
the HYPERchannel world "packets" is not a useful term.).

It just seems too boggling to tackle the issues associated with the MSS
problem that we face here at NASA/Ames all at once, which would be required
to use our normal vehicle for file transfer, namely TCP/IP.  Our MSS for
the HYPERchannel network is 4K data + the HYPERchannel header.  This stresses
the buffer management schemes of our 4.2, 4.3 and TWG/SV versions quite
well (can you say crash and burn when too many rcp's happen at once ;-).
We have drivers on some of our machines (Cray 2, Amdahl 5840, SGI IRIS)
which can handle greater than 4K data (up to 64K).  Consequently, our local
net could conceivably have quite a range of MSSs.  In addition, all of our
local hosts with the exception of the Cray 2 have Ethernet connections and
plans are in the near term to experiment with a token ring net and FDDI
as soon as... Add Vitalinks, ARPAnet, gateways... 

Anyway, I just wanted to say that this is not a solution in search of a 
problem.  Selecting 576 for a gateway between ethernet and HYPERchannel is
a losing proposition.  Add the additional wrinkle that the host rather than
the network chooses the maximum size it can accept (HYPERchannel has no
theoretical upper bound on segment size although mileage may differ...).
An end to end protocol for MSS negotiation seems very appropriate.

                                                   -Yamo-

Thanks, Art, for bringing up an important issue.

-----------[000172][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26 May 87 14:42:06 -0400
From:      seisner@CC5.BBN.COM
To:        Stephen Wolff <steve@BRL.ARPA>
Cc:        Kodinsky@MIT-MULTICS.ARPA, Eric Mosel <emosel@NOTE.NSF.GOV>, tcp-ip@SRI-NIC.ARPA, seisner@CC5.BBN.COM
Subject:   Re: A novice's comming out party

You might also try "Data Networks" by Bertsekas and Gallager, which just
came out in the past few months.

Sharon
-----------[000173][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 12:23:08 EDT
From:      jas@MONK.PROTEON.COM (John A. Shriver)
To:        comp.protocols.tcp-ip
Subject:   packet sizes in TCP and UDP

The reason to send packets larger than 576 bytes is simple.  If you
don't have congestion on the media, doubling packet size almost
doubles end-to-end performance.  This particularly helps in groups of
LANs.  (While the ARPANET can handle larger packets, I'm not sure that
the PSN software gives you more speed for them.)  Both TCP
implementations and routers like to see packets stay large.

I like the concept of using the Don't Fragment bit.  The ICMP message
will give us the IP header of the offending packet, so we can identify
the TCP connection and the offending IP length.  Moreover, the ICMP
message lands at the originator.  It even copes with asymmetric paths.

Rather than keep timers down at the data link level, it might be
easier to just send a TCP segment with the full MSS offered by the
other end every minute (with Don't Fragment), and see if it gets
through in one piece.

My comment on ARPANET brings up an interesting point.  It may not
always be the case that larger IP packets are faster.  You may have
some VC link that suffers resource starvation when you do this.  Or
you might have a problem like SATNET, where one reason for small
packets is to dodge the bit error rate.

Just so folks know, what Sun does about deciding how big NFS packets
should be is to have an argument in the mount table specifying the UDP
packet size to use.  If none is specified, it's 8KB, which requires
routers to reliably pass 6 nearly back-to-back Ethernet packets.
Generally, you tweak the size to fit in one (maybe 2) LAN packet(s) if
you're NFS mounting across a router.  (You also have to do this if a
Multibus Sun-2 with a 3Com board is NFS-ing with a Sun-3, since the
3Com board is slow.)  This all has to do with the IP unique-ID problem
in reassembling IP packets.
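
The 8KB figure can be checked with a little fragmentation arithmetic (a sketch; the 20-byte IP header and 8-byte UDP header are the standard minimum sizes, everything else here is my own):

```python
IP_HDR = 20   # minimum IPv4 header
UDP_HDR = 8

def fragment_data_lengths(ip_payload_len, mtu):
    """Data bytes carried by each fragment of one IP datagram.  All
    fragments but the last must carry a multiple of 8 data bytes,
    because the fragment offset field counts in 8-byte units."""
    per_frag = (mtu - IP_HDR) // 8 * 8    # 1480 on a 1500-byte Ethernet
    lengths = []
    remaining = ip_payload_len
    while remaining > 0:
        take = min(per_frag, remaining)
        lengths.append(take)
        remaining -= take
    return lengths

# An 8KB NFS request: 8192 data bytes + UDP header = 8200-byte IP payload.
frags = fragment_data_lengths(8192 + UDP_HDR, 1500)
```

This yields six fragments (five carrying 1480 bytes and one carrying 800), and the loss of any single fragment forces the whole 8KB datagram to be resent.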

-----------[000174][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 12:45:12 EDT
From:      braden@ISI.EDU (Bob Braden)
To:        comp.protocols.tcp-ip
Subject:   Re: IP Datagram sizes

HWB:

Your comments are generally right on the mark, especially about the need
to dramatically increase packet sizes for high-speed nets in the
future.  However, I think that in the short term one cannot
always ignore the performance difference between 576 and 1006 byte MTU's
over typical WAN's.  [By the way, this question never occurred to me
before... what is the MTU of the NSFnet backbone? How about the regional
networks?] 

You suggest sticking to 576 byte packets. A better strategy may be to
adopt a larger MTU (say, 1500) and let fragmentation fall where it will.
Suppose you use 1500 across a path which has the ARPANET in the middle...
then each FTP/SMTP packet will be split into 1000 and 500 byte pieces,
for an average of 750 bytes per packet.  That is 75% efficiency, good
enough in many cases.  If a particular host has a high percentage of its
traffic across a WAN with a 1006-byte MTU, the host administrator can
adjust the effective MTU parameter of the interface down to 1006 to get
that last 25%.  A host needs instrumentation on its IP layer to detect and
report a situation of particularly bad statistics. Also, someone should
remind us which will beat up ARPANET/MILNET more... 576 packets, or
(1000,500) byte pairs. 
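
The arithmetic above can be written out as a quick sketch, using the same rounded numbers as the text (the 1006-byte ARPANET MTU taken as 1000, header overhead ignored):

```python
import math

def fragmentation_efficiency(datagram_len, bottleneck_mtu):
    """Average bytes per packet after fragmentation, relative to the
    bottleneck MTU (header overhead ignored, as in the text)."""
    n_fragments = math.ceil(datagram_len / bottleneck_mtu)
    average = datagram_len / n_fragments
    return average / bottleneck_mtu

# 1500-byte datagrams over a (rounded) 1000-byte bottleneck: two pieces
# of 1000 and 500 bytes, average 750 bytes per packet, i.e. 75%.
```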

   Why fuss about fragmentation into small packets, in a community
   that practices single-character echoplex terminal interactions??

So, how do we go towards 20K packets?  What LAN technology will we need
to get there?  What will this imply for host interfaces?  How can we take
care of hosts that have not been converted to big buffers?  It seems that
when some parts of the Internet take very big packets while other parts
still take miserable little ones, it will be absolutely necessary for a
host to be able to learn the properties of a path it is using.  Yes,
Vint, paths do change dynamically, but as a practical matter they don't
change that fast, and we are probably willing to take a performance hit
if the new path makes our MTU choice suboptimal. 

Bob Braden

-----------[000175][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 13:09:04 EDT
From:      dpk@BRL.ARPA (Doug Kingston)
To:        comp.protocols.tcp-ip
Subject:   Re:  IP Datagram sizes

Bob,
	The main argument against fragmenting is that when
the loss rate goes up, which it has lately under conditions
of heavy congestion, the total throughput drops dramatically
since the loss of any one fragment can kill a set of packets.

-Doug-

-----------[000176][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 13:15:49 EDT
From:      Mills@UDEL.EDU
To:        comp.protocols.tcp-ip
Subject:   Re:  IP Datagram sizes

Bob,

The NSFNET fuzzballs presently use an MTU of 576. With the latest fuzzware
the MTU can be set higher, while still using the buffer pool efficiently.
I chose 576 partly in defense of the buffer pool (tinygrams still sat
in a full packet buffer, but not any more) and partly to keep delays
small. In a rash moment, I've even thought about dynamic fragmentation
with preemption - a 20K monstergram getting sliced when a high-priority
tinygram arrives. For two dollars, I bet you don't remember where and
when that suggestion first came up (hint: it was at an overseas meeting).
The fuzz now have the hooks for schedule-to-deadline service. Thoughts
of stream-type service are prancing through my disheveled mind, but I
need to sweep out some other junk first.

Dave

-----------[000177][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 13:20:08 EDT
From:      STAHL@SRI-NIC.ARPA (Mary Stahl)
To:        comp.protocols.tcp-ip
Subject:   Correction to homing table message

I made a typo in my previous message about the mailbridge homing tables.

The correct filenames are:

  NETINFO:ARPA-MAILBRIDGE-HOMINGS.TXT

  and

  NETINFO:MIL-MAILBRIDGE-HOMINGS.TXT


- Mary
-------

-----------[000178][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 13:25:04 EDT
From:      PERRY@VAX.DARPA.MIL (Dennis G. Perry)
To:        comp.protocols.tcp-ip
Subject:   Re: Market Survey

Clayton,

Although I appreciate your enthusiasm in developing an item that may
be of interest to the research community, I must point out to you that
this particular activity, even though not for profit, is somewhat out
of line with Arpanet policy.  May I please point all of you to the
DDN publication NIC 50003, "Arpanet Information Brochure" and the
statement on page seven (7):

Users of Arpanet may only use the network to conduct the official business
for which their access was authorized.  They may not violate privacy or
any other applicable laws, and must not use the network for private gain
or for commercial purposes, such as advertising or recruiting.

Please don't push me on the interpretation of this mild rule, the 
coefficient of elasticity is high, but the energy put into the push
may find a way of returning to its source.

Dennis G. Perry
Program Manager of the Arpanet
Information Science and Technology Office
DARPA
-------

-----------[000179][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 14:08:01 EDT
From:      geof@apolling.UUCP.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Re: IP Datagram sizes


 >      Ideally, one would look for a "type of service" routing capability which
 >      could avoid fragmentation - rather than having to construct a path by
 >      trial and error .... 

True, although you'd need more than one bit to describe the service you want.
The desired negotiation (repeated constantly, since routes may change) is:

        tcp: I want to use N bytes per packet on this connection
        network: I can send M bytes, M<N, without fragmenting
        tcp: OK, then I'll use M bytes per packet.

If there were an IP "MTU" option that is filled in by BOTH tcp's for EVERY
packet, and modified to the min(myMTU, packetMTU) by each gateway, the problem
would be solved, since the local network layer could cache M on a per-host basis.
Hmm... I guess that you could have each TCP generate the option only when it
saw packets that were fragmented (although you wouldn't ever find out that the
MTU has increased).

I wonder how that unbends the gateways?
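
The proposed option amounts to folding min() along the path; a toy sketch (the option itself is hypothetical, so all names here are invented):

```python
def carry_mtu_option(sender_mtu, gateway_mtus):
    """Each gateway lowers the option to min(option, its outgoing MTU),
    so the value delivered is the path bottleneck the sender may use."""
    option = sender_mtu
    for gateway_mtu in gateway_mtus:
        option = min(option, gateway_mtu)
    return option
```

Echoed back by the receiving TCP, this gives each end M = the minimum over the path, which the local network layer could cache per host as described; because it is recomputed on every packet, an MTU decrease shows up immediately, though an increase would still go unnoticed unless the option is always sent.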

- Geof

-----------[000180][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 14:39:57 EDT
From:      elwell@OHIO-STATE.ARPA (Clayton M. Elwell)
To:        comp.protocols.tcp-ip
Subject:   (none)

Sigh.  It seems a little clarification is in order here.

	Although I appreciate your enthusiasm in developing an item that may
	be of interest to the research community, I must point out to you that
	this particular activity, even though not for profit, is somewhat out
	of line with Arpanet policy.  May I please point all of you to the
	DDN publication NIC 50003, "Arpanet Information Brochure" and the
	statement on page seven (7):
	
	Users of Arpanet may only use the network to conduct the official business
	for which their access was authorized.  They may not violate privacy or
	any other applicable laws, and must not use the network for private gain
	or for commercial purposes, such as advertising or recruiting.

My message was posted to the Usenet newsgroup "comp.protocols.tcp-ip."
At the time I was not aware that it was automatically gatewayed to the
Arpanet.  If this violated any DDN policy, in letter or in spirit, it
was certainly unintentional. 

Even on Usenet, advertising is frowned upon.  This message was not an
offer to produce or sell any item, merely an attempt to see if anyone
else might be interested.  Keeping a sense of humor alive about your
work is certainly part of the Arpanet tradition (cf. the Hacker's
Dictionary and other such materials available via anonymous FTP all
over the internet...).  I do not see how the paragraph I posted could
have been construed as trying to "bend the rules," especially when
compared with the many commercially oriented ARPA mailing lists.
	
	Please don't push me on the interpretation of this mild rule; the 
	coefficient of elasticity is high, but the energy put into the push
	may find a way of returning to its source.

I'm not in the T-shirt business :-).  I'm a systems programmer for a
major university currently in the throes of some knotty networking
problems regarding the Arpanet and TCP/IP in general.  Sometimes you
have to laugh about it.  The TCP/IP newsgroup on Usenet seemed to be
the most likely place to find some people that would get the joke.

	Dennis G. Perry
	Program Manager of the Arpanet
	Information Science and Technology Office
	DARPA
	-------

I apologize for any inconvenience or perceived violation.  This is always
a risk, though, especially when a private system is connected to a
(relatively) public one.  Perhaps the Usenet gateway should be moderated.


--Clayton Elwell
  The Ohio State University
  Department of Computer and Information Science
  Research Computing Facility

-----------[000181][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 14:41:03 EDT
From:      cetron@CS.UTAH.EDU (Edward J Cetron)
To:        comp.protocols.tcp-ip
Subject:   Re: Market Survey


	i'd rather see an ether-bunny.....

-ed

-----------[000182][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 15:04:03 EDT
From:      seisner@CC5.BBN.COM
To:        comp.protocols.tcp-ip
Subject:   Re: A novice's comming out party


You might also try "Data Networks" by Bertsekas and Gallager, which just
came out in the past few months.

Sharon

-----------[000183][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 16:02:17 EDT
From:      PERRY@VAX.DARPA.MIL (Dennis G. Perry)
To:        comp.protocols.tcp-ip
Subject:   (none)

Clayton, thanks for the note.  I am especially sensitive right now
about issues like this, having just spent a considerable amount of
time with the DoD Inspector General sorting out a similar problem, although
of a different content.

It seems that the policy for the Arpanet is not readily stretched into
the Internet.  In fact, Vint Cerf is looking into this area for the
Autonomous Networks Task Force of the IAB.

I apologize if I seemed heavy handed.  I really do ignore most seeming
violations in the hope that no one will get upset with a little humor
now and then.  Also, as you pointed out, you did not purposefully
send it into the Arpanet.

dennis
-------

-----------[000184][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 18:09:22 EDT
From:      BUDDENBERGRA@A.ISI.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Datagram sizes

The discussion seems close to an observation I've run across.
Packet size and datagram size have historically been a function
of buffer size, since that was the bottleneck.  It seems that
the controlling factor in packet size is noise.  Shorter packets
are more noise resistant, all other controlling parameters constant.

Rex Buddenberg
-------
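
The noise point can be made quantitative.  Assuming independent bit errors
(a simplification), the chance a packet arrives intact falls off
exponentially with its length, so shorter packets really are more noise
resistant.  A quick illustrative sketch, with a purely hypothetical bit
error rate:

```python
# Probability that a packet of `nbytes` survives a link with bit error
# rate `ber`, assuming independent bit errors (a simplification).
def survival(nbytes, ber):
    return (1.0 - ber) ** (8 * nbytes)

# At a BER of 1e-5 (a made-up figure for a noisy line), a 200-byte
# packet gets through about 98% of the time, a 1500-byte packet
# only about 89% of the time.
print(survival(200, 1e-5))   # ~0.984
print(survival(1500, 1e-5))  # ~0.887
```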

-----------[000186][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 19:05:59 EDT
From:      NJG@CORNELLA.BITNET.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re:  IP Datagram sizes

It would be interesting to know how common
fragmentation is. Is there some size (greater than 576 or not) that
will TYPICALLY not be fragmented? Has anyone measured this? Are there
known common gateways, IMPs, etc that have limits smaller than the
typical 1500 or so byte ethernet limit?

-----------[000188][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 19:16:32 EDT
From:      karels%okeeffe@UCBVAX.BERKELEY.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: IP Datagram sizes

There are a few other points in this discussion that haven't been made.

One is that looking at IP fragmentation in received packets
doesn't work for one-way connections (FTP) or connections with
asymmetrical data flow (Telnet, most everything else).

As Doug pointed out, using large packets and depending on IP fragmentation
loses badly when the network becomes lossy.  I've always resisted
paying for 2 different fragmentation/reassembly mechanisms at once,
and only the TCP level allows acknowledgement (and progress) when only
part of the data gets through.  Also, under various circumstances,
IP datagrams may be fragmented more than once, resulting in lots
of packets of varying sizes, lots of them tiny.  When this happens,
hardware or software limitations are likely to cause some of these
small packets just after larger ones to be lost.  (One such bug
in the ACC LH/DH and the 4.2BSD driver caused 1024-byte packets
to be lost with high probability because of fragmentation.)

There were a few comments about use of larger sizes on the local
network(s).  4.3BSD's algorithm is to use a large size under the MTU
of the outgoing network interface if the destination is on the same
logical network (another subnet of the same network is considered local).
This assumes that the MTU on any segment of the network is not unreasonably
bad for other parts of the network.

If packet size for a path is determined and propagated by the routing
protocols, that wouldn't help hosts that don't listen to the routing
protocols.

I would very much like to see options for determining the min MTU,
min throughput and the hopcount of a path.  In order to return the
information to the sender, this would have to be done as an ICMP
message that is reflected and returned by the destination, or hosts/
gateways would have to use the convention of preserving such IP options
when echoing ICMP messages.  If this was an ICMP message, it could
be defined so that each gateway replaced the IP destination address
with the IP address of the next-hop gateway so that gateways need not
examine each datagram forwarded.  The original IP address would be
stored in the ICMP part of the packet or in an IP option.

		Mike

-----------[000189][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 20:45:11 EDT
From:      narten@PURDUE.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: IP Datagram sizes

It has already been pointed out that MSS negotiation can only be done
at connection set up time if one goes by the spec. One still can
dynamically adjust the MSS though, provided that the negotiated values
are high at the outset. Just because the negotiated MSS is large,
doesn't mean that segments of that size have to be sent.

In other words, negotiate a value that is too large rather than too
small, and use as large a value as gets through without fragmenting
(without exceeding the negotiated value of course). Such a scheme is
also compatible with what is currently implemented. Old TCP
implementations will negotiate small MSS's. 

Thomas

-----------[000190][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 20:47:11 EDT
From:      hwb@MCR.UMICH.EDU (Hans-Werner Braun)
To:        comp.protocols.tcp-ip
Subject:   Re: Re: IP Datagram sizes

Gosh. It really looks as if you are interpreting me wrong. I am very much
in favor of longer packets. What I tried to describe, but maybe I wasn't
clear enough, was that a solution will be intrusive for the hosts. The
simple scheme originally discussed just won't work. Nobody keeps you from
sending fragments for now and especially nobody is keeping you from doing
within your local environment whatever you please. If your Cray-2 sends
a graphics object stream to an IRIS you really don't want to put current
generation gateways into the middle anyway. In particular not if you 
think along the lines of the speed SGI is considering for the fairly 
near future. You even make a case for what I was saying, namely that we
need to develop link/physical level devices which do much better than 
the current 1500 bytes in use on Ethernets.

	-- Hans-Werner

-----------[000191][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 20:53:40 EDT
From:      steve@BRL.ARPA.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re:  IP Datagram sizes

The NSFNET Management solicitation specifies MTU=1500 for the backbone.  -s

-----------[000192][next][prev][last][first]----------------------------------------------------
Date:      Tue, 26-May-87 21:21:04 EDT
From:      bob@osu-eddie.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Market Survey

In article <8705261841.AA10935@cs.utah.edu> cetron@cs.utah.edu (Edward J Cetron) writes:
>	i'd rather see an ether-bunny.....

We have already developed the design for an ether-noose, used for
chasing the hardware people who sometimes decide to tap the backbone
during afternoon prime time.  It's not-for-profit (just like Clayton's
T-shirts), and can be made of materials near at hand by people with
very little artistic ability (like me, unlike Clayton).  Just grab a
half-wavelength of thick Ether cable, bend into the appropriate shape,
and hang from a bookshelf near your RFC collection so it's convenient.
Wave threateningly when all the Suns in the office lose carrier
simultaneously.
-=-
 Bob Sutterfield, Department of Computer and Information Science
 The Ohio State University; 2036 Neil Ave. Columbus OH USA 43210-1277
 bob@ohio-state.{arpa,csnet} or ...!cbosgd!osu-eddie!bob
 soon: bob@aargh.cis.ohio-state.edu

-----------[000193][next][prev][last][first]----------------------------------------------------
Date:      Wed, 27-May-87 06:21:19 EDT
From:      swb@DEVVAX.TN.CORNELL.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   tn3270 availability

Greg, have you considered making it work on an Encore Annex or other
terminal server?

-----------[000194][next][prev][last][first]----------------------------------------------------
Date:      Wed, 27-May-87 10:24:28 EDT
From:      sia@itech1.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Where to get TCP/IP-Netbioses and VT100-Telnet


A rather complete TCP/IP package for IBM PC/XT/AT's that includes
a VT100 Telnet is available from:

FTP Software, Inc.
P.O. BOX 150
Kendall Square Branch
Boston, MA 02142
(617) 864-1711

The stuff for the VAX running under VMS is available from:

The Wollongong Group, Inc.
1129 San Antonio Road
P.O. Box 51860
Palo Alto, CA 94303
(415)962-7100

Wollongong also has a TCP/IP for the PC.

Another source for these packages would be:

Network Research Corp.
2380 North Rose Avenue
Oxnard, CA 93030
(805)485-2700


********************************************************************************
Stewart I. Alpert @ Ingram Software, Inc.  Department of Technical Services
UUCP:	    {ihnp4,sco,mwc,canisius}!itech1!sia
            {allegra,decvax,watmath}!sunybcs!canisius!itech1!sia
ARPA/CSNET: itech1!sia%canisius@csnet-relay.ARPA
AT&T:       1-716-874-1874 or 1-800-828-7250 or 1-800-464-8488 (in NYS)
USnail:     2128 Elmwood Avenue, Buffalo, NY 14215

-----------[000195][next][prev][last][first]----------------------------------------------------
Date:      Wed, 27-May-87 11:21:52 EDT
From:      kjd@rust.DEC.COM (Kevin J. Dunlap - DECwest Engineering)
To:        comp.protocols.tcp-ip
Subject:   Re: .rev files and gethostbyaddr


 
>>
>>	The problem seems to be in the .rev files for named.  We have
>>quite a few subnets, and had one file for each, and it (gethostbyaddr)
>>was working fine but the configuration would crash the nameserver...
>>The documentation I have is of no use.  Does anyone have a WORKING set
>>of .rev files (and the boot file) for multiple subnets that I could see?
>>
>>						-Chris Welty
>>						 weltyc@cs.rpi.edu
 
This question belongs on the bind@ucbarpa.Berkeley.EDU mailing list, not TCP/IP,
so any further discussion should be continued there.  To subscribe,
send to bind-request@ucbarpa.berkeley.edu.
 
The examples in the BIND distribution (anonymous ftp from ucbarpa pub/bind.tar)
are working examples.  Also, the examples in "Name Server Operations Guide
for BIND" are working examples.  Have you read this document found
in the 4.3BSD System Manager's Manual?  You must have missed section 4.1
page SMM:11-3
 
-Kevin Dunlap

-----------[000196][next][prev][last][first]----------------------------------------------------
Date:      Wed, 27 May 1987 17:03:40 LCL
From:      John M. Wobus <JMWOBUS%SUVM.BITNET@wiscvm.wisc.edu>
To:        TCP-IP Discussion Group <TCP-IP@SRI-NIC.ARPA>
Subject:   Making TCP avoid fragmentation to help performance.
Here are two principles:

(1) The two directions of the TCP connection should be handled independently,
    because fragmentation may be different in the two directions.

(2) Though the degree of fragmentation may change over time, we can assume
    that recent history is a pretty good predictor of the near future.

I suggest that the "receiving TCP" be given an indication that a packet
was fragmented and that it indicate this fact in its acknowledgement.
When the "sending TCP" gets the acknowledgement, it can try sending in a
smaller size packet.  Then, once in a while, the "sending TCP" might try
increasing the packet size to "test the waters".

This is something like the way TCP handles RTTs when deciding when it is time
to resend.  These two problems seem similar to me.  I presume the most
radical part of this proposal is to add a 1-bit field to the TCP header,
something one wouldn't want to do every day.

John Wobus
Syracuse University
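
The sending side of the scheme described above might be sketched as
follows.  All names and constants here are invented for illustration,
and the one-bit "was fragmented" signal is assumed to arrive with each
acknowledgement:

```python
# Sketch of the sender-side half of the proposal: shrink the segment
# size when an ACK reports fragmentation, and occasionally probe a
# larger size to "test the waters".  Names and constants are made up.
class AdaptiveSender:
    def __init__(self, mss, floor=576, probe_interval=100):
        self.mss = mss                  # negotiated ceiling
        self.size = mss                 # current send size
        self.floor = floor              # guaranteed-deliverable minimum
        self.probe_interval = probe_interval
        self.acks = 0

    def on_ack(self, was_fragmented):
        self.acks += 1
        if was_fragmented:
            # back off, but never below the guaranteed minimum
            self.size = max(self.floor, self.size // 2)
        elif self.acks % self.probe_interval == 0:
            # periodic probe upward, never beyond the negotiated MSS
            self.size = min(self.mss, self.size * 2)

s = AdaptiveSender(mss=4096)
s.on_ack(was_fragmented=True)    # size drops to 2048
```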
-----------[000197][next][prev][last][first]----------------------------------------------------
Date:      Wed, 27-May-87 16:32:19 EDT
From:      alan@mn-at1.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Re: IP Datagram sizes

In article <8705261555.AA14106@ames-nas.arpa> yamo@AMES-NAS.ARPA (Michael J. Yamasaki) writes:
>
>Selecting 576 for a gateway between ethernet and HYPERchannel is
>a losing proposition. 

Yes.  In 1985, we did some benchmarks to determine the impact of running
screen editors on the Cray-2.   The worry was that CPU overhead for context
switching would be prohibitively expensive. 

It turned out that while the CPU impact was acceptable (8% usage of 1
CPU for 32 users running heavy simulated vi), the Hyperchannel was com-
pletely saturated.   It turns out that a Hyperchannel is limited to ap-
proximately 300 blocks/sec.  This is independent of block size.

By plotting block size against bandwidth, you get a curve that flattens out to
around 8 megabits/sec for a block size of 64-128 kilobits, but it nudges 11
megabits/sec for block sizes larger than 512 kilobits.  (Not coincidentally,
this is the block size we chose for the Extended File System.)

These are special conditions (dedicated LAN, no gateway, low-lossage),
obviously, but it would be nice if future protocols could be designed
not to limit packet sizes to 64K just because of a 16 bit field width.

(Shades of Intel :->)


--
Alan Klietz
Minnesota Supercomputer Center (*)
1200 Washington Avenue South
Minneapolis, MN  55415    UUCP:  ..rutgers!meccts!quest!mn-at1!alan
Ph: +1 612 626 1836              ..ihnp4!dicome!mn-at1!alan
                          ARPA:  alan@uc.msc.umn.edu  (was umn-rei-uc.arpa)

(*) An affiliate of the University of Minnesota

-----------[000199][next][prev][last][first]----------------------------------------------------
Date:      Wed, 27-May-87 18:06:58 EDT
From:      cpw%sneezy@LANL.GOV.UUCP
To:        comp.protocols.tcp-ip
Subject:   Reassembly queues in TCP

Our MILNET Gateway is a VAX-11/750 running Berkeley 4.3 UNIX. Since installing 
4.3 we have experienced a number of crashes of the type:

	panic: out of mbufs: map full

After much searching and accounting for all the message buffers (mbufs), I
determined that this 'panic', in 2 cases, was precipitated by an uncontrolled
stream of 1 byte TCP packets emanating from (in panic #1) a MILNET host 2500
(plus or minus a thousand) miles away.  The actual configurations were:

	panic #1: (VMS login to NRL )             ( a telnet from NRL )
			 -------       -----------     ------     ----
	          tty---|UB RF?	|     |VAX-11/780 |---|MILNET|   |LANL|
			|connect|---tt|VMS Woll...|   |      |---|    |
			 -------       -----------     ------     ----

	panic #2: (VMS login to AFWL )            ( a telnet from AFWL)
			 -------       -----------     ------     ----
	          ibm---|UB ??	|     |VAX-11/780 |---|MILNET|   |LANL|
		  clone	|connect|---tt|VMS Woll...|   |      |---|    |
			 -------       -----------     ------     ----

Interestingly, the host at NRL crashed at the same time our host did.
I did not check out the AFWL case completely.  I would like to know
why these 1 byte packets are coming our way.  In both panics, each byte was
a hexadecimal '0d'.  Could that stand for 0VERdOSE???

The crash results when a tcp sender begins to pump beaucoup packets out
on the net in assending (sic) sequence each 1 character in length, the network
conveniently loses some small number of them early on, and the receiver's window
is greater than the total number of mbufs available.  The tcp input routine
appends these 1 character mbufs on a reassembly queue hoping for the
retransmission of lost packets so that it can make the list available to a
receiving application.  Needless to say, the offending system never sends
the missing pieces, the kernel links whatever remaining system message
buffers there are on the reassembly queue, and finally, some other process
asks for an mbuf with the M_WAIT option, and 4.3 panics in m_clalloc.

I call this the 'silly assembly syndrome'.

I have posted a fix to unix-wizards for the 4.3BSD UNIX folks, which prevents
the receiver from crashing.  However, it is most likely a problem in other TCP
implementations.  It appears to be something the AFWL and NRL, not to mention
other installations, might like fixed.

Any comments?

-Phil Wood (cpw@lanl.gov)
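
The defensive idea behind such a fix can be sketched as bounding what any
one reassembly queue may hold, so a flood of tiny out-of-order segments
cannot exhaust the buffer pool.  The sketch below uses invented names and
an invented limit; it is not the actual posted 4.3BSD patch:

```python
# Sketch of a bounded TCP reassembly queue: out-of-order segments are
# queued awaiting the missing piece, but only up to `max_segments`.
# Once the bound is hit, further segments are dropped and the peer
# must retransmit -- no buffer exhaustion, no panic.
class ReassemblyQueue:
    def __init__(self, max_segments=64):
        self.max_segments = max_segments
        self.segments = {}          # seq -> data, awaiting reassembly

    def insert(self, seq, data):
        if len(self.segments) >= self.max_segments:
            return False            # drop; the sender will retransmit
        self.segments[seq] = data
        return True

q = ReassemblyQueue(max_segments=2)
q.insert(1, b"\r")   # queued
q.insert(2, b"\r")   # queued
q.insert(3, b"\r")   # dropped -- queue is full
```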

-----------[000200][next][prev][last][first]----------------------------------------------------
Date:      Wed, 27-May-87 22:26:49 EDT
From:      bzs@BU-CS.BU.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Need "official" top-level Domain List


Is there any definitive top-level domain list (EDU, GOV, MIL etc)?
I looked in RFC990 (Assigned Numbers) and a few other places to
no avail.

If someone even has an unofficial but reasonably authoritative list
I'd appreciate that.

Specific question: Fill in the blank:

	.EDU	Educational
	.GOV	Governmental
	.NET	(Network Gateway?) <-the blank

	-Barry Shein, Boston University

-----------[000201][next][prev][last][first]----------------------------------------------------
Date:      Wed, 27-May-87 22:53:57 EDT
From:      STAHL@SRI-NIC.ARPA.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Need "official" top-level Domain List

There is a list of all the initial top level domains in RFC 920;
however, others have been added since that RFC was published.  The NIC
is in the process of writing up guidelines for administrators who want
to establish a domain, and that document will contain descriptions of
all the top level domains established so far.  For a list only (no
description) of registered domains, FTP the file
NETINFO:DOMAIN-INFO.TXT from SRI-NIC.ARPA.  It contains names of all
the top and second level domains registered with the NIC.

- Mary
-------

-----------[000202][next][prev][last][first]----------------------------------------------------
Date:      Wed, 27-May-87 23:37:49 EDT
From:      BRUCE@UMDD.BITNET (Bruce Crabill)
To:        comp.protocols.tcp-ip
Subject:   Trailers

I'm confused.  RFC-893 (Trailer Encapsulations) indicates that the
header length field in the trailer is:

    "The header length field of the trailer data segment.  This
     specifies the length in bytes of the following header data."

However, it appears that in reality 4.3 machines take the header
length to be the total trailer length (the headers plus the Type and
Header Length fields).  Which is correct?

                                       Bruce

-----------[000204][next][prev][last][first]----------------------------------------------------
Date:      Thu, 28-May-87 08:25:00 EDT
From:      CERF@A.ISI.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Datagram sizes

Rex,

the noise argument, while true in some absolute sense, has some interesting
new twists owing to new transmission media and methods. Fiber and coax LANs
are quite low in noise and can support very large packets while achieving
quite satisfactory error rates. Forward error correction schemes are more
feasible now than ever owing to higher speed and larger memory VLSI chips.

As a result, the absolute packet size at which noise becomes a problem is
considerably larger, FOR SOME NETWORKS, than used to be the case.

Vint

-----------[000206][next][prev][last][first]----------------------------------------------------
Date:      Thu, 28 May 87 10:20:21 EDT
From:      jqj@gvax.cs.cornell.edu (J Q Johnson)
To:        arpa.tcp-ip
Subject:   Re:  IP Datagram sizes (actually latency vs thruput)
One issue I'd like to see getting more attention in the current discussion
of transaction-parameter negotiation is the latency/thruput tradeoff.  It's
clear that for an ftp-like application all the user really cares about is
thruput, but for the rapidly growing rpc-based style of interaction latency
becomes critical.  My client programs care about the total time between the
dispatch of an RPC request and the response; if I'm going to transfer a
lot of data I'm willing to negotiate a separate channel for that transfer.

No, this is not simply a UDP vs. TCP issue.  Under different circumstances
I might want to use either protocol to carry my RPC traffic, or for that
matter might want a special-purpose protocol tuned for my particular style
of RPC.  There are still some general optimization issues that the IP
community needs to address.

Interested readers should see the discussion of special- vs. general-
purpose networking protocols in the latest Transactions on Computer
Systems (TOCS).

-----------[000207][next][prev][last][first]----------------------------------------------------
Date:      Thu, 28-May-87 10:49:31 EDT
From:      jon@CS.UCL.AC.UK.UUCP
To:        comp.protocols.tcp-ip
Subject:   IP fragments and subnets


three questions

1. We've subnetted the UCL net a lot recently using cisco routers.
does 4.3 adjust its mtu down when crossing a subnet router
as well as when crossing a gateway?

2. has anyone had trouble with pyramids tcp going through
routers?

 we observe tcp hangups and connection failures when
attempting ftp/rcp bulk transfers from a Sun 3 to a pyramid
98x. Packet loss in cisco is about 1/3 of every large packet

(sun sending 2-3 1000 byte packets back to back, pyramid
advertising 8K window).

3. Has anyone got a distributed tool for invalidating the arp
caches on a pile of 4.2 systems (for when we move a proxy arp
router)?

jon
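
Lacking a packaged tool for question 3, one ad hoc approach is to run the
BSD arp delete command remotely on each host via rsh.  This is a sketch
only: it assumes trusted-host access and root, and merely constructs the
command lines rather than executing them:

```python
# Sketch: build the remote commands that would delete a given host's
# ARP entry on each machine in a pile of 4.2 systems.  `rsh` and the
# BSD `arp -d <hostname>` command are assumed available; nothing here
# actually runs them.
def arp_flush_commands(hosts, router):
    return ["rsh %s arp -d %s" % (h, router) for h in hosts]

for cmd in arp_flush_commands(["vax1", "vax2"], "gw"):
    print(cmd)
# rsh vax1 arp -d gw
# rsh vax2 arp -d gw
```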

-----------[000208][next][prev][last][first]----------------------------------------------------
Date:      28 May 87 15:08 PDT
From:      Tom Perrine <Perrine@LOGICON.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Cc:        Perrine@LOGICON.ARPA, john@LOGICON.ARPA
Subject:   Ethernet HW recommendations?
Message-Id: <8705281508.181.Perrine@LOGICON.ARPA>

I am investigating what it would take to install our *first* Ethernet.
I (think) I have the software questions answered, but now have to deal
with the hardware aspects.

We have the following systems:
	VAX-11/780 running 4.2BSD (no current plans to upgrade to 4.3)
	VAX 8250 running Ultrix (to be installed in July, Ultrix 2.0)
	PDP-11/70 currently running PWB! (This machine is our current Internet
		access, it will upgrade to 2.9BSD Real Soon Now)
	PDP-11/84 currently running SysVR2 (will upgrade to 2.9BSD if it
		is going to join the local net.)
	Several IBM ATs and AT clones (no current plans to add to net, but
		you never know)

The Vaxen are the primary focus.  We need to have these machines
talking together more than anything else.  We can wait for a while to
connect them to the 11/70 (at which point the Vaxen would become a
subnet behind the 11/70?).

The questions are:

1) What is the most reliable/cost-effective/cheapest ethernet device to
use in the 780 that is supported under 4.2BSD?  (The 8250 has an
integral (BI) ethernet controller, which Ultrix had better support!)

2) What is the most reliable/cost effective/cheapest ethernet device to
use in the 11/70, that is supported under 2.9BSD?

I don't care if the device drivers are in the BSD distributions or if
they come from a vendor, but I don't want to have to write them (or do
a lot of support work for them).

Thanks for any help you can offer.  I am especially interested in
personal opinions of the various vendors, etc.  *Please send MAIL*.  If
there is any interest, I will summarize to the net.

Tom Perrine
Logicon - Operating Systems Division	San Diego CA (619)455-1330 x725
"There is a special place in hell reserved for people who park in fire lanes."

-----------[000209][next][prev][last][first]----------------------------------------------------
Date:      Thu, 28-May-87 13:30:11 EDT
From:      Takano%THOR@hplabs.HP.COM (Haruka Takano)
To:        comp.protocols.tcp-ip
Subject:   Re: Reassembly queues in TCP

Could it have something to do with the Wollongong software?  We have
Wollongong software running on a Vax and some PC/AT clones here, and I've
noticed that occasionally, when talking to our Dec-20, the window
size will go down to 1 (sounds suspiciously like a version of the
silly window syndrome).  Has anybody else run into this problem?

--Haruka Takano (Takano@HPLABS.HP.COM)
-------

-----------[000210][next][prev][last][first]----------------------------------------------------
Date:      Thu, 28-May-87 13:37:20 EDT
From:      geof@apolling.UUCP (Geof Cooper)
To:        comp.protocols.tcp-ip
Subject:   Re: IP Datagram sizes


 >      In other words, negotiate a value that is too large rather than too
 >      small, and use as large a value as gets through without fragmenting
 >      (without exceeding the negotiated value of course). Such a scheme is
 >      also compatible with what is currently implemented. Old TCP
 >      implementations will negotiate small MSS's. 

Ummm, the sending TCP doesn't know that the packets are being fragmented.
The receiving TCP does.  As John Wobus states, you have to treat the
different directions differently.

If a TCP-level solution is really the way of choice (I don't believe it
is) then just allow the MSS option to exist on ANY packet.  Beyond the
first packet the interpretation is that it is an advisory value, relating
to fragmentation.  I think this is the smallest change to the TCP spec
to make things work.  It also should work fine with all existing
implementations, since the MSS option should just be ignored past the
first packet (there is probably some implementation out there who sends
mail to the system maintainer of the other system, flaming at him to
fix his TCP....).

- Geof

-----------[000211][next][prev][last][first]----------------------------------------------------
Date:      Thu, 28-May-87 16:33:17 EDT
From:      minshall@OPAL.BERKELEY.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: tn3270 availability

Scott,
	You asked whether I've thought about making tn3270 run
on an Encore Annex or other terminal server.  Not really.

	However, Phil Budne <budd@bu-cs.bu.edu> mentioned his
desire to port tn3270 to, specifically, the Encore.  I offered
him (in Monterey) access to the current development tn3270 (which
is easier to port than the distribution tn3270).  I don't know
if he has thought much more about it.

	There are problems, you understand.  The main problem is that
an Encore wants to have variables accessed as 'x[terminal_number]'
(or via pointer, or some such), and tn3270 isn't built in that way.
Also, one would like to be able to download 'map3270' (keyboard mapping)
tables.

	Anyway, you might want to check with Phil Budne.  Or, ask
your local, friendly vendor.

Greg

-----------[000212][next][prev][last][first]----------------------------------------------------
Date:      Thu, 28-May-87 18:08:00 EDT
From:      Perrine@LOGICON.ARPA.UUCP
To:        comp.protocols.tcp-ip
Subject:   (none)

Cc: Perrine@LOGICON.ARPA, john@LOGICON.ARPA

Subject: Ethernet HW recommendations?
Message-Id: <8705281508.181.Perrine@LOGICON.ARPA>

I am investigating what it would take to install our *first* Ethernet.
I (think) I have the software questions answered, but now have to deal
with the hardware aspects.

We have the following systems:
	VAX-11/780 running 4.2BSD (no current plans to upgrade to 4.3)
	VAX 8250 running Ultrix (to be installed in July, Ultrix 2.0)
	PDP-11/70 currently running PWB! (This machine is our current Internet
		access, it will upgrade to 2.9BSD Real Soon Now)
	PDP-11/84 currently running SysVR2 (will upgrade to 2.9BSD if it
		is going to join the local net.)
	Several IBM ATs and AT clones (no current plans to add to net, but
		you never know)

The Vaxen are the primary focus.  We need to have these machines
talking together more than anything else.  We can wait for a while to
connect them to the 11/70 (at which point the Vaxen would become a
subnet behind the 11/70?).

The questions are:

1) What is the most reliable/cost-effective/cheapest ethernet device to
use in the 780 that is supported under 4.2BSD?  (The 8250 has an
integral (BI) ethernet controller, which Ultrix had better support!)

2) What is the most reliable/cost effective/cheapest ethernet device to
use in the 11/70, that is supported under 2.9BSD?

I don't care if the device drivers are in the BSD distributions or if
they come from a vendor, but I don't want to have to write them (or do
a lot of support work for them).

Thanks for any help you can offer.  I am especially interested in
personal opinions of the various vendors, etc.  *Please send MAIL*.  If
there is any interest, I will summarize to the net.

Tom Perrine
Logicon - Operating Systems Division	San Diego CA (619)455-1330 x725
"There is a special place in hell reserved for people who park in fire lanes."

-----------[000213][next][prev][last][first]----------------------------------------------------
Date:      Thu, 28-May-87 18:50:49 EDT
From:      brunner@SPAM.ISTC.SRI.COM.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Need "official" top-level Domain List


------- Forwarded Message

Is there any definitive top-level domain list (EDU, GOV, MIL etc)?
I looked in RFC990 (Assigned Numbers) and a few other places to
no avail.

If someone even has an unofficial but reasonably authoritative list
I'd appreciate that.

Specific question: Fill in the blank:

	.EDU	Educational
	.GOV	Governmental
	.NET	(Network Gateway?) <-the blank

	-Barry Shein, Boston University

------- End of Forwarded Message

Barry, we are talking a bit internally about calling ourselves "sri.org",
due to our being "not-for-profit", like some of the below (from our host table).

Eric


10.3.0.7        rand-unix.arpa rand.org
192.5.14.33     rand-unix.arpa rand.org
10.1.0.111      gateway.mitre.org mitre-gateway.arpa
128.29.31.10    gateway.mitre.org mitre-gateway.arpa
10.2.0.111      mitre-lan.mitre.org mitre-lan.arpa
128.29.0.2      mitre-lan.mitre.org mitre-lan.arpa
26.2.0.65       aerospace.aero.org aerospace.arpa
128.29.73.1     bert.mitre.org bert.arpa
128.121.50.1    jvnca.csc.org
128.121.51.1    jvnc.csc.org
128.29.73.2     ernie.mitre.org ernie.arpa
128.109.139.2   rti.rti.org
128.121.50.2    jvncb.csc.org
128.109.130.3   mcnc.org
128.121.50.3    jvncc.csc.org
128.121.51.6    jvncf.csc.org
128.121.50.50   nexus.csc.org
128.121.50.51   pep.csc.org
128.121.50.200  colo.csc.org
192.12.54.33    arecibo.aero.org aerospace-sced.aero.org
192.5.14.100    sol.rand.org

-----------[000214][next][prev][last][first]----------------------------------------------------
Date:      Thu, 28-May-87 19:19:38 EDT
From:      gnu@hoptoad.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Market Survey

bob@tut.cis.ohio-state.edu (Bob Sutterfield) writes:
> We have already developed the design for an ether-noose, used for
> chasing the hardware people who sometimes decide to tap the backbone
> during afternoon prime time.

Now really, is this part of your War Department sanctioned research, Bob?
You know, We who Watch Over the net have had our eyes on trouble makers like
you for a long time, but because we're Nice Guys we haven't said anything
until now.  But Watch Out.  Your Funding may be Cut any day now, and the
local Army base has been notified in case they want to take Appropriate
Action.

Oh, you weren't on the Arpanet?  We didn't sponsor your Research?
Well, we'll still notify the Army, Just in Case.

Isn't it Just Like the Government to make rules that are impossible or
improbable to follow?  It's just Coincidence that Rulemakers like to
have a Plausible Reason for stepping on people who Do Things we don't like.
Remember what Happened to MIT-AI!  Your Power Supply may be Next!   :-)
-- 
Copyright 1987 John Gilmore; you may redistribute only if your recipients may.
(This is an effort to bend Stargate to work with Usenet, not against it.)
{sun,ptsfa,lll-crg,ihnp4,ucbvax}!hoptoad!gnu	       gnu@ingres.berkeley.edu

-----------[000215][next][prev][last][first]----------------------------------------------------
Date:      Thu, 28 May 87 15:40:23 GMT-0:00
From:      Jon Crowcroft <jon@Cs.Ucl.AC.UK>
To:        tcp-ip@sri-nic.arpa
Subject:   IP fragments and subnets

three questions

1. We've subnetted the UCL net a lot recently using cisco routers.
does 4.3 adjust its mtu down when crossing a subnet router
as well as when crossing a gateway?

2. has anyone had trouble with pyramids tcp going through
routers?

 we observe tcp hangups and connection failures when
attempting ftp/rcp bulk transfers from a Sun 3 to a pyramid
98x. Packet loss in cisco is about 1/3 of every large packet

(sun sending 2-3 1000 byte packets back to back, pyramid
advertising 8K window).

3. Has anyone got a distributed tool for invalidating the arp
caches on a pile of 4.2 systems (for when we move a proxy arp
router)?

jon
-----------[000216][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 01:02:30 EDT
From:      JBVB@AI.AI.MIT.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   TN3270

I inquired of a Network Solutions salesperson after one of their
presentations at the conference in Monterey this past March, and
he said that their DDN/MVS did not support 3270-mode telnet.

Regarding an RFC: I suppose I could document what I've found while
doing FTP's implementation, but I'm not a mainframe programmer, and
I got the feeling during an earlier discussion here that the present
state of affairs wasn't to everyone's satisfaction.  Should I (or
some one of the others working on TN3270) institutionalize what
we've got now?  If not, I'd be happy to be included in any e-mail
discussion of what *ought* to be done.  I definitely agree that the
issue needs some work, since the number of available implementations
has been growing.

jbvb@ai.ai.mit.edu
James B. VanBokkelen
FTP Software, Inc.

PS: I sent a request to be added to this list to "tcp-ip-request@sri-nic.arpa"
three weeks ago, and it appears I haven't been added.  Was that the wrong
place to send it?

-----------[000217][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 07:19:15 EDT
From:      hedrick@TOPAZ.RUTGERS.EDU (Charles Hedrick)
To:        comp.protocols.tcp-ip
Subject:   Re:  IP fragments and subnets

4.3 should use the full MTU when going through a subnet gateway.
There is a parameter SUBNETSARELOCAL in in.c that controls whether
in_localaddr says that other subnets are local or not.  This routine
is called by tcp_mss, which is used to compute the MSS for a
conversation.  However we have seen some really odd code in some
4.3-derived versions of tcp_mss, so you may want to see that your
particular kernels do the right thing.
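The decision Hedrick describes can be condensed into a few lines. This is a simplified sketch of the tcp_mss choice (names illustrative, not the 4.3BSD kernel source): MTU minus 40 bytes of IP+TCP header for "local" destinations, a conservative 512 for anything beyond a gateway.

```python
def choose_mss(if_mtu, dest_on_same_net, dest_on_same_subnet, subnets_are_local):
    """Sketch of the tcp_mss() decision: full MTU-derived MSS for local
    destinations, the small default (512) for peers behind a gateway."""
    local = dest_on_same_subnet or (dest_on_same_net and subnets_are_local)
    if local:
        return if_mtu - 40        # 20 IP + 20 TCP header bytes
    return 512                    # conservative default for non-local peers
```

With SUBNETSARELOCAL set, a peer on another subnet of the same network gets the full 1460 over Ethernet; without it, that peer is treated like any remote host.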

We know of no inherent problems with TCP through Cisco routers.  If
your problem is with Ethernet to Ethernet gateways, I have no
explanation.  There is a trivial bug in some versions of Cisco's
serial line support that would cause packet loss in gateways that use
serial lines to connect them.  I suggest contacting Cisco, as a fix is
available.  To see whether this is the problem, try SHOW INTERFACES and
see whether you are seeing a huge number of giants.  In general, Cisco
has good enough monitoring facilities that by using various options of
SHOW you should normally be able to see what is going on.  One
warning: NFS produces sequences of up to 6 packets that are spaced
very closely.  This is unfriendly to any gateway.  Some configurations
of Cisco gateways simply can't handle this at all.  You want to use
the read and write buffer size parameters in mount (or /etc/fstab) to
specify that writes should be no larger than 2 packets.  We use 2048,
but any number less than 2 * 1460 should probably work.  Pyramid does
not yet have these options.  If you are using a Pyramid as a client
across a gateway, please contact me and I'll give you the necessary
patch.  (Tell me whether you have source.  I'm not sure how to give
you a binary patch, but I'll try.)  It is fine to use a Pyramid as a
server across the gateway.  These very closely spaced packets do not
happen with TCP as far as I know.

I assume you know about the arp program?  You should rsh the
appropriate arp command to invalidate the arp tables.  Both Sun
and Pyramid have this program.
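The per-host invalidation Hedrick suggests is easy to script. A hypothetical dry-run sketch that only builds the rsh command strings (a real script would hand each one to the shell):

```python
def arp_flush_commands(hosts, stale_ip):
    """Build the `rsh <host> arp -d <ip>` commands needed to clear a
    stale ARP entry from each 4.2BSD host's cache."""
    return ["rsh %s arp -d %s" % (h, stale_ip) for h in hosts]
```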

-----------[000218][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 07:46:20 EDT
From:      heker@JVNCA.CSC.ORG (Sergio Heker)
To:        comp.protocols.tcp-ip
Subject:   Re: Reassembly queues in TCP

Hakura:

I ran a test over a T1 line connecting a VAX8600 running ULTRIX and a 
VAX750 running Wollongong.  Both machines with DMR11 interfaces.  The
MTU for both interfaces is 1248.  I noticed that the round trip delay
I was getting was less than 30 ms from 18 bytes to 1248 bytes, then as
I continued increasing the packet size it jumps to around 6 sec.  This
only happens when we do it with a machine running Wollongong.  When I 
tried from ULTRIX to ULTRIX the jump is about one order of magnitude
instead of 200 times.


					-- Sergio Heker, JVNCnet

-----------[000219][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29 May 87 10:53:03 MDT
From:      cetron@cs.utah.edu (Edward J Cetron)
To:        comp.protocols.tcp-ip,comp.sys.dec,comp.unix.questions,comp.os.vms,comp.sys.misc
Subject:   Re: TCP/IP ETHERNET DECNET VAX
In article <2033@a.cs.okstate.edu> keith@a.cs.okstate.edu (Keith Lovelace) writes:
>Problem one.  Can ULTRIX, or 4.2 BSD (since it is supposedly not changed by
>DEC) support both a DECNet link and a TCP/IP link at the same time without
>problems?
	yes, no problem. and supposedly ultrix 2.0 will support a sort of
semi-transparent file and remote login capabilities (vms site says copy and the
ultrix site takes the copy on one side and converts it to a ftp on the other)

>Problem two.  Does ULTRIX do very well talking to terminal servers such as
>the ANNEX UX on TCP/IP?
	yes, just fine.....
>ETC.  Knowing that the 8350 is a BI bus machine, and that it comes configured
>with one BI to Ethernet adapter, does DEC sell this adapter as a separate
>device?  I have only been able to find Unibus or Q-bus adapter
	it is called a DEBNT and is the slowest ethernet board DEC makes (on the
fastest bus DEC makes :-) )....anyway I understand that the delua on a
bi->unibus adapter is faster (though more expensive); explore with your sales 
person re: the DEBNT II....

(by the way, another pathway is to get tcp/ip for vms for an educational 
institution it is VERY cheap and I DO NOT MEAN  wollongong...)

-ed cetron
computer systems manager
center for engineering design
univ of utah



-----------[000220][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 09:49:33 EDT
From:      keith@a.cs.okstate.edu (Keith Lovelace)
To:        comp.protocols.tcp-ip,comp.sys.dec,comp.unix.questions,comp.os.vms,comp.sys.misc
Subject:   TCP/IP ETHERNET DECNET VAX

Help!!!

We here at OSU have just recently decided that we would purchase a VAX 8350 and
run ULTRIX on it.  We currently are running a VAX 11/780 with VMS.  We would
like to run DECNet to the two machines for file transfer, mail, etc. (our
first attempt at DECNetting anything).  We would then like to run a TCP/IP
link on the 8350 to do the terminal interface.  The first problem arises in
the fact that there are very few DEC people who know anything about ULTRIX,
much less TCP/IP.  This is also our first attempt at providing a campus wide
UNIX (at least a derivative) system, and also our first attempt at TCP/IP
networking.  In other words, we are stepping into unknown and previously
untouched areas.  To get on with it, my questions may be simple to you but
are still confusing to me.

Problem one.  Can ULTRIX, or 4.2 BSD (since it is supposedly not changed by
DEC) support both a DECNet link and a TCP/IP link at the same time without
problems?

Problem two.  Does ULTRIX do very well talking to terminal servers such as
the ANNEX UX on TCP/IP?

ETC.  Knowing that the 8350 is a BI bus machine, and that it comes configured
with one BI to Ethernet adapter, does DEC sell this adapter as a separate
device?  I have only been able to find Unibus or Q-bus adapters.

The reason for going to the TCP/IP connection for terminals is based on the
fact that we don't want to lock ourselves into $'s of DEC hardware for
DECNet and then 2 years from now buy an ENCORE and find ourselves in
a real hole.

Are there other options that we are leaving out? Are we getting ready to
spend $'s only so that we can screw ourselves?  Does anyone out there
run a machine that has the dual connections discussed?  Is there some
other way that we haven't thought of that we are going to screw ourselves?

Keith...
___________________________________________________________________________
Keith Lovelace 
Computer Center                      Internet:  keith@a.ucc.okstate.edu
Oklahoma State University            UUCP:  {cbosgd, ihnp4, rutgers, seismo,
Stillwater, Oklahoma                         uiucdcs}!okstate!keith
Phone:  (405) 624-6301

-----------[000221][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 10:25:55 EDT
From:      dab@oliver.cray.COM.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Reassembly queues in TCP


Many months ago I ran into problems with running out of mbufs
on the Cray computers.  Among the problems was the one described
by Phil Wood, that of getting in lots of little (1 byte) packets,
and having one of the early ones get lost.  I did a twofold fix
to the Berkeley code (4.2, same mods apply to 4.3) to keep this
problem to a minimum.

	1) Compact the TCP reassembly queue.  The Berkeley code
	   does not compact the TCP reassembly queue.  If you
	   have 500 1 byte packets on your reassembly queue, you
	   are using up 500 mbufs.

	2) In uipc_mbuf.c, there is the comment in m_expand()
		/* should ask protocols to free code */
	   Well, I did just that.  I wrote a routine called
	   pfdrain(), almost identical to pfctlinput(). Then, I
	   also added code to the tcp_drain() routine to actually
	   go through all the tcp reassembly queues and free up
	   all the fragments.  Since we haven't acknowledged any
	   of it, it's no problem to toss it.
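Fix (1) can be illustrated outside the kernel: merge contiguous byte ranges so that 500 one-byte segments collapse into a single buffer instead of tying up 500 mbufs. A toy sketch (not the actual mbuf code):

```python
def compact(segments):
    """Merge a list of (seq, data) reassembly segments whenever one ends
    exactly where the next begins, mimicking reassembly-queue compaction."""
    merged = []
    for seq, data in sorted(segments):
        if merged and merged[-1][0] + len(merged[-1][1]) == seq:
            merged[-1][1] += data             # contiguous: extend last buffer
        else:
            merged.append([seq, bytearray(data)])
    return [(seq, bytes(buf)) for seq, buf in merged]
```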

These mods are not very long, and I sent them to Mike Karels awhile
ago (but not in time for the release).  It took around 40 lines
of code to do the above mods.  Perhaps these fixes might show up
on the 4.3 bug list at some point.  If you are in urgent need of
this code, contact me directly.
			Dave Borman
			Cray Research, Inc.
			dab@umn-rei-uc.arpa

-----------[000222][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29 May 87 14:15:45 -0400
From:      tappan@MIKEY.BBN.COM
To:        "John M. Wobus" <JMWOBUS%SUVM.BITNET@wiscvm.wisc.edu>
Cc:        TCP-IP Discussion Group <TCP-IP@SRI-NIC.ARPA>
Subject:   Re: Making TCP avoid fragmentation to help performance.

>>Date:     Wed, 27 May 1987 17:03:40 LCL
>>From:     John M. Wobus <JMWOBUS%SUVM.BITNET@wiscvm.wisc.edu>
>>To:       TCP-IP Discussion Group <TCP-IP@SRI-NIC.ARPA>
>>Subject:  Making TCP avoid fragmentation to help performance.
>>  (omitted)
>>					......  I presume the most
>>radical part of this proposal is to add a 1-bit field to the TCP header,
>>something one wouldn't want to do every day.

If we're talking about changing the protocols maybe what we need is
something like this:

The sort of fragmentation and congestion problems that exist are
caused by one thing - lack of information on the part of the sender as
to what's happening to the data it has sent out.

For IP messages if a message gets fragmented and a piece gets lost the
sender never knows it, and has no choice but to eventually retransmit
the whole thing (which may get fragmented in the same way).

The same problem exists for the transport level protocols. If a TCP
sends several consecutive segments and one in the middle is lost then it
gets no feedback on any of them. All it can do is wait out the
retransmission timeout and resend some or all of the unacknowledged
data.

Consider an ICMP message like the following:
	----------------

Fragment Missing

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Type      |     Code      |         Checksum              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                             unused                            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Internet Header + 64 bits of Original Data Datagram       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


Type 
   ???

Code

 0 = IP fragments missing
 1 = Transport Protocol fragment missing

If Code = 0 then the message refers to an IP datagram that was
fragmented.  The sender has received some of the fragments but is
missing others. The IP header contained in the message indicates the
"fragment offset" of missing fragment(s). The "total length" indicates
how much of the datagram is missing at that point.

If Code = 1 then the message indicates that the Transport Protocol
has lost data. The included header indicates which data is
missing, for example for TCP the TCP "sequence number" and IP "total
length" fields would define the missing data.


	---------------
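The proposed layout is simple to prototype; the sketch below packs the message and computes the standard Internet checksum over it. The type value 253 is an arbitrary stand-in for the "???" left open above:

```python
import struct

def inet_checksum(data):
    """Standard ones'-complement Internet checksum over a byte string."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def fragment_missing(code, ip_header_plus_64, icmp_type=253):
    """Build the proposed Fragment Missing message: type, code, checksum,
    32 unused bits, then the original IP header + 64 bits of data.
    icmp_type=253 is a placeholder for the unassigned type."""
    body = struct.pack("!BBHI", icmp_type, code, 0, 0) + ip_header_plus_64
    csum = inet_checksum(body)
    return struct.pack("!BBHI", icmp_type, code, csum, 0) + ip_header_plus_64
```

A receiver verifying the message would checksum the whole thing and expect zero, as with any ICMP message.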

How this might be used:

1) If a host receives several fragments of a message and one in the
middle is missing there is a good chance (given the way the Internet
actually works) that that fragment is lost. Instead of waiting for the
reassembly timeout the host can send back a Code 0 "fragment missing"
message. With luck and proper implementation the other host can
recreate the missing fragment and send out only it. Even if it can't
recreate the fragment it can use the message as a strong hint as to
how large messages can get before fragmentation becomes a problem.
After all, the issue isn't really how large messages can get without
being fragmented - if all the fragments are delivered then throughput
is probably better than if the message has been sent in smaller
segments - the problem is when messages both get fragmented AND the
fragments are sent over an unreliable channel.

2) If the last fragment of a message is missing it takes the receiving
host longer to realize it, but it could send out a Code 0 message
instead of (or sooner than) a "Reassembly time exceeded" message.  If
several fragments of the message arrive it can get a good idea whether
the last fragment is missing by watching the arrival times.

3) If a transport level protocol receives out of sequence data it can
also assume that a segment is missing and produce a Code 1 message
which will tell the sender to retransmit only that much data.

Note that in any of these cases if the ICMP message is not delivered
all that happens is that we revert to the current situation.



-----------[000223][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 12:53:08 EDT
From:      cetron@utah-cs.UUCP (Edward J Cetron)
To:        comp.protocols.tcp-ip,comp.sys.dec,comp.unix.questions,comp.os.vms,comp.sys.misc
Subject:   Re: TCP/IP ETHERNET DECNET VAX

In article <2033@a.cs.okstate.edu> keith@a.cs.okstate.edu (Keith Lovelace) writes:
>Problem one.  Can ULTRIX, or 4.2 BSD (since it is supposedly not changed by
>DEC) support both a DECNet link and a TCP/IP link at the same time without
>problems?
	yes, no problem. and supposedly ultrix 2.0 will support a sort of
semi-transparent file and remote login capabilities (vms site says copy and the
ultrix site takes the copy on one side and converts it to a ftp on the other)

>Problem two.  Does ULTRIX do very well talking to terminal servers such as
>the ANNEX UX on TCP/IP?
 yes, just fine.....
>ETC.  Knowing that the 8350 is a BI bus machine, and that it comes configured
>with one BI to Ethernet adapter, does DEC sell this adapter as a separate
>device?  I have only been able to find Unibus or Q-bus adapter
	it is called a DEBNT and is the slowest ethernet board DEC makes (on the
fastest bus DEC makes :-) )....anyway I understand that the delua on a
bi->unibus adapter is faster (though more expensive); explore with your sales 
person re: the DEBNT II....

(by the way, another pathway is to get tcp/ip for vms for an educational 
institution it is VERY cheap and I DO NOT MEAN  wollongong...)

-ed cetron
computer systems manager
center for engineering design
univ of utah

-----------[000224][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 13:04:44 EDT
From:      dplatt@teknowledge-vaxc.ARPA (Dave Platt)
To:        comp.unix.wizards,comp.protocols.tcp-ip
Subject:   Re: IP fragmentation, and how to avoid it


About three weeks ago I posted a query concerning an IP-fragmentation
problem that I had encountered on my Sun workstation.  I've received a
really astounding amount of assistance from folks on the net, and have
been able to zap the problem.  Several people have asked me to
summarize my findings and the answers I received from informed
netfolks... so, here goes.

- The original symptom of the problem was that SMTP connections would
  hang, and then abort with a network-read timeout, while sending
  large messages to a few hosts on the Internet.  Other hosts
  (including those of the same type as the affected systems) were not
  affected.

- Several people suggested that I check to ensure that my Ethernet
  interface was configured with the -trailers option (it is).

- The problem was triggered by the fact that the MTU of my Sun's
  Ethernet interface (1500 bytes) was greater than the MTU of our ARPANET
  gateway's IMP interface (1006 bytes).  This situation caused the
  TCP/IP packets sent by my Sun to be fragmented as they passed
  through the gateway.

- The fragmented packets would occasionally fail to be reassembled
  upon reception.  Some hosts apparently don't implement IP-packet
  reassembly (or don't do it reliably).  Also, I'm told that there is
  a bug in BSD 4.2 UNIX (and possibly in 4.3 as well) that prevents
  BSD systems from successfully fragmenting an already-fragmented IP
  packet.  Thus, if a 1006-byte fragment from our net's gateway had to
  be refragmented to fit within the MTU of the destination host's
  network, the new fragments would be malformed and could not be
  successfully reassembled.

- One method for working around the problem is to reduce the Sun's
  Ethernet MTU to <= 1006 bytes, so that our gateway won't have to
  fragment the packets.  I was able to locate the constant 1500 in the
  "ether_attach()" function in /vmunix, and patch it down to 1000
  bytes with adb; booting with the patched /vmunix resolved the
  problem.  Charles Hedrick posted the source for a small program that
  can change the MTU of the interface "on the fly", and it also works
  like a charm;  it's the method I'm now using.

  Reducing the Ethernet MTU increases the number of packets needed to
  complete NFS RPCs, and thus increases the overhead;  NFS continues
  to work just fine.  I've been warned that decreasing the MTU will
  probably break ND, but as I don't use it I don't really care.

- Another method for fixing the problem is persuading TCP to use a
  smaller segment size, so that the packets that it sends will not
  exceed the 1006-byte limit.  I tried patching the 1024-byte MSS in
  tcp_output() to a smaller size (512 bytes), but this did not appear
  to work.  I'm not sure why, as I have no sources for the SunOS 3.2
  version of BSD 4.2 TCP.

  Many people have pointed out that BSD 4.3 TCP makes a better choice
  of MSS, based on the MTU of the interface and on whether the packets
  will be routed through a gateway (a 512-byte MSS is used if the
  packets are sent to any non-local destination).  The BSD 4.3
  enhancements have been incorporated into SunOS 3.4, which is due to
  be shipped Real Soon Now according to our Sun sales-rep.  I FTP'ed
  the BSD 4.3 source for TCP from seismo (thanks, rick!) and can see
  the additional logic;  I haven't tried to retrofit the new TCP into
  SunOS 3.2 or patch in equivalent code due to lack of time and lack
  of urgency.

So... I've got a good workaround for the problem (reducing the MTU),
and the problem will go away once I install SunOS 3.4 with the BSD 4.3
enhancements to TCP.  Happy ending.
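As an aside, on modern systems the "change the MTU on the fly" trick no longer needs a kernel patch or an adb poke; an ioctl does it. This sketch uses the Linux ioctl numbers (SIOCGIFMTU/SIOCSIFMTU; the values differ per OS) and is not Hedrick's original program:

```python
import fcntl
import socket
import struct

SIOCGIFMTU = 0x8921   # Linux ioctl numbers; other systems differ
SIOCSIFMTU = 0x8922

def get_mtu(ifname):
    """Read an interface's MTU via ioctl on a throwaway datagram socket."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        ifr = struct.pack("16si", ifname.encode(), 0)
        return struct.unpack("16si", fcntl.ioctl(s, SIOCGIFMTU, ifr))[1]
    finally:
        s.close()

def set_mtu(ifname, mtu):
    """Lower (or raise) an interface's MTU; requires root privileges."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        fcntl.ioctl(s, SIOCSIFMTU, struct.pack("16si", ifname.encode(), mtu))
    finally:
        s.close()
```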

MANY thanks to all of the people on the net who have sent suggestions,
hints, and reports of similar problems elsewhere!

-----------[000225][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 14:23:08 EDT
From:      mkhaw@teknowledge-vaxc.ARPA (Michael Khaw)
To:        comp.protocols.tcp-ip,comp.sys.dec,comp.unix.questions,comp.os.vms,comp.sys.misc
Subject:   Re: TCP/IP ETHERNET DECNET VAX

In article <2033@a.cs.okstate.edu> keith@a.cs.okstate.edu (Keith Lovelace) writes:
...
>Problem one.  Can ULTRIX, or 4.2 BSD (since it is supposedly not changed by
>DEC) support both a DECNet link and a TCP/IP link at the same time without
>problems?
>
>Problem two.  Does ULTRIX do very well talking to terminal servers such as
>the ANNEX UX on TCP/IP?
>

We run both tcp/ip and decnet on our Ultrix machines, each of which has a
single DEC ethernet interface (DEUNA, DEQNA, DELUA, as appropriate).  We
also have many terminals that connect via tcp/ip terminal servers (Cisco
Systems "TIP"s).

I don't know what you mean by "do very well".  People on terminal servers
get marginally more sluggish response, because of the tcp/ip packet overhead,
than people on direct lines to the vax, but it is quite tolerable.

Anyone accustomed to VMS decnet will find the Ultrix decnet rather impoverished.
Ultrix decnet file transfer is rather inconvenient compared to either ftp or
VMS to VMS decnet file transfer.  Decnet remote logins from VMS to Ultrix
are also awkward because the VMS terminal driver attempts to handle control
character interpretation (you can turn most of it off, but the point is that
decnet doesn't do that automatically for you), and line-oriented (instead of
character-oriented) i/o.  We run tcp/ip on our VMS machines and use it in
preference to Decnet.

Mike Khaw
-- 
internet:  mkhaw@teknowledge-vaxc.arpa
usenet:	   {hplabs|sun|ucbvax|decwrl|sri-unix}!mkhaw%teknowledge-vaxc.arpa
USnail:	   Teknowledge Inc, 1850 Embarcadero Rd, POB 10119, Palo Alto, CA 94303

-----------[000226][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 14:52:52 EDT
From:      tappan@MIKEY.BBN.COM
To:        comp.protocols.tcp-ip
Subject:   Re: Making TCP avoid fragmentation to help performance.


>>Date:     Wed, 27 May 1987 17:03:40 LCL
>>From:     John M. Wobus <JMWOBUS%SUVM.BITNET@wiscvm.wisc.edu>
>>To:       TCP-IP Discussion Group <TCP-IP@SRI-NIC.ARPA>
>>Subject:  Making TCP avoid fragmentation to help performance.
>>  (omitted)
>>					......  I presume the most
>>radical part of this proposal is to add a 1-bit field to the TCP header,
>>something one wouldn't want to do every day.

If we're talking about changing the protocols maybe what we need is
something like this:

The sort of fragmentation and congestion problems that exist are
caused by one thing - lack of information on the part of the sender as
to what's happening to the data it has sent out.

For IP messages if a message gets fragmented and a piece gets lost the
sender never knows it, and has no choice but to eventually retransmit
the whole thing (which may get fragmented in the same way).

The same problem exists for the transport level protocols. If a TCP
sends several consecutive segments and one in the middle is lost then it
gets no feedback on any of them. All it can do is wait out the
retransmission timeout and resend some or all of the unacknowledged
data.

Consider an ICMP message like the following:
	----------------

Fragment Missing

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Type      |     Code      |         Checksum              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                             unused                            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Internet Header + 64 bits of Original Data Datagram       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


Type 
   ???

Code

 0 = IP fragments missing
 1 = Transport Protocol fragment missing

If Code = 0 then the message refers to an IP datagram that was
fragmented.  The sender has received some of the fragments but is
missing others. The IP header contained in the message indicates the
"fragment offset" of missing fragment(s). The "total length" indicates
how much of the datagram is missing at that point.

If Code = 1 then the message indicates that the Transport Protocol
has lost data. The included header indicates which data is
missing, for example for TCP the TCP "sequence number" and IP "total
length" fields would define the missing data.


	---------------

How this might be used:

1) If a host receives several fragments of a message and one in the
middle is missing there is a good chance (given the way the Internet
actually works) that that fragment is lost. Instead of waiting for the
reassembly timeout the host can send back a Code 0 "fragment missing"
message. With luck and proper implementation the other host can
recreate the missing fragment and send out only it. Even if it can't
recreate the fragment it can use the message as a strong hint as to
how large messages can get before fragmentation becomes a problem.
After all, the issue isn't really how large messages can get without
being fragmented - if all the fragments are delivered then throughput
is probably better than if the message has been sent in smaller
segments - the problem is when messages both get fragmented AND the
fragments are sent over an unreliable channel.

2) If the last fragment of a message is missing it takes the receiving
host longer to realize it, but it could send out a Code 0 message
instead of (or sooner than) a "Reassembly time exceeded" message.  If
several fragments of the message arrive it can get a good idea whether
the last fragment is missing by watching the arrival times.

3) If a transport level protocol receives out-of-sequence data it can
also assume that a segment is missing and produce a Code 1 message
which will tell the sender to retransmit only that much data.

Note that in any of these cases if the ICMP message is not delivered
all that happens is that we revert to the current situation.
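
The message layout described above might be built as follows -- a
speculative sketch, since this is only a proposal and no RFC defines
such a message; the ICMP type number (40) is invented for illustration:

```python
# Speculative sketch of the proposed "Transport Protocol fragment
# missing" ICMP message.  The type number (40) is invented for
# illustration -- no RFC assigns one, since this is only a proposal.
import struct

ICMP_FRAG_MISSING = 40      # hypothetical ICMP type
CODE_IP_FRAGMENT  = 0       # Code 0: an IP fragment is missing
CODE_TRANSPORT    = 1       # Code 1: transport-level data is missing

def frag_missing_message(code, included_header):
    """Build the message: type, code, checksum (left zero here), an
    unused 32-bit word, then the returned IP (and transport) header
    whose "fragment offset" / "total length" or "sequence number"
    fields locate the missing data."""
    icmp_header = struct.pack("!BBHI", ICMP_FRAG_MISSING, code, 0, 0)
    return icmp_header + included_header

# Example: report a missing fragment, echoing a minimal 20-byte IP header.
msg = frag_missing_message(CODE_IP_FRAGMENT, b"\x45" + b"\x00" * 19)
```

A real implementation would of course fill in the ICMP checksum and copy
the actual header of the half-reassembled datagram.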

-----------[000227][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29 May 87 17:34:24 -0400
From:      Robert Hinden <hinden@CCV.BBN.COM>
To:        leiner@ICARUS.RIACS.EDU
Cc:        CERF@A.ISI.EDU, BUDDENBERGRA@A.ISI.EDU, tcp-ip@SRI-NIC.ARPA, hinden@CCV.BBN.COM
Subject:   Re: Datagram sizes

I am not sure that it is always the best strategy to avoid IP
fragmentation.  There are some cases where it might provide better
service.

I think that in cases where the networks' MTUs are large and not too
divergent, avoiding fragmentation by using the minimum MTU is good.
A good example is a 1500 byte Ethernet connected to the 1006 byte
Arpanet.  Using 1006 MTU for the TCP segments probably provides the best
service.

In the case where one network has a large MTU and the other network
has a small MTU it may be better to send one large packet over the
first network and have it fragmented.  For example, suppose you had
10,000 bytes of data to send using TCP from an Arpanet host to a host
on the other side of Satnet (256 MTU).  If you assumed an MTU for the
connection of 256 it would require 47 datagrams (each w/ 20 bytes IP
header, 20 bytes TCP header, 216 bytes of data) to send the data.  If
you use an MTU of 1006, it would require 11 datagrams (each w/ 20 IP,
20 TCP, 966 data) over the Arpanet and 55 fragments (11 * 5) over
Satnet (first with 20 IP, 20 TCP, 216 data, next three with 20 IP,
236 data, last with 20 IP, 42 data).  I would think that using the
1006 MTU would provide better service.  It reduces the number of
datagrams into the Arpanet by a factor of four and adds 8 datagrams
into Satnet.  Assuming the worst case TCP, it would eliminate 36 TCP
ACK's on the return trip.
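
The counts above can be reproduced with a little arithmetic -- a rough
sketch that ignores the 8-octet granularity of the IP fragment offset
and assumes 20-byte IP and TCP headers throughout:

```python
# Rough reproduction of the datagram counts in the example above,
# ignoring the 8-octet granularity of the IP fragment offset.
import math

IP_HDR, TCP_HDR = 20, 20
DATA = 10_000                       # bytes of TCP data to move

def segments(data, mtu):
    """TCP segments needed when each segment fits the given MTU."""
    return math.ceil(data / (mtu - IP_HDR - TCP_HDR))

def fragments_per_datagram(big_mtu, small_mtu):
    """Fragments produced when a full-size datagram from the big-MTU
    net is cut down for the small-MTU net (IP header repeated)."""
    return math.ceil((big_mtu - IP_HDR) / (small_mtu - IP_HDR))

small_mtu_plan = segments(DATA, 256)     # 47 datagrams end to end
large_mtu_plan = segments(DATA, 1006)    # 11 datagrams on the Arpanet
satnet_frags = large_mtu_plan * fragments_per_datagram(1006, 256)  # 55
```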

If you take the more complicated case of Ethernet to Arpanet to
Satnet, then it is, of course, more complicated.  Using 1006 for an
MTU would reduce the number of datagrams that the Ethernet-Arpanet
gateway would have to send into the Arpanet.  This gateway is
probably the biggest bottleneck in the path (10M bps in, 50K bps
out).  Using an MTU of 256 probably causes this gateway to block or
drop lots of packets.  Using an MTU of 1500 gives one the worst of
both worlds.

Isn't this fun.  No simple solutions.

Bob
-----------[000228][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 15:08:28 EDT
From:      leiner@riacs.EDU
To:        comp.protocols.tcp-ip
Subject:   Re: Datagram sizes

Vint,

Right.  Presumably, each network would be designed to use a maximum
packet size appropriate to its technology.  The right maximum on an end
to end basis is sort of the minimum of those across the multiple
networks in the path.

Right?

Barry
----------

-----------[000230][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 19:33:49 EDT
From:      leiner@riacs.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Datagram sizes

I stand corrected.  Thanks.

Barry
----------

-----------[000231][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 19:35:14 EDT
From:      bzs@BU-CS.BU.EDU.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: Top Level Domains request...


I just want to thank everyone at once for all their help, responses
were coming in every 5 or 10 minutes for hours!

	-Barry Shein, Boston University

-----------[000232][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 21:28:03 EDT
From:      kab@reed.UUCP (Kent Black)
To:        comp.protocols.tcp-ip
Subject:   request for RFC 765


Could some kind soul please forward me a copy of RFC 765.

Thanks in advance,
Kent Black	...tektronix!reed!kab

-----------[000233][next][prev][last][first]----------------------------------------------------
Date:      Fri, 29-May-87 23:37:09 EDT
From:      ron@TOPAZ.RUTGERS.EDU (Ron Natalie)
To:        comp.protocols.tcp-ip
Subject:   Boards for UNIBUS


Our absolute favorite is the Interlan NI1010A board which is supported
in both 4.2 and 2.9.  We also used it in the PDP-11 BRL gateway.

-Ron

-----------[000234][next][prev][last][first]----------------------------------------------------
Date:      Sat, 30-May-87 03:05:34 EDT
From:      BRUCE@UMDD.BITNET (Bruce Crabill)
To:        comp.protocols.tcp-ip
Subject:   Re: TN3270

We don't want to simply document what we have now.  The current state of
affairs is not good.  Most people I've talked to seem to agree that it
needs to be cleaner, and more information about the type of terminal and
the features the emulator will support needs to be presented.  Bob Braden
made some comments a while back concerning the desirability of presenting
the control unit type as well as the terminal type, and I would like to
see some method of indicating if the emulator can handle the Yale ASCII
extensions.  Maybe we can form some kind of working group on this.  Currently
the group of people who would be affected by this is fairly small,
and perhaps we could decide how it needs to be fixed and write an RFC
stating our findings.  I am more than willing to work on such a project.
How many others out there are interested?

                                       Bruce

-----------[000236][next][prev][last][first]----------------------------------------------------
Date:      Sat, 30-May-87 08:07:43 EDT
From:      MARTILLO@primerd.UUCP
To:        comp.protocols.tcp-ip
Subject:   (none)

From: <martillo@ATHENA.MIT.EDU>
Received: by TRILLIAN.MIT.EDU (5.45/4.7) id AA16437; Sat, 30 May 87 02:24:23 EDT
Date: Sat, 30 May 87 02:24:23 EDT
Message-Id: <8705300624.AA16437@TRILLIAN.MIT.EDU>
To: (@enx.prime.pdn:tcp-ip@sri-nic.arpa)
Subject: Distributed Document Handling System


I have heard that NSF is funding the development of a window-based,
distributed document handling and text processing system which
contains a WYSIWYG editor and has graphics handling and outputting
capabilities.  Supposedly the system is called the Diamond system and
a lot of the work is being done at BBN.  I would be most grateful if
someone could write and tell me who to contact to find out more about
this system.


Yakim Martillo

-----------[000237][next][prev][last][first]----------------------------------------------------
Date:      Sat, 30-May-87 10:17:20 EDT
From:      davido@gordon.UUCP (David Ornstein)
To:        comp.protocols.tcp-ip,comp.sys.dec,comp.unix.questions,comp.os.vms,comp.sys.misc
Subject:   Re: TCP/IP ETHERNET DECNET VAX

In article <4610@utah-cs.UUCP> cetron@utah-cs.UUCP (Edward J Cetron) writes:
>In article <2033@a.cs.okstate.edu> keith@a.cs.okstate.edu (Keith Lovelace) writes:
>>Problem one.  Can ULTRIX, or 4.2 BSD (since it is supposedly not changed by
>>DEC) support both a DECNet link and a TCP/IP link at the same time without
>>problems?
>	yes, no problem. and supposedly ultrix 2.0 will support a sort of
>semi-transparent file and remote login capabilities (vms site says copy and the
>ultrix site takes the copy on one side and converts it to a ftp on the other)

After reviewing the doc for ultrix 2.0, we discovered that all that is really
supported using the new g-node scheme is NFS support.  There is no mention of
vms stuff.  On the other hand, we are currently running DECnet/Ultrix which
allows me to:

	VMS> COPY FILE.FOO GORDON::"/tmp/123"
and	csh> dcp "VMSNODE::USERS:[DAVIDO]FILENAME.EXT" /tmp/123

but not csh> cp "VMSNODE::USERS:[DAVIDO]FILENAME.EXT" /tmp/123

dcp is one of a few special commands that are built specifically to
work with Decnet/Ultrix.

>
>(by the way, another pathway is to get tcp/ip for vms for an educational 
>institution it is VERY cheap and I DO NOT MEAN  wollongong...)
>

Can somebody suggest a good implementation of tcp/ip for vms, or, alternatively,
a good implementation of NFS for VMS?  (N.B. We are not an educational 
institution. [read $++])


-- 
-----------------------------------------------------------------------------
David Ornstein		"Never join a religion that has a water slide."

Internet:	davido@gordon
UUCP:		{mit-eddie|seismo}!mirror!gordon!davido
	     or {harvard|ames|decvax|husc6}!necntc!davido
US Snail:	Access Technology, 6 Pleasant St, Natick MA 01760
-----------------------------------------------------------------------------

-----------[000238][next][prev][last][first]----------------------------------------------------
Date:      Sun, 31-May-87 02:41:05 EDT
From:      mouse@mcgill-vision.UUCP
To:        comp.protocols.tcp-ip
Subject:   Re: IP to DECNET translation ????

In article <8705141842.AA28803@ucbvax.Berkeley.EDU>, JERRY@STAR.STANFORD.EDU writes:
> [...] Van Jacobsen at LBL developed an interface called DBRIDGE that
> allows IP packets to be sent over DECNET.  The way it works is the
> two hosts on either side of a DECNET link install this software that
> uses DECNET mailboxes to communicate between them.  The packets that
> they are transferring are IP packets.  Then a dummy interface is put
> into the kernel that knows to give IP packets to the dbridge process
> for transmission over DECNET.

Fascinating.  Sounds just like what I wrote to perform exactly the same
function (except I called mine `dnip').  Initial version looked like
point-to-point links; current version is a "virtual Ethernet" - a full
broadcast interface.  Not very efficient in its use of DECnet resources
though; it keeps O(n*n) DECnet connections open all the time, where n
is the number of hosts on the virtual Ethernet.  (Eventually, of
course, I intend to fix this.)

> Both Wollongong and SRI products make this software available.

And if anyone wants a non-commercial version (with the benefits and the
disadvantages thereof), send me mail.  At present I have it running
only on MicroVAX Ultrix 1.2; I can't promise anything else.  (Requires
source to install without nasty hackery.)

					der Mouse

				(mouse@mcgill-vision.uucp)

-----------[000240][next][prev][last][first]----------------------------------------------------
Date:      31 May 1987 07:38-EDT
From:      CERF@A.ISI.EDU
To:        hinden@CCV.BBN.COM
Cc:        leiner@ICARUS.RIACS.EDU, BUDDENBERGRA@A.ISI.EDU, tcp-ip@SRI-NIC.ARPA
Subject:   Re: Datagram sizes
Bob,

thanks for a very helpful summary. By this time it should be apparent
that fragmentation at the IP level is not and never was intended to do
more than provide flexibility to deal with systems having different
maximum packet sizes without the source having to know what path the
datagrams were taking. 

Such "blind" flexibility has a performance cost - and was intended 
mostly to deal with the case where the "system" was not or could not
be engineered ahead of time for performance and consistency.

Now that we are focusing, properly, on performance, it seems clear
that the source needs to know (find out? be told?) more about
internet conditions, at least on the paths its datagrams are taking.
Similarly, the source needs to be able to assert more precisely
what its requirements are to aid the gateway routing system in choosing
paths.

I hope everyone who is working on and thinking about this problem will
continue to consider the tradeoff between performance and ability to
cope with the unexpected (so important for crisis management). Our
objective should be to learn how to engineer for high performance without
losing our ability to deal with the case where we have only ad hoc
connectivity and no control over paths taken.

If there are folks out in Internetland with markedly different opinions
on these points, I'm very interested to hear them and to understand the
line of reasoning which leads to other conclusions.

Vint
-----------[000241][next][prev][last][first]----------------------------------------------------
Date:      Sun, 31-May-87 12:31:46 EDT
From:      mqh@batcomputer.tn.cornell.edu (Mike Hojnowski)
To:        comp.protocols.tcp-ip
Subject:   Re: TN3270

In article <8705300729.AA11310@ucbvax.Berkeley.EDU> BRUCE@UMDD.BITNET (Bruce Crabill) writes:
>             Maybe we can form some kind of working group on this.  Currently
>most of the people who would be affected by this is a fairly small group
>and perhaps we could decide how it needs to be fixed and write an RFC
>stating our findings.  I am more than willing to work on such a project.
>How many others out there are interested?

I discussed these issues with several TN3270 gurus in Monterey.  The consensus 
seemed to be that there is indeed a need for work in this area, but that none 
had the time or energy to do it.  I will agree that this is a relatively 
minor issue, but with more vendors coming out with TN3270ish programs, it's 
important that this be settled soon.  I'm willing to be in on this.  
-- 
Mike Hojnowski (Hojo)		{ihnp4,rochester}!cornell!batcomputer!mqh
ICBM 042N 29  076W 27		mqh@batcomputer.tn.cornell.edu
"With friends like that, who needs enemas?"  - Max Headroom

-----------[000242][next][prev][last][first]----------------------------------------------------
Date:      Sun, 31-May-87 15:13:24 EDT
From:      Lixia@XX.LCS.MIT.EDU (Lixia Zhang)
To:        comp.protocols.tcp-ip
Subject:   Re: Datagram sizes

>	Isn't this fun.  No simple solutions.
I think there exist simple answers to the problem.

1. As Vint pointed out, IP fragmentation is not designed for performance
  optimization purposes.  That is probably why one faces such complicated
  tradeoffs when trying to push fragmentation beyond its intended
  goals.  So just avoid fragmentation if you can (or until someday when IP
  fragmentation is redesigned for regular usage).

  Considerations like TCP header or ack overhead are TCP design
  considerations.  Since TCP was designed to be "general purpose", the
  overhead with possibly small segment size should have been justified
  during design; if not, the design should have provided tunings (e.g.
  smaller header, and accumulated ack) to reduce the overhead.

  I'm not saying that TCP must achieve highest throughput independently from
  the network support, but rather,
  (1)TCP design should make sure that the overhead will not cause a big
     concern even when the net does not support large TCP segment size;
  (2)higher and lower layer protocols should collaborate to whatever extent
     possible for an overall better performance, but should not misuse
     network functions that were not designed for the purpose.
    (An analogy: relying on IP redirect is not the solution to the EGP
     extra-hop problem; EGP has to fix itself.)

2. (Just for fun; I'm not suggesting this.)  If one wanted to play with
  fragmentation for performance, there is a simple answer too.  An optimal
  decision can always be made when all the information needed for such a
  decision is available.
  Common sense suggests that gateway fragmentation is costly and
  should be avoided if possible.  If other tradeoffs must also be evaluated,
  then the ideal is to let the source host know, so that it can do the same
  computation as you did about the gain and loss with each IP datagram size
  and make the right choice every time.  (There are many more issues to
  worry about than just TCP overhead: the cost of fragmentation at gateways,
  the probability and cost of fragment losses ...)

  In reality it is probably impossible/infeasible to equip hosts with all
  the complicated knowledge/information, therefore an engineering tradeoff
  must be made between how much knowledge to build into the host and how
  much performance gain we get from the complication.  The answer is simple;
  the engineering is difficult.  You always have a hard time when using the
  wrong tool to fix things.
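
The "same computation" could be sketched as a toy cost comparison --
the unit costs (one per packet on either net) and the search range are
invented here purely for illustration:

```python
# Toy version of the per-size computation described above: for each
# candidate datagram size, tally datagrams on the first net plus the
# fragments they become on the second, and pick the cheapest size.
# The cost weights are invented for illustration.
import math

IP_HDR, TCP_HDR = 20, 20

def cost(size, data, second_mtu, w_dgram=1.0, w_frag=1.0):
    """Datagrams sent on the first net plus fragments arriving on the
    second, each weighted by a per-packet cost."""
    n = math.ceil(data / (size - IP_HDR - TCP_HDR))             # datagrams
    frags = math.ceil((size - IP_HDR) / (second_mtu - IP_HDR))  # per datagram
    return w_dgram * n + w_frag * n * frags

# Search every candidate size up to the first net's 1006-byte MTU
# for 10,000 bytes of data headed across a 256-byte-MTU net.
best = min(range(256, 1007), key=lambda s: cost(s, 10_000, 256))
```

With real weights (fragmentation cost at the gateway, probability of
fragment loss, ...) the answer would of course shift.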

Lixia
-------

END OF DOCUMENT