The 'Security Digest' Archives (TM)


ARCHIVE: TCP-IP Distribution List - Archives (1984)
DOCUMENT: TCP-IP Distribution List for November 1984 (22 messages, 10959 bytes)
SOURCE: http://securitydigest.org/exec/display?f=tcp-ip/archive/1984/11.txt&t=text/plain
NOTICE: securitydigest.org recognises the rights of all third-party works.

START OF DOCUMENT

-----------[000000][next][prev][last][first]----------------------------------------------------
Date:      Thu, 1 Nov 84 17:09:47 cst
From:      herb@wisc-rsch.arpa (Benington Herb)
To:        SIRBU@MIT-MC
Cc:        tcp-ip@SRI-NIC
Subject:   Re:  TCP vs TP Class 4
As one of the authors of the National Academy report on potential use
of ISO protocols by DoD, I can report on its status.  The Committee's
report has been completed for about a month and has been undergoing
peer review.  If all goes well, the report should be published within
a month.  Jerry Rosenberg, a member of the NRC staff, and I have been
working out a way to make the Executive Summary and the full report
separately available as RFCs if Jon Postel concurs.

					Herb Benington
-----------[000001][next][prev][last][first]----------------------------------------------------
Date:      03-Nov-84 15:59:04-UT
From:      mills@dcn6
To:        tcp-ip@nic
Subject:   LSI-11/73 and Interlan Ethernet
Folks,

I received several questions about my comment in the latest Internet monthly
report that we were having trouble with the new LSI-11/73 processor and the
Interlan Ethernet interface. Thus, my response may be of interest to a wider
community.

The problem was that interrupts due to incoming packets failed to occur with
the 11/73, but worked just fine with the older LSI-11/23. Interlan diagnostics
revealed no problem with the 11/23, but did report "spurious interrupts" with
the 11/73. We moved the Interlan vector address from 260 to 230, since we
suspected interaction with the floppy disk controller at 264, and the
"spurious interrupts" message went away.

We then found nestled deep in the driver code (ours) an instruction that
cleared the transmit interrupt-enable bit at the beginning of the transmit
interrupt service routine. In principle, this should cause no problem (and did
not with the 11/23), since it was set at the time the interface was
initialized for the next transmit operation and no transmit interrupts would
be expected meanwhile. However, probably due to the increased speed of the
11/73, the access to the device registers necessary to clear that bit
apparently destabilized something in the interface and the receive side hung
up. We removed the bit-clear instruction and our problems went away.
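
In outline, the offending sequence looked like the sketch below (C rather than the actual driver source, and the CSR bit name is invented for illustration):

    /* Sketch of the transmit interrupt service routine, in C for clarity.
     * The CSR bit shown is hypothetical; the point is what NOT to do.
     */
    #define IE_XMIT  0000100        /* assumed transmit interrupt-enable bit */

    void xmit_isr(volatile unsigned short *xcsr)
    {
        /* BAD: this store touched the device registers asynchronously with
         * arriving data and hung the receive side on the 11/73:
         *
         *     *xcsr &= ~IE_XMIT;
         *
         * The fix was simply to omit it.  The bit is set again when the
         * interface is initialized for the next transmit, so nothing is
         * lost by leaving it alone here.
         */

        /* ... acknowledge the completion, start the next transmit if queued ... */
    }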

We have seen this type of problem before in other interfaces, in particular
the DEC DLV11-J serial interface. In fact, we have observed the ACC MDMA/1822
interface for the Q bus to present interrupts when the interrupt-enable bit is
cleared! The moral seems to be to resist the urge to diddle interrupt-enable
bits asynchronously with arriving data.

We have not found the cause of the "spurious interrupts" message. Just to be
ornery, we restored the Interlan interrupt vector to 270 and rebooted our
fuzzware, which is fuzzing happily as I type. Does the set of all spurious
messages contain a spurious message? Think about that. Thanks to Mike
O'Connor, veteran sniffer of fuzzbug spoor, who helped track this ugly one
down.

Dave
-------
-----------[000002][next][prev][last][first]----------------------------------------------------
Date:      Mon, 5 Nov 1984  00:04 PST
From:      DAVIES@Sumex
To:        MRC@SCORE
Cc:        Davies@Sumex
Subject:   TCP/IP
Is there a public domain implementation of TCP/IP available for
perusal anywhere?

        -- Byron
-----------[000003][next][prev][last][first]----------------------------------------------------
Date:      Tuesday,  6 Nov 1984 10:15-PST
From:      imagen!geof@su-shasta.arpa
To:        shasta!tcp-ip@sri-nic, shasta!works@rutgers
Cc:        
Subject:   Query Re Experience with Interlan NI3210

[sorry if this is somewhat off the point, but I think the audience is correct]

Is there anyone out there who uses (and especially has written a driver
for) the new Interlan multibus Ethernet card?  The card is the one with
the Intel 82586 Ethernet controller chip on it.

I would be interested in any reports about the reliability of the card,
any anomalies or undocumented features in programming it, etc.

Please respond to me directly.  If there is interest and enough
response, I will summarize replies to the list.

- Geof Cooper
  IMAGEN
  imagen!geof@su-shasta.arpa
-----------[000004][next][prev][last][first]----------------------------------------------------
Date:      8 Nov 84 16:29:24 EST
From:      ddn-navy @ DDN1.ARPA
To:        tcp-ip @ sri-nic.arpa
Cc:        ddn-navy @ DDN1.ARPA
Subject:   PDP 11"s running RSTS-E...
Does anyone know of or have experience connecting PDP 11 hosts running
OS RSTS/E to the ARPANet?  RSVP to <DDN-Navy @ DDN1.ARPA> and thanx
in advance...Rod Richardson sends...Navy Systems Impl'r for DDN

-----------[000005][next][prev][last][first]----------------------------------------------------
Date:      13 Nov 84 13:08:49 EST
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   question about IBM implementations
We are trying to hook an AS-9000 (an IBM-compatible thing) up to our
Ethernet.  Currently we are running MVS.  We have a copy of the UCLA MVS
code, however last time we looked it didn't support Ethernet.  We have
ordered an Ethernet controller from ACC.  It is supposed to emulate the
1822 controller, which the UCLA code does support.  All quite
reasonable.  However the folks who manage the AS-9000 are now looking at
bringing up VM.  I am hoping there is somebody out there who knows
enough about the Wisconsin VM implementation to answer a couple of
questions:
  - can it handle MVS running under VM?  In particular:
	- can I run FTP in an MVS batch job?
	- can users on other hosts access files stored on MVS volumes
		(i.e. in OS format instead of on CMS minidisks)?
	- if we have TSO running under MVS, will it be able to have
		incoming and outgoing Telnet connections?   (Presumably
		what I am really asking is whether the VM TCP/IP talks
		to VTAM.)
  - if we end up getting something from ACC that emulates an 1822
	interface, will the Wisconsin code support it?
  - what sort of data rates (throughput of file transfers and/or number
	of packets per sec.) can the DACU handle?  We had the impression
	that it was sort of limited.  The ACC interface is to be
	68000-based, and should be fairly fast. (I am reluctant to give
	actual numbers for something that doesn't exist, but we hope the
	results will be better than what we are seeing on our DEC-20 and
	VAXes.)
-------
-----------[000006][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14 Nov 84 07:24:33 cst
From:      lhl@wisc-rsch.arpa (L.H. Landweber)
To:        tcp-ip@sri-nic
Subject:   list
Please add me to the tcp-ip digest mailing list.
-----------[000007][next][prev][last][first]----------------------------------------------------
Date:      Wed, 14 Nov 84 20:56 EST
From:      JBKim@MIT-MULTICS.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   distribution list for newsletter
date:  14 nov 84

dear nic,

please add me to the distribution list for the tcp-ip newsletter.  i am
at JBKim at MIT-MULTICS

Also, do you handle the ddn newsletter?  I would like to get on
distribution for that too.  Seems to me a long time ago I used to be on
that distribution list.  I would prefer receiving the ddn newsletter
rather than accessing it on your file.

thank you kim

-----------[000008][next][prev][last][first]----------------------------------------------------
Date:      18 Nov 1984 10:47 PST
From:      Art Berggreen <ART@ACC>
To:        tcp-ip@sri-nic
Subject:   VMS TCP/IP and X.25
RE: Vint Cerf's inquiry about TCP/IP and X.25

I'm testing an X.25 package that runs with Wollongong's VMS TCP/IP
system and uses ACC's IF-11/X.25 front-end.  This package is currently
implemented to conform to the DDN interface specification, but could
be easily extended to allow attachment to PDNs.

Since the full "interoperable" IMP support for DDN won't be available
until at least spring '85, I'd be happy to hear from anybody that
might be interested in using such a system with a standard PDN.

    ARPANet:	Art@ACC.ARPA
    Phone:	(805) 963-9431

We also now have a single board solution for tying VAXs together
over T1 links.  Driver support for VMS (TWG system) and 4.2BSD
is in test.

    					Art Berggreen
    					Advanced Computer Communications
------
-----------[000009][next][prev][last][first]----------------------------------------------------
Date:      Mon, 19 Nov 84 11:26 PST
From:      Provan@LLL-MFE.ARPA
To:        tcp-ip@nic.arpa
Subject:   rfc878 (1822L) vs. rfc852 (Short Blocking)
Well, this doesn't really have anything to do with TCP or IP, but...

RFC878 and RFC852 both assign their own meanings to Imp-Host message
9 (incomplete transmission) subtype 6.  878 gives it the meaning
"Logically addressed host went down" while 852 gives it the meaning
"Connection set-up delay."  Has one or the other of these been superseded
or canceled?  Anyone know the scoop?

While I'm on the subject, RFC852 has several interesting subtypes for
type 9 messages.  It gives me the feeling that these subtypes will only
be sent in response to messages using the short blocking feature, but
it never comes out and says that.  Will they be sent in answer to
messages not using the short blocking feature and if not, why not?
-----------[000010][next][prev][last][first]----------------------------------------------------
Date:      Mon, 19 Nov 84 14:56:36 EST
From:      Andrew Malis <malis@BBNCCS.ARPA>
To:        Provan@lll-mfe.arpa
Cc:        tcp-ip@sri-nic.arpa, malis@BBNCCS.ARPA
Subject:   Re: rfc878 (1822L) vs. rfc852 (Short Blocking)
Don,

RFC 878 is correct.  Note that RFC 878 describes the actual
implementation of logical addressing in the IMP, while the short
blocking feature as described in RFC 852 was proposed but never
implemented.  If and when the short blocking feature is
implemented, the Type 9 subtypes beyond 5 that are listed in RFC
852 will be incremented by one (i.e., 6 -> 7 and so on).  
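
For illustration only, the numbering works out roughly as below (the constant names are invented; only the meanings and numbers come from the two RFCs):

    /* Imp-Host type 9 ("incomplete transmission") subtypes as discussed above.
     * The constant names are invented for this sketch.
     */
    #define IH_TYPE_INCOMPLETE        9
    #define IH_SUB_LOGICAL_ADDR_DOWN  6   /* RFC 878: logically addressed host went down */
    #define IH_SUB_CONN_SETUP_DELAY   7   /* RFC 852 subtype 6, bumped by one if short blocking is added */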

There are two arguments as to whether the new type 9 subtypes
should be returned for messages not using the short blocking
feature.  Many, such as yourself, would advocate always using the
new subtypes; but if the IMP does use the new subtypes for
non-short-blocking messages, then hosts that don't change their
network software could get confused by these new subtypes.

Of course, there are ways to select different behavior for
different hosts.  Those that always want the new subtypes could
set a bit in their NOPs or have the NOC include it in their
configuration information.  This would have to be decided at the
time that short blocking is implemented.  In any case, an updated
RFC would also be issued.

Regards,
Andy

-----------[000011][next][prev][last][first]----------------------------------------------------
Date:      21 Nov 1984 08:11-CST
From:      SAC.LONG@USC-ISIE.ARPA
To:        ailist-request@SRI-AI.ARPA
Cc:        Human-Nets-Request@RUTGERS.ARPA, Space-Request@MIT-MC.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Delete SAC.LONG@USC-ISIE from Mailing List

Please delete SAC.Long@USC-ISIE from the mailing list.  The
host administrator at ISIE has established an account <BBOARD>
which receives the mail from all of the various interest groups
on the ARPANet, provided there is a sufficient interest demonstrated
by the user community to read the list.  This reduces the load
on the network traffic and the host resources.  (Perhaps other
systems could do the same)?  The <BBOARD> acount allows all
users read-only priviledges, so I can use HERMES (or any other
mail facillity) to read the mail.  Thus, I am not saying "farewell",
but only "so long for now."

Thank you,

  --  Steve

-----------[000012][next][prev][last][first]----------------------------------------------------
Date:      Wed, 21 Nov 84 11:48:20 est
From:      Alan Parker <parker@nrl-css>
To:        tcp-ip@nic
Subject:   decnet -> tcp
Given a Vax running VMS that is on a local decnet and on the Internet 
(using Wollengong software) does a mail router exist that will pass
mail between the two networks?   I wouldn't think that would be very
difficult, but I would like to know about an existing one before forging
ahead.

-Alan
-----------[000013][next][prev][last][first]----------------------------------------------------
Date:      21 Nov 84 14:59:00 EST
From:      <charlie@ari-hq1>
To:        tcp-ip <tcp-ip@nic>
Cc:        twg@sri-iu,twg@ari-hq1, charlie
Subject:   Reply to Alan Parker's inquiry regarding Internet-DECNET relationship

Dear Alan,

	The Wollongong software that you are running allows your VMS DDN host 
to serve as a gateway between the Internet and your local DECNET.  The 
constraint is that all machines on your local net must have TCPIP in order 
to make use of the full range of network services via the gateway.  For mail 
service only, I do not think it necessary to have all machines running 
TCP/IP (Note:  The TCP/IP that the other machines need in order to 
utilize full network services does not have to be Wollongong's.  If the 
other machines are on Berkeley UNIX, they will already have TCP; if they 
are VAXs under VMS, then you will probably want to copy TCPIP onto each at a 
cost of around $1,500 per machine.)  Further information can be obtained from 
Jerry Scott of Wollongong (Note correct spelling is 'o' after the second 'l'),
who can be reached via network mail at TWG@ARI-HQ1 or TWG@SRI-IU.


				Charlie Abzug
				U.S. Army Research Institute

				202/274-8221  AUTOVON 284-8221

				Charlie@ARI-HQ1
------
-----------[000014][next][prev][last][first]----------------------------------------------------
Date:      Wed, 21 Nov 84 16:11:03 est
From:      God <root%bostonu.csnet@csnet-relay.arpa>
To:        parker@nrl-css.ARPA, tcp-ip@sri-nic.ARPA
Subject:   Re:  decnet -> tcp

	We are also running TWG's tcp/ip on VMS, 4.2bsd and
	DECNET links to decnet only VMS's and our 2060
	(which hopefully soon will have an NI20 to end some of
	this mess.)

	For a VMS mailer we have set up the Software Tools'
	mailer on all our VMS systems. It is designed to
	talk SMTP over most any link which you provide. It's
	ok, we had to fix a few little bugs and obviously
	add the channel interfaces but it is now acting as
	a gateway between itself on DECNET sites, MM on the 20
	and sendmail (which itself continues to be the real nuisance.)

	If I could be of any assistance don't hesitate to
	contact me.

		-Barry Shein, Boston University
-----------[000015][next][prev][last][first]----------------------------------------------------
Date:      22 Nov 1984 05:35-EST
From:      CERF@USC-ISI.ARPA
To:        root%bostonu.csnet@CSNET-RELAY.ARPA
Cc:        parker@NRL-CSS.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  decnet -> tcp
Barry,

thanks.  I have made an inquiry direct to Wollongong about their
software.  My interest stems from the commercial X.25 world, the
plans at CCITT for an X.400 Mail Handling standard and my MCI
Mail project.

What sort of performance have you been able to get from the
Wollongong TCP/IP?  Over what kinds of communications facilities?

thanks,

vint
-----------[000016][next][prev][last][first]----------------------------------------------------
Date:      26 Nov 1984 0419-PST (Monday)
From:      (Van Jacobson) van@lbl-csam
To:        CERF@USC-ISI.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   TWG TCP/IP Performance
I missed the original message of this discussion but thought I'd put in
my 2 cents worth anyway.  TWG is about to distribute some code that
lets you run their TCP/IP package over DECNET circuits.  I'm one of the
authors of that code.  After writing it, we did some performance
investigation that I keep forgetting to publish and this seemed like a
good opportunity to write it down.  The next few paragraphs talk about how
TWG TCP/IP performs over hardware it drives (NOT using the DECNET
bridge code).  The last two paragraphs talk about the DECNET bridge
performance (the discussion might also apply to the ACC X.25 bridge for
TWG TCP/IP since I've been told they based their stuff on our
dbridge).

The TWG TCP/IP software is almost straight Berkeley 4.2 Unix network code.
The system call mechanism changed (to a VMS QIO) but everything after the
system call, up to and including the network device drivers, is 4.2bsd.
Because of this, one would expect that the TWG TCP/IP performance would be
very similar to 4.2bsd TCP/IP performance.

We measured throughput between a Vax 780 & Vax 750 connected by a 10MB Pronet
ring.  One Vax ran a process that sent some fixed sized buffer 1000 times down
a TCP circuit.  The other Vax ran a process that received & discarded packets
on that circuit.  The sending process recorded the time between sends & the
total time to send 1000 buffers.  Both Vaxes & the ring were idle except for
these processes.  The 750 was running VMS 3.4 with the TWG TCP/IP.  The 780
was run with both 4.2BSD & VMS 3.4.  In all cases, the socket buffering was
2K bytes & the TCP Max Segment Size was 1K (these are the default values).
For 1K byte buffers the process-to-process throughput was

  4.2BSD      63.7 Kbyte/sec
  TWG TCP/IP  60.6 Kbyte/sec

(in a similar test between a 4.2bsd 780 & 4.2bsd 750 using Interlan
controllers over a 10MB Ethernet, Rob Gurwitz of BB&N reported 66.4
Kbyte/sec.  I've done a simple simulation that said the best throughput
we could hope for on the Pronet or an Interlan Ethernet was about 150
KByte/sec.  Although there's a possible factor-of-two improvement to
be had, the network is already faster than almost anything else around
and, for our purposes, probably fast enough).
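
A rough sketch of what the sender side of such a test looks like (4.2bsd-style C; the receiver host name and port are placeholders, and the per-send timestamps mentioned above are omitted):

    /* Minimal sketch of a throughput-test sender: send NBUF fixed-size
     * buffers down a TCP circuit and report the elapsed time.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>
    #include <unistd.h>

    #define BUFSIZE 1024            /* buffer handed to each write() */
    #define NBUF    1000            /* buffers per run */

    int main()
    {
        static char buf[BUFSIZE];
        struct sockaddr_in sin;
        struct hostent *hp;
        struct timeval t0, t1;
        long usec;
        int s, i;

        hp = gethostbyname("receiver-host");      /* placeholder host name */
        if (hp == 0)
            return 1;
        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(5000);               /* arbitrary test port */
        memcpy(&sin.sin_addr, hp->h_addr_list[0], hp->h_length);

        s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0 || connect(s, (struct sockaddr *)&sin, sizeof sin) < 0)
            return 1;

        gettimeofday(&t0, 0);
        for (i = 0; i < NBUF; i++)                /* blast NBUF buffers down the circuit */
            write(s, buf, sizeof buf);
        gettimeofday(&t1, 0);

        usec = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
        printf("%d bytes in %ld usec = %.1f Kbyte/sec\n",
               NBUF * BUFSIZE, usec,
               (NBUF * BUFSIZE) / 1024.0 / (usec / 1.0e6));
        return 0;
    }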

We varied the sender's buffer size from 128 bytes to 2048 bytes in 16 byte
increments then fit a least-squares line to the <buffer-size, time-to-send>
pairs.  The results were:

	      intercept   slope
		(ms)    (us/byte)
  4.2BSD        3.19      17.3
  TWG TCP/IP    3.06      19.5

The intercept is probably a fairly device-independent measure of the
per-packet CPU overhead.  The slope, however, is dominated by the
device characteristics.  To get an idea of the CPU time spent moving
data around, we ran the same sender & receiver processes in a single
780 processor through the "loopback" device.  The results were:

	      intercept   slope
		(ms)    (us/byte)
  4.2BSD        10.7       5.3
  TWG TCP/IP    10.3       5.9

(Since the above numbers include both the sender & receiver, they
should be divided by 2 to compare with the previous result).  The
intercept now includes the context-switch times between the sender &
receiver so it shouldn't be taken too seriously.  Normally, the socket
buffering would be circumvented by running both processes in the same
processor & the slope would be pretty meaningless.  The TCP "silly
window" code prevents that from happening here and half the slope is
probably a good estimate of the CPU time spent moving user data
around.

The bottom line of the preceding is: an upper bound on shipping 1K
buffers via TCP on a Vax 780 is 5.9ms of CPU time per buffer under
4.2bsd and 6.0ms/buffer under VMS/TWG.
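
(Spelled out, that bound works out as the per-packet overhead from the network
test plus half the loopback slope for the data copies; for 4.2bsd:

    3.19 ms + (5.3/2) us/byte x 1024 bytes  =  3.19 ms + 2.71 ms  =~  5.9 ms

and similarly for VMS/TWG.)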

Now, finally, to the DECNET bridge performance:  the bridge uses a
pseudo-driver in the network kernel to capture outbound IP packets and
write them on a special "raw" socket.  The pseudo-driver also takes
anything written on this raw socket and injects it into the network
kernel as an IP packet.  The raw socket is bound to a normal, user-mode
process that reads packets from the socket & writes them to a DECNET
virtual circuit and vice-versa.  This situation is almost identical to
the "loopback" test described above and I expected the performance to
be about the same, i.e., 16ms/buffer for 1K buffers.  What I measured
was 14.6ms/buffer (the difference was probably due to the simple
"protocol" between the bridge process & the raw socket).  In other
words, you could pump 68 Kbyte/sec through the bridge if DECNET used
zero CPU time.  In practice, DECNET uses at least an equivalent amount
of CPU time (it's currently our performance bottleneck but we haven't
invested much effort in tuning it).  We hope to put about 30 Kbyte/sec
through the bridge (for DECNET over an Ethernet via a DEUNA) when we're
done.  Our current numbers are too embarrassing to quote.
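
The user-mode half of the bridge is basically a copy loop between the raw socket and the DECNET circuit, roughly like the sketch below (the descriptor names are placeholders, and the real code does its DECNET I/O with VMS QIOs rather than read/write):

    /* Sketch of the bridge's user-mode copy loop.  "raw_fd" is the raw
     * socket bound to the pseudo-driver; "dn_fd" stands in for the DECNET
     * virtual circuit.
     */
    #include <sys/types.h>
    #include <sys/select.h>
    #include <unistd.h>

    #define MAXPKT 2048

    void bridge_loop(int raw_fd, int dn_fd)
    {
        char pkt[MAXPKT];
        fd_set rd;
        int nfds = (raw_fd > dn_fd ? raw_fd : dn_fd) + 1;
        int n;

        for (;;) {
            FD_ZERO(&rd);
            FD_SET(raw_fd, &rd);
            FD_SET(dn_fd, &rd);
            if (select(nfds, &rd, 0, 0, 0) < 0)
                continue;

            if (FD_ISSET(raw_fd, &rd)) {      /* outbound IP packet -> DECNET VC */
                n = read(raw_fd, pkt, sizeof pkt);
                if (n > 0)
                    write(dn_fd, pkt, n);
            }
            if (FD_ISSET(dn_fd, &rd)) {       /* packet from DECNET VC -> inject as IP */
                n = read(dn_fd, pkt, sizeof pkt);
                if (n > 0)
                    write(raw_fd, pkt, n);
            }
        }
    }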

While the bridge code should do well over high speed links, it's got a
problem at low speed.  Usually the bridge code is run over low speed
(56 or 200 Kbit/sec) point-to-point DMR-11 connections.  Since the
IP-driver & driver-bridge communication is via datagrams, the system
behaves like a high-speed subnet connected to a low-speed subnet.  If
there are several active TCP circuits, you overrun the buffering
available in the outbound driver-bridge path and start to drop
packets.  We've observed a factor of two decrease in aggregate
throughput with as few as 4 active TCP circuits (over a 56Kbit line).
Normally, I think, the subnet congestion control mechanism deals with
this problem but it's sort of hard to implement one here.  Since the
newest 4.2 code responds to ICMP quench packets, my thought is to port
that code to the TWG system then have the driver ship a quench back to
IP if it ever has to drop a packet.  I'd be grateful for any
ideas/opinions/thoughts on this problem and/or solution.  Thanks.
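
To put rough numbers on the mismatch described above (using the 14.6 ms/buffer
figure and a 56 Kbit line):

    bridge path:     one 1 Kbyte buffer per 14.6 ms  =~ 68 Kbyte/sec offered
    56 Kbit DMR-11:  56 Kbit/sec over 8 bits/byte    =~  7 Kbyte/sec drained

so a few active circuits can easily outrun the buffering in the outbound
driver-bridge path.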

My sympathy to anyone who read this far.  Sorry this got so long.

-Van Jacobson (van@lbl-csam.arpa)
-----------[000017][next][prev][last][first]----------------------------------------------------
Date:      26 Nov 1984 16:32:38 PST
From:      POSTEL@USC-ISIF.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Sorting Addresses

Some hosts have multiple addresses listed in the HOSTS.TXT data base.
For most of the cases the listed addresses are equally useable, but in
a few cases there is a preferred order to the addresses.  This may be
due to the reliability of some network or interface or gateway, or due
to administrative controls.  For example, the BBN-VAN-GW is not allowed
to create X.25 virtual circuits in the PDN that it will pay for.  In the
HOSTS.TXT data base the addresses are listed in their preferred order of
use.  It is not appropriate to sort these addresses into some other
order.
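
For what it's worth, software that honors the listed order but still recovers when the first address is unreachable looks roughly like the sketch below (gethostbyname() here stands in for however a given host reads HOSTS.TXT; the routine and its use are illustrative only):

    /* Minimal sketch: try a host's addresses in the order the lookup returns
     * them (i.e. the order listed in the table), falling back to the next
     * address only when a connect fails.
     */
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>
    #include <unistd.h>

    int connect_in_listed_order(char *name, unsigned short port)
    {
        struct hostent *hp = gethostbyname(name);
        struct sockaddr_in sin;
        int i, s;

        if (hp == 0)
            return -1;
        for (i = 0; hp->h_addr_list[i] != 0; i++) {   /* do NOT re-sort the list */
            s = socket(AF_INET, SOCK_STREAM, 0);
            if (s < 0)
                return -1;
            memset(&sin, 0, sizeof sin);
            sin.sin_family = AF_INET;
            sin.sin_port = htons(port);
            memcpy(&sin.sin_addr, hp->h_addr_list[i], hp->h_length);
            if (connect(s, (struct sockaddr *)&sin, sizeof sin) == 0)
                return s;                             /* first listed address that answers */
            close(s);                                 /* unreachable: try the next one */
        }
        return -1;
    }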

--jon.
-------
-----------[000018][next][prev][last][first]----------------------------------------------------
Date:      Mon, 26 Nov 84 20:22:42 EST
From:      Jonathan Dreyer <jdreyer@BBNCCM.ARPA>
To:        POSTEL@usc-isif.arpa
Cc:        tcp-ip@sri-nic.arpa
Subject:   Re: Sorting Addresses
I assume that it IS appropriate to sort addresses in specific
cases if you "know better," e.g. if you wish to show preference
for addresses on your own local net.


-----------[000019][next][prev][last][first]----------------------------------------------------
Date:      27 Nov 1984 1026 PST
From:      Ron Tencati <TENCATI@JPL-VLSI.ARPA>
To:        tcp-ip@sri-nic
Subject:   Re: Sorting Addresses

If a particular host cannot be reached at the first address encountered in
HOSTS.TXT, is there any way to get the software to "keep searching" the list
and try another address?  Often, the address listed first is not reachable,
and unless someone detects that fact and reorders the list, no connections
can be made...?

Thanks,

Ron
------
-----------[000020][next][prev][last][first]----------------------------------------------------
Date:      27 Nov 1984 16:58:53 EST (Tuesday)
From:      Chris Perry <m12023@mitre>
To:        TCP-ip@sri-nic
Subject:   Help with ULTRIX Switchover
Folks,

MITRE (Washington, DC) is switching to an 11/750 running ULTRIX.
I'd appreciate hearing from other installations running
TCP/IP under ULTRIX.  Thanks.

Chris Perry
(CPERRY @ MITRE)

-----------[000021][next][prev][last][first]----------------------------------------------------
Date:      Thu, 29 Nov 84 12:43:18 GMT
From:      Robert Cole <robert@ucl-cs.arpa>
To:        tcp-ip@sri-nic.arpa
Cc:        service@ucl-cs.arpa
Subject:   A multi-homed mail host warning
Following a recent note from Jon Postel we have decided to try (again)
to use two addresses for our mail host at UCL.
We currently use both addresses for sending mail, and some people have
asked us to put the second address in the NIC tables.

Due to some international tariff problems it is not possible to call UCL
using the second address we have put in the table, but we can call you.
If you find you are getting a backlog of mail for UCL-CS then please
examine your SMTP address tables. If you need further help, and you
cannot reach us directly, send a message to UKSAT@ISID.

The new entry will be:
HOST : 128.16.9.3, 14.0.0.9 : UCL-CS.ARPA,UCL-CS,UCL :: LOGICAL-HOST : IP,TCP/SMTP :

Robert Cole.


END OF DOCUMENT