The 'Security Digest' Archives (TM)


ARCHIVE: TCP-IP Distribution List - Archives (1983)
DOCUMENT: TCP-IP Distribution List for July 1983 (11 messages, 4234 bytes)


Date:      6 Jul 1983 10:53:51 EDT (Wednesday)
From:      Dennis Rockwell <[email protected]>
To:        [email protected], [email protected], [email protected]
Cc:        [email protected]
Subject:   retransmit overtaking persist bug
There is a bug in the BBN TCP timer code which causes connections
with large delays to hang.  The symptom is that the sender will
continually send single-octet packets which are one octet past
the receiver's advertised window.  The cause is that the persist
timer (used for probing closed windows) is fixed, while the
retransmit timer is adaptive (variable).  When the persist timer
goes off, it resets the retransmit timer.  Thus, when the retransmit
timer exceeds the persist timer, you hang.

The fix is to replace the token T_PERS in tcp_procs.c (about line 250)
with tp->t_xmtime*2.  This is the only instance of T_PERS except for
its definition (which you can delete if you wish).  This guarantees
that the persist timer is always greater than the retransmit timer.

If you know of any system running the BBN software that doesn't receive
one of these mailing lists, please inform either them or me.

Sorry to send this out to such a wide audience, but this bug will
bite more systems as the Internet grows.

Date:      8 Jul 1983 1515-PDT
From:      CHASE at USC-ISIB
To:        tcp-ip at SRI-NIC
Subject:   Re: Possible ISIB TCP bug
The problem reported by KLH with Ftp between ISIB and MIT-MC has been fixed.
The bug was in the Tops20 monitor code.  Basically, Tops20 couldn't send a Fin
at the time the Ftp process did its Close% because there was still data queued
from a previous Send%, and by the time the data went out, the check for
needing to send a Fin was missed.  Much thanks to Ken for his accurate error
reports and assistance in tracking this down.

The fix inserts a new check after PKZ23A.  If the send side of the connection is
still open but the user has done a Close%, ENCPKT is called to "encourage" a
packet later, when the Fin can be sent.

	LOAD T1,TSSYN,(TCB)	; Get send state ;;;LDB T1,[110314,,13]
	CAIE T1,SYNCED		; Connection synchronized? ;;;CAIE T1,7
	 JRST PKZ24B		; No.  No FIN can be sent.
	JN TSUOP,(TCB),PKZ24B	; Jump if connection still OPEN by user
				;  ;;;MOVE CX,13(14)
				;  ;;;TRNE CX,400 
				;   ;;;JRST PKZ24B
	MOVEI T1,^D200		; Try to send FIN later, we must have been
	CALL ENCPKT		;  unable to send it this time through
				;  (ie, due to presence of q'd snd data)

While putting this problem to rest, it would be appropriate to put to rest
some misconceptions that came out of the discussion of this problem.
The TCPSIM package from BBN running at ISI does not just abort data
connections in place of trying to close them properly.  A Close% is done,
and only after the Close% fails to take effect after a timeout period is an
Abort% done to clean things up.  I'm sure that the above bug caused it to
appear to certain sites that only the abort was done.  But although the
package does have its shortcomings (the case in question is an example of its
skimpy error reporting), it does the best that can be done in this case.

The characterization of Tops20's TCP implementation as record-oriented is not
quite accurate.  A user program can send one byte, two, ten or a whole page
worth, without any kind of record or segment considerations.  The monitor will
buffer these bytes until there are enough of them for efficient transmission,
or until the user program does a push.  The real fault with the user interface
is that it requires a different set of monitor calls instead of the Bin/Bout
flavor, and that these calls are very clumsy to use.  Now, however, the just
released DEC user interface will hopefully restore consistency and simplicity
to network i/o, and remove the need for simulation packages altogether.

<>Dale Chase

Date:      Mon, 11 Jul 83 15:59:58 PDT
From:      Rich Wales <[email protected]>
To:        [email protected]
Subject:   XNS for 4.1BSD UNIX?
Has anyone implemented the Xerox Network Systems (XNS) protocol in
Berkeley UNIX (4.1BSD)?

-- Rich Wales <[email protected]>

Date:      Monday, 11 Jul 1983 17:14-PDT
From:      [email protected]
To:        Rich Wales <[email protected]>
Cc:        [email protected]
Subject:   Re: XNS for 4.1BSD UNIX?
Yes, contact Network Research Corp (213)474-7717.

Just fyi, I can't vouch for them or their implementation(s).

-- Jim

Date:      26 Jul 1983 09:45:44-EST
From:      Paul McNabb <[email protected]>
To:        [email protected]
Subject:   TCP/IP for VMS
I am looking for an implementation of TCP/IP under VMS.  Any leads
would be appreciated.

Thanks in advance.
Paul McNabb
([email protected])

Date:      26 July 1983 17:55 edt
From:      Vinograd.Multics at MIT-MULTICS
To:        pam at PURDUE
Cc:        tcp-ip at SRI-NIC
Subject:   Re: TCP/IP for VMS
See TCP-IP digest Vol 2: Issue 12 for a complete desc of same being done
at UWISC. It includes IBM and UWISC contact names. DACU box is available
from IBM 60/120 days ARO and costs about $15K.

Date:      27 Jul 1983 0835-PDT
From:      Eric P. Scott <EPS at JPL-VAX>
To:        TCP-IP at SRI-NIC
Subject:   TCP/IP for Data General MV/4000, MV/8000, or MV/10000
If you know of any implementations for these machines, please
reply to the list.  Thanks in advance.

Date:      27 Jul 1983 0903-PDT
From:      Francine Perillo <PERILLO at SRI-NIC>
To:        pam at PURDUE, tcp-ip
Cc:        PERILLO
Subject:   Re: TCP/IP for VMS
There is yet another VMS TCP/IP implementation, written by some people
at Tektronix.  The code is not completely Internet-compatible; for
example, they have not implemented gateway or internet control protocols.
It was designed with Tektronix' internal needs in mind but it is
available upon request.  Contact Tim Fallon or Stan Smith in Beaverton,
Oregon at (503) 627-5347 or try [email protected] or [email protected].

-Francine  /NIC

Date:      Wednesday, 27 Jul 1983 09:35-PDT
From:      Chris Kent <decwrl!kent%[email protected]>
To:        Shasta!"[email protected]"
Subject:   Re:  Re:  TCP/IP for VMS
Kashtan's stuff works and seems to be available from the Wollongong
Group. It's full 4.1c networking code.

The people at Rice that did the Phoenix Unix under VMS emulator are
also reported to have the Berkeley TCP/IP running under their system,
but I don't know details.


(Hope the header isn't too munged -- someone along the way is hacking
mailers this week. Reply to [email protected] if you must.)

Date:      Wed, 27 Jul 83 11:08:56 EDT
From:      Mike Muuss <[email protected]>
To:        [email protected]
Cc:        [email protected]
Subject:   Re:  TCP/IP for VMS
I've heard of 2:

*) Compion (aka DTI) has a product called ACCESS/T.  Netmail a request
   to <[email protected]>.

*) Kashtan has ported the Berkeley UNIX TCP/IP to VMS.  I don't know
   the availability.  Try <[email protected]> or thereabouts.

Date:      28 July 1983 09:59 edt
From:      DClark.INP at MIT-MULTICS
To:        EPS at JPL-VAX
Cc:        TCP-IP at SRI-NIC
Subject:   Re: TCP/IP for Data General MV/4000, MV/8000, or MV/10000
    The Data General division in Research Triangle Park, N.C., has
recently taken on the task of doing TCP/IP for some DG machines.
Unfortunately, I cannot remember which ones.  Nor do I remember the name
of the project manager of that effort. But the DG RTP office is small,
and you could get a long way just calling, I expect. If that does not
work, I could reconstruct the name with some effort; let me know.
    Dave Clark