The 'Security Digest' Archives (TM)

Archive: About | Browse | Search | Contributions | Feedback
Site: Help | Index | Search | Contact | Notices | Changes

ARCHIVE: Rutgers 'Security List' Archives (1988)
DOCUMENT: Rutgers 'Security List' for April 1988 (23 messages, 13966 bytes)
NOTICE: This archive recognises the rights of all third-party works.


From:      Derek Andrew <sask!>  2-APR-1988 08:04
To:        watmath!
One problem with using passwords is their vulnerability to an eavesdropping
attack.  One solution is to implement a challenge and response algorithm as
described here.

1.  Let P(day) represent a secret permutation of the alphabet, unique every
    day.  e.g. P(today) = zfhkgjavitclbumxpwdonyrseq

2.  Let C(T,N) be a function which generates a string of N unique letters.
    T is the time of day (number of seconds since midnight).
    e.g. C(12:00:00,5) = jrmxo

3.  Let M(P(day),C(T,N)) be a mapping function such that every letter in
    C(T,N) is replaced by the letter following it in P(day). This mapping
    is simple and can be applied by visual inspection.
    e.g. M(zfhkgjavitclbumxpwdonyrseq,jrmxo) = asxpn

The permutation of the alphabet of the day P(today) is printed, then, when
the user is logging in, a challenge is issued using the C(T,N) function and
the user's response is compared with the result of M(P(day),C(T,N)).  If
they match, the user is allowed access.
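The scheme above can be sketched in a few lines of Python. This is a hypothetical illustration: the post does not specify how P(day) or C(T,N) are generated, so a seeded shuffle and a time-seeded sample stand in for them, and wrap-around from the last letter of P(day) back to the first is assumed.

```python
import random
import string

def p_of_day(day_seed: int) -> str:
    """P(day): a secret permutation of the alphabet, fixed for the day.
    (Deriving it from a daily seed is an assumption of this sketch.)"""
    rng = random.Random(day_seed)
    letters = list(string.ascii_lowercase)
    rng.shuffle(letters)
    return "".join(letters)

def challenge(t_seconds: int, n: int) -> str:
    """C(T, N): a string of N distinct letters derived from the time of
    day.  The post leaves C unspecified; a T-seeded sample stands in."""
    return "".join(random.Random(t_seconds).sample(string.ascii_lowercase, n))

def mapping(p_day: str, c: str) -> str:
    """M(P(day), C): replace each challenge letter with the letter that
    follows it in P(day), wrapping from the last letter to the first."""
    return "".join(p_day[(p_day.index(ch) + 1) % 26] for ch in c)

# Reproduce the worked example from the post:
print(mapping("zfhkgjavitclbumxpwdonyrseq", "jrmxo"))  # -> asxpn
```

At login the host would issue challenge(now, N) and admit the user only if the typed response equals mapping(p_of_day(today), that challenge).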

How does one evaluate the security of this system?  What are the possible
attacks assuming 100% collection of the wiretap data?  How does one choose
a suitable value for N?  If N = 1, the attacker has a 1-in-26 chance of getting in,
but if N = 26, the attacker can derive P(today) after one observation.

Does the C(T,N) function need to be secret or is it alright to allow the
attacker to anticipate the challenge?  I suggest using T as a parameter
to eliminate the problem of the same challenge being issued twice on the
same day (thus with the same P(day)).
From:  7-APR-1988 12:21
The best source for information on security in any of the OSI supported
services is the OSI Implementors Workshops held approximately quarterly
at the NBS in Gaithersburg, Maryland.

There are currently two documents available, both of which provide some
information on X.400 and security.  They are the "Stable Implementation
Agreements" and the Working Agreements.  The stable agreements are just
that: stable; implementors are expected to be working towards those ends.

Additionally, the IEEE X.400 group, particularly the British
representatives from the British PTT and BCI, Ltd., have been working
directly on the X.400 security issues.

NBS Special Publications are available from the U.S. Government
Printing Office.

Doug Hunt
From:      Luke Ward <LUKEW%VTVM2.BITNET@CUNYVM.CUNY.EDU>  7-APR-1988 14:21
To:        SECURITY Digest <>
Get hold of a copy of the new 1988 draft X.400 and X.500 (directory)
standards.  In the US we order these from ANSI (New York) or a company
called Omnicom (Vienna, VA) - not sure what the Italian source is.

In particular you probably want to take a look at the following:

 CCITT / ISO          ISO Title

 F.400 / DIS 8505-1   Message Handling: System and Service Overview
                      (see esp clause 15: Security Capabilities of MHS)
 X.402 / DIS 8505-2   Information Processing Systems - Text Communication
                      - MOTIS - Overall Architecture
                      (see esp clauses 10: Security Model, 21: Authent-
                      ication, and annexes D: Security Threats, and
                      E: Provision of Security Services in X.411 / ISO 8883-1)
 X.509 / DP 9594-8    Information Processing Systems - Open Systems Inter-
                      connection - The Directory - Part 8: Authentication

Luke Ward
Data Administration, Virginia Tech
From:  8-APR-1988 08:17
I'd like to retract that statement.  We recently received a mandatory
security patch from DEC for many versions of VMS.  I suspect that
there is more going on here than we know about.

Tony Li - USC University Computing Services	"Fene mele kiki bobo"
Uucp: oberon!tli						-- Joe Isuzu
Bitnet: tli@uscvaxq, tli@ramoth
From:      John Pershing <>  13-APR-1988 09:17
In general, before trying to transport _any_ machine-readable media
across _any_ international boundaries, you should consult your friendly,
local, neighborhood lawyer.  Even if the U.S. export laws impose no
restrictions on taking various software or data out of this country, the
other country's export laws may prohibit you from bringing it back!

From:      the terminal of Geoff Goodfellow <>  18-APR-1988 02:36
To:        hackers_guild@ucbvax.Berkeley.EDU
NYT NEW YORK: the intrusions.

    According to the Lawrence Berkeley officials, the yearlong
investigation involved the FBI and security experts from the Air Force
and the Army, as well as private security investigators. Under West
German law, not enough evidence was obtained for prosecution, the
Lawrence Berkeley officials said.
    According to Stoll, the West German compromised the military computers
by taking advantage of security loopholes in several different
operating systems, the software programs that manage data in a
computer. On computers operating under the Unix system, he frequently
used a loophole to give himself ''superuser'' status, which allowed him
to read and alter all material stored in the computer.
    The intrusions involved a variety of U.S. military computer systems in
this country, Europe, and Japan. The Lawrence Berkeley Laboratory
became a starting point for connecting to two unclassified military
networks, known as Milnet and Arpanet. They link computers at military
bases and military contractors.
    At one computer at the Naval Coastal Systems Command, in Panama City,
Fla., the intruder transferred to a computer in West Germany an
encrypted file containing user passwords. The intruder broke some of
the codes and called back to search through files protected by the
passwords. The intruder also gained access to computers at the Army's
Fort Buckner base in Japan and at the Anniston Army Depot, a supply
base for the Army's Redstone Arsenal, in Huntsville, Ala.
    At the Air Force Systems Command, in El Segundo, Calif., the intruder
managed to attain the status of system manager. ''I watched as he
scanned all of their SDI references and the usual pile of things and
then started printing out information on the space shuttle,'' said
Stoll. ''The Air Force later told me it was not classified information.''
    Other systems entered included military computers in San Diego, the
Pentagon's Optimus data base, and a computer at NASA's Jet Propulsion
Laboratory, in Pasadena, Calif.
    The officials at the Lawrence Berkeley Laboratory said that they
monitored attempted intrusions into a total of 450 military computers.
    ''Basically, he was walking down the street twisting the doorknob of
each house,'' Stoll said. ''He wouldn't push hard, but then he would go
around and do the electronic equivalent of trying the back door and the
side windows. If they didn't budge, he would go to the next house on
the street.''
    Shortly after discovering the intrusions, Stoll, aided first by City
of Berkeley officials and later by federal law-enforcement officers,
began trying to trace their origin. They were traced to a computer at a
U.S. military contractor in McLean, Va., near Washington. The Lawrence
Berkeley officials declined to identify the company.
    They then discovered that the intruder was dialing from Hanover to a
university computer in Bremen, West Germany. That computer was used to
connect to machines in the United States.
    The intruder's location was masked by dialing into the military
contractor's computer in Virginia and then using that computer's
capability to call other computers around the country, including those
at Lawrence Berkeley. The Lawrence Berkeley computer was used to
connect to the military networks - Arpanet and Milnet - to gain access
to the military installations.
    In tracing the intruder, the security investigators created an
automatic alarm system. Stoll wrote a computer program that would dial
his pager whenever the West German gained access to the computer at
Lawrence Berkeley. The pager automatically called a security official
from the Tymnet McDonnell-Douglas Network Systems Co., a computer
network company based in San Jose, Calif. The Tymnet official then
notified West German law enforcement officials.
    But the investigators traced the calls back to Hanover, where it took
as long as 30 minutes to set up a trace because of antiquated
equipment. The intruder's calls generally lasted no longer than five minutes.
    In January of 1987, the security managers at Lawrence Berkeley created
an electronic sting operation using a large file of fictitious,
seemingly secret information. The file contained a reference to an
address at the Berkeley laboratory where further information related to
the Strategic Defense Initiative could be obtained.
    Once the file was discovered, the intruder remained connected to the
Lawrence Berkeley computer for more than an hour. Three months later,
according to the Lawrence Berkeley officials, a letter was mailed from
a United States citizen living in the Northeast to the address given by
the lab, inquiring about the false SDI information.
    The letter was given to the FBI.
From:  18-APR-1988 22:28
I'm purchasing a machine that reads and writes the mag strips on
credit cards.

Question:  Does anybody know if credit cards are high field or low
field magnetic systems?

From:      "Dennis G. Rears (FSAC)" <drears@ARDEC.ARPA>  22-APR-1988 13:19
>     I want to ask a question about computer security. Are there
>any legal obligations (British or American) on a computer user
>who finds a major flaw within a popular OS?

   No legal obligations at all unless there is something explicitly
in the licensing agreement.  Even if there is, it would be hard to
prove that customer A had knowledge of the bug.

>individual bring a bug to the attention of a computer manufacturer
>given that site computing personnel take a dim view of anybody who
>finds these bugs?

    Notify the vendor.  If you get no response, send out the bug to
everybody you can.  Since you had previously notified the vendor and
it did nothing, it could be held liable for any damages caused by
the bug.

From:      tencati@VLSI.JPL.NASA.GOV  22-APR-1988 19:37
My opinion is as follows:

The computer user has NO legal obligation to inform other users or the 
manufacturer of a newly discovered bug in the operating system.  However
if knowledge of the bug is used to exploit OTHER people's operating
systems, then the user has committed a crime.  If that machine is
located in another U.S. state, or is a U.S. Government-owned machine,
then the user has committed a felony.

If the user wants to inform the computer manufacturer, he/she should call
the general offices and tell the secretary that they wish to discuss
a newly-found security bug with the appropriate person(s) within their 
company.  They will be routed through a maze of phone numbers and will 
eventually come upon the correct ears.

Ron Tencati
Jet Propulsion Laboratory
Pasadena, Ca.  USA
From:      Christopher Seline <SCS7317@OBERLIN>  24-APR-1988 03:57
I stopped using ATMs a few years ago when I discovered an interesting bug.
If there was an error giving out the cash, the machine indicated an error
on the screen but STILL debited your account or credit card....

This bug has (theoretically) been repaired.

From:      "Curtis C. Galloway" <>  24-APR-1988 16:06
From the Pittsburgh Post-Gazette, April 24.
Used without permission.

by Roger Stuart


A South Park man who was stung seeking bogus computer-stored
information about U.S. military secrets has a long history of
mysterious associations, ranging from foreign intrigue to local

As with past incidents, authorities don't know -- or won't say --
what Laszlo J. Balogh was up to this time when his name surfaced in a
sting that caught a West German computer hacker who repeatedly gained
access to classified military files.

As with past exploits, Balogh, 43, emerged again as part-clever and

Although he has claimed extensive foreign government contacts and
driven expensive foreign cars, he once testified that he had
difficulty recording an undercover conversation for the FBI because
the recorder kept slipping beneath his sweat suit.

In the past, Balogh has billed himself as a Hungarian refugee; a
draftsman; a credit corporation employee; a trucking company owner; a
diamond dealer; a world traveler; a bodyguard for Kuwaiti princesses;
a CIA hit man; and an FBI informant.

But longtime neighbors on Ventura Drive said they had no clear
picture of Balogh's activities because he is "quiet," "keeps to
himself" and is "often gone for weeks at a time."

...Balogh in 1978 was an officer in a now-defunct company when
another company official was accused of giving Penn Hills officials a
forged check drawn on a non-existent bank.  The check was to be used
as security in an unsuccessful effort to obtain a garbage-hauling
contract.  ... Balogh also was involved in a Pittsburgh trucking firm
that filed for bankruptcy in 1980.

His name surfaced again last week in connection with Marcus Hess,
identified by The San Francisco Examiner as the West German computer
student who broke access codes to snoop into U.S. military files a
half-world away in Berkeley, Calif.

Earlier, a West German weekly magazine, Quick, identified the
computer intruder as Mathias Speer, 24.  Clifford Stoll, a researcher
at the Berkeley Laboratory and Leroy Kerth, a Lawrence Berkeley
Laboratory director who oversaw the investigation, said that name may
have been a pseudonym.

In this case, Balogh, in what investigators believe was an attempt to
get more information about confidential military files, took the bait
investigators dangled in the hopes of learning who was gaining
illegal access to the computer system.

Having discovered that an intruder had been reading their computer
records, officials at the U. S. Department of Energy's Lawrence
Berkeley Laboratory planted a fictitious file to bait the hacker's curiosity.

The purpose was to keep the hacker on the line long enough for
authorities to trace his phone call.  The hacker tapped into the
computer using a telephone and computer modem.  In the event that the
call couldn't be traced, authorities also included in the fictitious
file an address for the snooper to write for additional information.

Berkeley officials thought they had solved their security problem in
January 1987, when West German officials were able to trace the phone
call to a computer student in Hanover.

They were surprised four months later when they received a letter
from Balogh, who requested the information offered in the fictitious file.

...Although caught, the West German student has not been charged with
any crime.  The extent of Balogh's involvement has not been revealed.

The FBI isn't saying what, if anything, it knows about Balogh, who in
1983 served them as an informant and government witness.

[More about Balogh's involvement in schemes to steal $38,000 in
diamonds, secure garbage-hauling contracts with a phony check, and
steal computer equipment to sell to the Soviets.]

Curt Galloway
UUCP: ...!{seismo, ucbvax, harvard}!!cg13+
Drop In Any Mailbox, Return Postage Guaranteed
From:      Fred Blonder <>  24-APR-1988 19:08
I don't think the concept of a virus applies to ATMs. They're not
constantly being exposed to random unverified software. Now if one of
the bank or ATM companies' programs tries something funny, you might
be in trouble, but I hope I am not being too optimistic in assuming
that such software gets scrutinized thoroughly before it gets used. The
main point is that ATMs are not general-purpose computers, and don't
need an operating system, so all the standard hooks that a virus would
use to gain a toehold just do not exist.

	the ATM is instantaneous.....this is weird, either the PIN is
	stored on the card (read: very stupid) or the machine just
	ignores the PIN of credit cards.....

Some ATM systems store the PIN encrypted on the card, just like Unix
passwords. If the encryption algorithm is truly one-way, this is safer,
but still doesn't protect against a forged card if you know the
encryption algorithm.

Once when I entered my PIN incorrectly, I was surprised when the
machine accepted it and went to the next stage of the menu, but when I
finally did something that required the machine to contact the host
computer, it barfed and made me type the PIN again. This is an odd
system, but at least it seems secure.
					Fred Blonder
From:      Eric McIntosh <MCINTOSH%CERNVM.BITNET@CUNYVM.CUNY.EDU>  25-APR-1988 04:20
Just for information I would like to describe briefly here our
proposed solution to the vulnerability of passwords.

At CERN we are installing Security Dynamics software on our CRAY
UNICOS system. This software requires that every authorised user
be issued with a SecurId card which displays a password which
changes every 30 seconds and is, I guess, the ultimate in
dynamic passwords. In addition, every user must have a PIN
so that possession of the card alone is insufficient. When a
user attempts to login he is prompted for his PIN and current
passcode instead of his UNICOS password. We believe this
system to be very difficult to break even with continuous
monitoring of lines or networks. It makes life a little more
difficult for the user but a lot more difficult for crackers.

We have also extended this system to cover batch job submission,
to prevent tampering with batch jobs, and to verify user
authorisation even when jobs are stored and forwarded and
possibly monitored by an unauthorised person.
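The 30-second passcode idea can be illustrated with a short Python sketch. This is only an analogue: the actual SecurId algorithm is proprietary, so an HMAC over the current time window stands in for it, and the function names here are invented for illustration.

```python
import hashlib
import hmac
import struct
import time

def passcode(secret: bytes, t=None, interval: int = 30, digits: int = 6) -> str:
    """What the card displays: a short code derived from a per-card
    secret and the current 30-second time window.  (HMAC-SHA256 is a
    modern stand-in; the real SecurId algorithm differs.)"""
    if t is None:
        t = time.time()
    window = int(t) // interval           # changes every `interval` seconds
    mac = hmac.new(secret, struct.pack(">Q", window), hashlib.sha256).digest()
    return f"{int.from_bytes(mac[:4], 'big') % 10**digits:0{digits}d}"

def host_verify(secret: bytes, pin: str, expected_pin: str,
                response: str, t=None) -> bool:
    """Host-side check: the user must supply both the memorized PIN and
    the passcode currently showing on the card, so a wiretapped passcode
    is useless 30 seconds later and a stolen card is useless without the PIN."""
    return pin == expected_pin and response == passcode(secret, t)
```

Two logins inside the same window yield the same code; a replayed code from an earlier window fails.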
From:      "Mark D. Gabriele" <Gabriele@DOCKMASTER.ARPA>  25-APR-1988 08:19
I am aware of several packages which have been evaluated by the
National Computer Security Center for securing the MVS operating
system: RACF version 1 release 5 (C1); Top Secret Version 3.0 (C2);
ACF2/MVS releases 3.1 through 4.0 (C2).  A C2 system is more
secure than a C1 system.  Only one VM security package has been
evaluated for VM, and that is ACF2/VM, which got a C2 rating.

I have never used MVS, but I have worked with ACF2/VM and I found it
to be an extremely flexible and usable system.  It supports an enormous
degree of custom-tailoring if your site needs it; if you don't, then
you can run it as it comes right out of the box.  I have never worked
on systems running RACF/VM or VMSecure.


note that I don't act as a spokesman for whoever it is that I work for.
From:      mason@EDDIE.MIT.EDU  25-APR-1988 09:39
The PIN you key in is locally (to the ATM) run through a one-way
encryption algorithm with the account # (stored on the card) and
compared with the same thing (again stored on the card).
This means if you know the format of the data written on the card
(ANSI X9.1) and the encryption stuff (ANSI X9.8) you can write
your own cards. Also, in many remote areas where banks feel there's
not as much need for security, if the ATM cannot talk to the host it
will perform transactions and just journal them. Cut the phone lines
and voila! The two-year-old expired card works again.
I don't keep any more money in an account attached to a card
than I can afford to lose.
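The card-local verification described above can be sketched as follows. This is a hypothetical illustration: SHA-256 stands in for the DES-based PIN functions of ANSI X9.8, and the field names are invented. The forgery risk follows directly from the sketch, since everything the check needs is readable (and writable) on the card.

```python
import hashlib

def pin_offset(pin: str, account: str) -> str:
    """One-way function of the PIN and the account number.  (Real
    schemes use DES per ANSI X9.8; SHA-256 is a stand-in here.)"""
    return hashlib.sha256((pin + account).encode()).hexdigest()[:8]

def card_record(account: str, pin: str) -> dict:
    """What the mag stripe stores, per the description above: the
    account number in the clear plus the one-way offset of the PIN --
    never the PIN itself."""
    return {"account": account, "offset": pin_offset(pin, account)}

def atm_verify(card: dict, entered_pin: str) -> bool:
    """Local (offline) check: recompute the offset from the keyed-in
    PIN and the card's account number, compare with the stored value.
    No call to the host is needed -- which is exactly the weakness."""
    return pin_offset(entered_pin, card["account"]) == card["offset"]
```

Anyone who knows the data format and the one-way function can write a card whose stored offset matches a PIN of their own choosing.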
From:  25-APR-1988 14:04
>1.  Let P(day) represent a secret permutation of the alphabet, unique every
>    day.  e.g. P(today) = zfhkgjavitclbumxpwdonyrseq

If you assume that what the user types is vulnerable to an eavesdropping
attack, you must assume that anything the computer displays is also
vulnerable.  So if the computer displays P(day), the attacker knows
it. OOPS!

>2.  Let C(T,N) be a function which generates a string of N unique letters.
>    T is the time of day (number of seconds since midnight).
>    e.g. C(12:00:00,5) = jrmxo

The use of any other monotonically non-decreasing function would work as well
as the time.

>3.  Let M(P(day),C(T,N)) be a mapping function such that every letter in
>    C(T,N) is replaced by the letter following it in P(day). This mapping
>    is simple and can be applied by visual inspection.
>    e.g. M(zfhkgjavitclbumxpwdonyrseq,jrmxo) = asxpn

If the attacker knows P(day) (see above) then the security of this method
depends on keeping M secret.  But the M you chose is easy to determine
from just a few examples.

>Does the C(T,N) function need to be secret or is it alright to allow the
>attacker to anticipate the challenge?  I suggest using T as a parameter
>to eliminate the problem of the same challenge being issued twice on the
>same day (thus with the same P(day)).

It doesn't matter what C you use if P(day) and M are known.

The only successful-looking system of this type is based on the user
having a small calculator-like device that uses DES or some other
encryption algorithm to generate the proper response to a challenge,
based on a key stored in the device and back at the computer.
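That calculator-style device can be sketched in a few lines of Python. HMAC-SHA256 stands in here for the DES encryption mentioned above, and the function names are invented for illustration; the point is only that the challenge and response can cross the wire while the shared key never does.

```python
import hashlib
import hmac

def token_response(key: bytes, challenge_text: str) -> str:
    """What the hand-held device computes: a keyed one-way function of
    the host's challenge under the shared secret.  (HMAC-SHA256 stands
    in for the DES encryption the post mentions.)"""
    return hmac.new(key, challenge_text.encode(), hashlib.sha256).hexdigest()[:8]

def host_check(key: bytes, challenge_text: str, response: str) -> bool:
    """The host holds the same key, recomputes, and compares.  An
    eavesdropper sees challenge and response but never the key, and a
    fresh challenge makes every recorded response worthless."""
    return hmac.compare_digest(token_response(key, challenge_text), response)
```

Because each challenge is new, 100% wiretap collection yields only (challenge, response) pairs, none of which answers the next challenge.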

Mark Biggar
I have recently been assigned the task to make sure that the Departmental
computers (macintoshes, as well as some IBM's..) are virus free.  This should
prove to be quite a task seeing as how the faculty members love to get a
hold of all the PD and shareware stuff that they can get their hands on.

Basically what I have decided to do is to write my own application and/or
protective CDEV (on the order of Vaccine) to deal with any virus problems
which we may have.  I would greatly appreciate it if anyone who has been
bitten by one of these beasties or anyone who has trapped one without being
bitten could send me the source/object code to it/them.  I would be willing
to correspond in any way necessary (US Snail Mail, Bitnet, direct modem
transfer, disks, tapes, etc..).  As for making use of some of the applications
which are already out there, well, call me overly paranoid, but I really don't
trust anything anymore that I don't have a copy of the source to.

After I write the above, I will post it to the net(s) along with a copy of
the source....

thanx in advance...
Bye for now but not for long
David S. "Greeny" Greenberg
From:      John Pershing <>  26-APR-1988 10:15
If your friendly, local department store takes 5 minutes to do a credit
authorization, then there is something severely wrong with their system.

Credit authorizations for the major credit cards are almost always
processed in a matter of seconds:  the systems that they use are FAST!
If a request is not answered in some specified time limit it is an
implicit authorization, and the issuing company will eat the loss if this
turns out to be a mistake.

      John A. Pershing Jr.
      IBM Research, Yorktown Heights
I recently got the following from our people in our micro-computer section.
I'm no expert on Kermit, but it smells like security, so I thought somebody
on the SECURITY list might be able to help --

                   CAN YOU HELP US?

Script files can be used with KERMIT on microcomputers to
do many things much more efficiently.  Micro Resource Center
staff have been developing script files to log in automatically
to the VAX and CYBER mainframes.

We are quite stumped by one problem and wondered if you
had any suggestions?

Script files can be set up in one of two ways:  (1) the
password to complete your log-in is contained in the
script file, or (2) you manually enter your password
each time you log in.

Our concern is protecting the security of access to your
mainframe accounts.

When the password is contained in the script file itself,
it is not visible on the screen during the execution of
the log in procedure.  Security is preserved even as people watch
you log in (they can't see the password, and they can't even
watch your fingers, because you aren't typing anything).
However, for those of us with microcomputers that cannot be
locked or are located in semi-public areas, it would take
a very few minutes for a "hacker" to find the script file
and identify a password to any mainframe computer account,
if the script file was left on the hard disk.

Typing the password in manually does nothing to solve this problem.
In Kermit, apparently when the script file is halted to permit
typing from the console, there is no way to prevent
the password from being visible on the screen.  In addition,
we have not been successful in clearing the entire screen
rapidly enough or changing the colors on the screen at that point
to prevent users from viewing the password as they watch you
log in.

This is a section of the script file we have been using to log
in to the VAX with the password included in the script file:
   input 5 Local>
   output c\13
   input 20 Username:
   output MYUSERNAME\13\10
   input 5 Password:
   set input echo off
   output MYPASSWORD\13
   set input echo on
   pause 1

This is a section of the script file we have been using to log
in to the VAX and enter the password manually:
   input 5 Local>
   output c\13
   input 20 Username:
   output MYUSERNAME\13\10
   input 5 Password:
   output @con;                         type in password here
   output \13
   pause 1
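One approach worth testing, if the version of Kermit in use supports the ASKQ command (ASKQ prompts for a variable without echoing the keystrokes; whether your Kermit has it is an assumption here, and the variable name \%p is invented for illustration), is to keep the password out of the file and off the screen at the same time:

   askq \%p Password:
   input 5 Local>
   output c\13
   input 20 Username:
   output MYUSERNAME\13\10
   input 5 Password:
   output \%p\13
   pause 1

With this arrangement the password never resides on the hard disk and never appears on the screen, though it does sit in Kermit's variable memory for the duration of the session.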

I would really appreciate any ideas you might have on how we
can successfully protect access to our passwords to either the
VAX or CYBER mainframes under these conditions, without totally
abandoning the concept of automating our log in procedure
through use of Kermit Script Files.

If it isn't possible, I guess that's useful to know, too!

From:      tencati@VLSI.JPL.NASA.GOV  28-APR-1988 12:22
There's NO glaring hole in VMS security currently (to the best of anyone's
knowledge).  The current security patch that DEC is distributing is their way
of trying to be responsive to all us customers who jumped all over them when
the CHAOS feces hit the fan.  

DEC found flaws in their VMS Workstation Software (VWS) AFTER they had released
V4.7.  That is why the cover letter says the patch has to be re-applied each
time you upgrade to a new version which is lower than 4.7.  The VWS apparently
has been around since 4.3, which is why the release covers all versions of
VMS from 4.3 to 4.7 inclusive.

I'd like to commend DEC for the effort they are putting forth.  Put yourself
in their place.  Here you are, a large company, and your product is found to
have flaws that could cause certain systems in certain instances to compromise
the security they may be relying on.  What's the best way to "strongly suggest"
that your customers implement the patch you have devised?

I think they did a good job.  We all wish VMS was bug-free, but when you're
developing a product in a competitive environment, the faster you can get it
out, the more money you can make.  And unfortunately, we live in a money-based

The patch to SYS.EXE appears to have something to do with the page-faulting
algorithm or working-set adjustment according to what I remember seeing in 
the fiche when I checked to see what it was patching.  This also appears to
be a retro-fit for all 4.7 and under versions.

Before you criticize DEC, first put yourself in their shoes and see if you
can come up with a better idea, keeping in mind that YOU are one of the 
customers you would have to make happy.

Ron Tencati
Jet Propulsion Laboratory
Pasadena, CA.

[These opinions are mine.  Not JPL's, not NASA's, all mine!]
From:      Stan Horwitz <V4039%TEMPLEVM.BITNET@CUNYVM.CUNY.EDU>  29-APR-1988 10:16
  It seems to me that ATM technology isn't so much at risk from hard-core
hackers doing damage as from sheer stupidity on the part of ATM card holders.
I have several cards and use them regularly.  The word INSTANTANEOUS is wrong.
It takes me just as long to get a credit card authorization as it does to
make a withdrawal with my ATM card, give or take a few seconds here and there.

  As far as ATM's go, I am not sure but I think that what happens is you
enter an entire transaction and the machine reads some stuff off the card,
then the whole batch gets sent via telecommunications line somewhere where
the transaction is processed and verified, then if cash is wanted, a signal
is sent to the actual ATM machine and you get your bucks and a receipt.  Just
about the only transaction I ever make at ATMs is withdrawals.  If I happen
to type in my PIN incorrectly, it does not immediately warn me; I have to
enter the entire transaction, then wait for the central ATM host to bounce
a warning back to me.  This is hardly instantaneous.
From:      Bob Dixon <TS0400%OHSTVMA.BITNET@CUNYVM.CUNY.EDU>  29-APR-1988 14:59
We are looking at encryption software for IBM MVS systems. We are aware of
Kryptonite, DEF, Psypher and IBM's product.

1. Are there any others available?
2. Does anyone out there use any of these, and if so would you be willing to
   share your experiences with us, to help us make a choice?
3. Many encryption packages use the DES algorithm. Does this imply any kind
   of compatibility between packages? I.e., if a file is encrypted by one
   package using the DES algorithm, can it be decrypted by another package
   which also uses the DES algorithm, assuming that you know the key? This is
   very important if one wants to exchange encrypted files over networks
   between dissimilar computers.
                                                      Bob Dixon
                                                      Ohio State University
From:      NET%"" 29-APR-1988 20:00
Date: Fri, 29 Apr 88 16:18 MST
From: Watching the Detectives---now we *know*! <>
Subject: RE: Security in X.400 MHS

>Get hold of a copy of the new 1988 draft X.400 and X.500 (directory)
>standards.  In the US we order these from ANSI (New York) or a company
>called Omnicom (Vienna, VA) - not sure what the Italian source is.

Actually, you can't get copies of the correct 1988 versions of X.400
and X.500 series from ANSI; they have not been published by the CCITT. The
latest versions, approved March 21-31 in Geneva, almost certainly haven't
made it back to ANSI for distribution.  Hal Folts of Omnicom has copies
of the draft standards, and if you call and specifically ask for those
versions, you may get what it is you want (but beware that the hand-written
comments do have the force of law...).  Also, remember that nothing,
particularly in the area of security, is final until the November Plenary
in Melbourne.  

Some errors in your titles:

F.400 (also numbered X.400) is ISO 10021-1 (DIS ballot expected shortly)
X.402 is ISO 10021-2 (DIS Ballot expected shortly)
X.411 is ISO 10021-4 (DIS Ballot expected shortly)
(see also ISO 10021-3, 5, 6, and 7 for additional X.400 cross-lists)

Joel Snyder
DECUS representative in US delegation, CCITT Study Group 7