The 'Security Digest' Archives (TM)

Archive: About | Browse | Search | Contributions | Feedback
Site: Help | Index | Search | Contact | Notices | Changes

ARCHIVE: Rutgers 'Security List' Archives (1989)
DOCUMENT: Rutgers 'Security List' for February 1989 (66 messages, 41109 bytes)
NOTICE: This archive recognises the rights of all third-party works.


From:      [email protected] (Chris Crook)  6-Feb-1989 11:23:16
To:        [email protected]
This is not meant to be an advertisement on my behalf, so I will simply quote
an article from an insurance magazine:

"You may order a booklet about alarm systems from the National Burglar and
Fire Alarm Association, 1120 19th Street N.W., Suite LL20, Washington, 
D.C.  20036.  Enclose a check or money order for $2.  If you install an alarm
system... you may be eligible for a discount (on your insurance)."

I sent in my $2 but have yet to receive my booklet (although the check has
cleared).  Hope I have been of some assistance!!!

C. Crook
|   The opinions expressed here are clearly not my own, nor anybody else's!  |
Date:      Tue, 17 Jan 89 06:57:10 CST
From:      [email protected] (Mahan)  6-Feb-1989 11:44:12
To:        [email protected]
Subj: EMP exposure for explosive detection

     I have done some study on the effects of EMP on various materials.
Firstly, the EMP will not set off typical rifle or pistol ammunition, as the
metal shell casing provides a good shield for the powder.

     I would assume that the shells that the Navy was having problems with were
electrically fused.  Any explosive that depended upon electrical means for
detonation would be likely to detonate when exposed to an EMP.  However, this
is due only to the induced voltages and currents in the detonator circuits.

     Any electrical devices, especially those using integrated circuits, are
most probably going to be rendered useless by the pulse.  The intense magnetic
field of the pulse will erase magnetic media unless it is extremely well
shielded by a ferromagnetic material (Aluminum will not do).

     There are several papers and books on the effects of EMP dating from the
1950's to the present date.  The best starting topic is probably nuclear
weapons effects.  Most of the information is unclassified and beats Stephen
King for horror. :-)

Stephen Mahan
Naval Coastal Systems Center
Panama City, FL  32408

[email protected]
From:      [email protected] (Jim Duncan)  6-Feb-1989 12:49:53
To:        [email protected]
When I was in the alarm business a few years ago we bought more than half of
our equipment from North Supply, a subsidiary of United Telecom.  They
originally were suppliers of telephone equipment, everything from central
office switches and phones to telephone poles (and the strap-on steel gaffs
for climbing them).  The move into security equipment was quite natural
because the product lines and the people who buy them are so similar.  They
carry a wide range of alarm equipment and hardware, more than what's listed
in their catalog.

	North Supply
	600 Industrial Parkway
	Industrial Airport, Kansas 66031
	Phone: (913)791-7000
	Telex: 4312079

When I was buying from them, they had a WATS line for me to call in my order
and for talking to technical support; I don't know if they still do that.
They might pay your shipping; they did for us.  It was extremely difficult
for other suppliers to beat North Supply's price on most items, especially
hardware, since they were supplying so much of it to phone companies.

Disclaimer: I don't have anything to do with North Supply; I am a satisfied
past customer.  The information in this posting is dated.  The address and
phone number listed above is current.

When you call to ask for their catalog, be sure to tell them you need the
catalogs for both security equipment *and* phone equipment.

  Jim Duncan, Computer Science Dept, Old Dominion Univ, Norfolk VA 23529-0162
       (804)683-3915     INET: [email protected]    UUCP: ..!uunet!xanth!jim
From:      [email protected]  (Commander Spock)  6-Feb-1989 13:27:52
To:        [email protected]
I am currently an undergraduate student at California Polytechnic University,
Pomona, CA, majoring in Computer Information Systems.  I am a senior and have
a couple more "core" classes to go before graduating.  Right now, I'm in an
interesting class called "Information Resource Management".  In our class, we
are supposed to give a 40-minute presentation on a topic the professor listed
and we have selected.  Mine was on "computer security".  Now I have several
questions.  For one thing, what sort of "satisfaction" (if any) does a person
receive when he (or she) hears that their little program is wiping out half
the country's defense network or ruining entire departments within large
Fortune 500 companies (not to mention the smaller ones)?  Most of the
reading that has been published so far on those caught says that they did not
write the programs on purpose.  If so, why the **** did they write it in the
first place?  Second, what right do they have in attempting to distribute
the program(s) and ruin other people's property?  I fail to see why lawyers,
judges, and bureaucrats alike don't REALLY sit down and conjure up some
good, punishing laws.  If the virus (or "viri", plural) destroys systems
and the author of the program is found, should privacy laws or trespassing
laws punish those who created the material?  I have to admit: part of the
problem with the spread of viri is that people often destroy the systems
themselves with little or no aid from the viri; that is, in a state of
panic, the people are the virus, not the program.  To me, it sounds like funny
logic, but look at the special viri that have affected the Macintosh
community (P.S. I'm a Mac user).  For instance, the "nVIR" virus has few
if any special consequences from its disruption(s).  Yet, when people run
programs like Interferon or Ferret (for SCORES) or a message pops up (like
with Vaccine), people go into a frenzy...a state of shock almost.  By trying
to fix the problem, people make matters worse.  In addition, the importance
of backing up your system should be (STRONGLY) emphasized.  I myself have
a rather large hard drive (70 Mb).  Several hours and disk-paks later, it's
all done.  People complain about the time it takes to make a lousy 6 to 8
hour backup, *BUT* when a virus (or viri...let's suppose a double-whammy)
attacks the system, it takes much longer to recoup the material that was lost.
What's the logic here?

Logic and psychology play important roles here in addition to fixing the
actual problem.  Now as a special request, can anyone provide me with any
background behind viri?  Two, are there any good books on how viri work or
how people react psychologically to these problems?  Three, can anyone name
some good laws that have been or are currently being written into effect
to remedy such situations as what has happened this past year or two?  I
would appreciate *ANY* comments, suggestions, information, or sent material
to either of the two addresses listed below.  Thanks for listening, and
thanks in advance for any information.

Pax vobiscum.

Spock          INTERNET: [email protected]
                         [email protected]
                 BITNET: [email protected]
From:      Lynn R Grant <[email protected]>  7-Feb-1989 18:58:54
To:        [email protected]
There is government certification of software, of a sort:  the National
Computer Security Center ratings for trusted systems.  And just as
everyone has said about certification, it is slow, labor intensive, and
imposes a lot of work upon both the vendor and the government.

   Lynn Grant
From:      [email protected] (Arthur S. Kamlet)  7-Feb-1989 19:14:19
To:        [email protected]
If you are charged with burglary, then the prosecution must prove
you had intent to commit a crime.  Possession of lockpicks is
evidence which a jury could believe to mean you planned to enter
rooms in the building to remove items, even if you never did.

A defendant caught in this position would be expected to say he
just wanted to trespass, and never intended to remove any property.
So, a defendant who says what he would be expected to say may not be
believed by a jury.

It's a tough situation.
Art Kamlet  [email protected]  AT&T Bell Laboratories, Columbus
From:      [email protected] (Adam J. Richter)  7-Feb-1989 19:34:11
To:        [email protected]
>a file called ".secret" in your home directory, which is printed instead
>of the standard "Password:" prompt.  [...] With .secret files,
>fake login programs are easy to detect...

	You must also modify /bin/login so that it doesn't do this on
a pty.  Otherwise, the trojan horse can spawn a real /bin/login on a
pty, and duplicate what that process prints from the same input.  Even
if you make /bin/login only executable by root, and pty's only
accessible by setuid programs, it can still find out what login will
print by using whatever mechanism normal users use for logging in
across the network.

	Other than that, I think you've got a really bright idea.

			--Adam J. Richter
Adam J. Richter			[email protected]
2600 Ridge Road			....!ucbvax!widow!adamj
Berkeley, CA 94709		Home: (415)549-6356
From:      [email protected] (Jim Kirkpatrick)  8-Feb-1989  2:49:56
To:        [email protected]
Cory Kempf asks --

   Are there any systems out there that implement some way of verifying
   that the program that you (the prospective user) are talking to is
   really the login program?

I have forgotten most of the details, but one idea CDC has tried is as
follows -- in the Device Interface (DI; "terminal server" in generic talk)
one may define a trusted character sequence.  It is usually something weird
like ^a^b, something a user would not normally enter.  When the DI receives
this sequence, it immediately terminates any connections/sessions in progress
and starts a new one connected to the login process.  Thus, if you know your
terminal is connected to such a DI and you enter that sequence, you are
now genuinely logging in.  Of course you have to trust the ethernet cable too
and so on.  If you aren't sure your DI is set up that way and enter the trusted
sequence, of course a good login simulator could simulate the proper response.
You MUST know the DI was set up and what its trusted sequence is.

One problem is if you are on a PC, transferring files/data that use all 128
ASCII values, and "stumble" across the trusted sequence.  BOOM, you're blown
away and asked to login.  That's also why the sequence is usually control
characters.  Terminals like Tek graphics terminals only use printable
characters for the most part; I don't know about ALL file transfer protocols,
but some stay away from control codes as well.
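The DI's trusted-sequence behavior can be sketched as a small scanner (the ^A^B bytes and all names here are assumptions for illustration, not CDC's actual implementation):

```python
# Sketch of a terminal server watching for a trusted ("secure attention")
# sequence.  On seeing it, the DI drops whatever session is in progress
# and connects the terminal to the genuine login process.

TRUSTED_SEQ = b"\x01\x02"   # ^A^B -- an assumed example

class DeviceInterface:
    def __init__(self):
        self.session = "user-shell"   # whatever the terminal is connected to
        self.pending = b""            # input tail that may begin the sequence

    def receive(self, data: bytes):
        """Scan incoming bytes; on the trusted sequence, reset to login."""
        buf = self.pending + data
        if TRUSTED_SEQ in buf:
            self.session = "login"    # terminate old session, start fresh
            self.pending = b""
        else:
            # keep only a tail that could still be a prefix of the sequence
            self.pending = buf[-(len(TRUSTED_SEQ) - 1):]
```

This also shows the file-transfer hazard mentioned above: any byte stream that happens to contain the two control bytes will trigger the reset.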
From:      "W. K. (Bill) Gorman" <[email protected]>  8-Feb-1989  3:09:56
To:        Security Digest <[email protected]>
>Even if you catch the criminal red handed, in the act of committing a
>crime, you still have to prove they _intended_ to commit the crime _when
>they entered the building_ to make burglary stick.

I have seen *exactly* this defense used successfully in court numerous times.

>(Granted, possession of burglar's tools would certainly be an
>indicator of intent

Not necessarily. Consider the *incidental* thief who is also regularly and
legitimately employed as a carpenter/plumber/construction worker/whatever
and just happens to carry all these around in his vehicle, since they
are necessary to his/her regular employment. Good luck with that one!

|W. K. "Bill" Gorman                 "Do             Foust Hall # 5           |
|PROFS System Administrator        SOMETHING,        Computer Services        |
|Central Michigan University      even if it's       Mt. Pleasant, MI 48858   |
|[email protected]                wrong!"         (517) 774-3183           |
|Disclaimer: These opinions are guaranteed against defects in materials and   |
|workmanship for a period not to exceed transmission time.                    |
From:      <[email protected]>  8-Feb-1989  4:06:45
To:        [email protected]
The following excerpt is forwarded from [email protected]:

I recently downloaded a CDEV file named "Gatekeeper".  Current version
is 1.0.  I'm not sure whether or not this is a BETA copy.  But
in light of the situation, I thought it was worth a try testing the
program.  This program allows users to actually see what's going on with
their resources as far as possible viri programs are concerned.  It
lists the resource name, ID number, application that they
attacked (or in this case, attempted to attack), application which
they attacked from, and the source disk; not to mention the date
and time.  The program, by default, keeps a record log of everything
from startup to shutdown.  It prevents possible viri programs from
attacking either the Finder or System files.  I'm not sure whether
the program prevents attacks on the Desktop file, but since both the
author and myself have tested the SCORES virus, I feel that the
program DOES include the Desktop as one that is inoculated.  Once
Gatekeeper is taken off the disk, the viri programs DO attack the
applications and/or three files.  When Gatekeeper is in the System
Folder, nothing (ABSOLUTELY NOTHING) happens.

As a personal testing of my own, I've tested "nVIR" as well as
"SCORES" on Gatekeeper.  In addition, I've tested "mutated" versions
of "nVIR" at Gatekeeper.  Gatekeeper STOPS ALL of those!  I'm
not sure whether or not it stops either "Hpat" or "INIT29"; however,
someone has a copy of "Hpat" on his hard disk, and sometime this
week, I plan on getting a copy of it.  Hopefully this weekend, I'll
have it isolated and disassembled.  I've used MacNosy 2.35 (I have
a newer version, but it doesn't seem to work real well with my
Mac Plus) for disassembling into source code.  However, there are
some problems here with Gatekeeper.

When exiting, Gatekeeper either locks up (with an ID=02 msg) or
simply clears and redisplays (rapidly) a blank dialog (with NO msg)
repeatedly.  I cannot seem to define a parameter file for customizing
certain applications that DO require certain resource checks (like
MS Excel or ResEdit).  That provides the same error message or
dialog.  The ID=02 message shows up on my Mac Plus, and the other
message shows up on either the Mac SE or II.  The IIx has NOT been
tested yet.  Other than that, I've been quite impressed with the
product so far.  And what's best is that it's FREE!

Spock          INTERNET: [email protected]
                         [email protected]
                 BITNET: [email protected]

<end of excerpt>

-David Richardson, The University of Texas at Arlington
Bitnet: [email protected]   Internet/Domain:  [email protected]
UUCP: ...!{ames, texbell!}!!b645zax
USnailMail: P O Box 192053, Arlington, TX  76019-2053
PhoNet: 817-273-3656 (FREE from Dallas/Ft. Worth, school months only)
From:      Russell Brand <[email protected]>  8-Feb-1989 10:42:27
To:        [email protected]
Cc:        [email protected]
RSA is a patented algorithm.  You can contact their corporate address
to get details about conditions for use.

	Jim Bidzos
	10 Twin Dolphin Drive
	Redwood City, CA  94065

As I understand it, unless you are either MIT or part of the
government you will have to license it.

From:      Jeff Makey <[email protected]>  8-Feb-1989 11:02:27
To:        Security Interest Group <[email protected]>
Cc:        Mark Crispin <[email protected]>
Mark Crispin recommends an "out-of-band" storage place for passwords,
which would only be accessible by an appropriately privileged program,
and suggests as a good example an unnamed operating system that sounds
suspiciously like DEC's RSTS/E for PDP-11 computers (which I broke
into as a college sophomore using a simple Trojan Horse attack).

Unfortunately, this offers no more protection than a hypothetical UNIX
password file protected as follows:

    drwxr-xr-x 13 root         1024 Jan  9 13:31 /
    drwxr-xr-x  2 root         3584 Jan 23 03:42 /etc
    -rw-------  1 root         2899 Jan 10 12:48 /etc/passwd

except that there is no possibility of *accidentally* setting the file
or directory permissions wrong.  Any addition of some sort of
"out-of-band" storage place on UNIX (say, /dev/passwd: a special
device that is readable only by root, regardless of the inode mode)
would be a kludge with limited benefit, and would no doubt be subject
to its own special class of attacks.
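The protections in the listing above reduce to a simple check (a minimal sketch; the path and the root-only policy are the assumptions):

```python
import os

def passwd_properly_protected(path="/etc/passwd"):
    """Return True if the file is owned by root and carries no
    group/other permission bits (the -rw------- case above)."""
    st = os.stat(path)
    owned_by_root = (st.st_uid == 0)
    no_group_other = (st.st_mode & 0o077) == 0
    return owned_by_root and no_group_other
```

As the posting argues, the hard part is not expressing this policy but guaranteeing nobody *accidentally* loosens it.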

This scheme will have virtually no benefit on a well-administered UNIX
system, whereas the cost is moderate and may be very high (if early
implementations are easy to subvert).

                        :: Jeff Makey
                           [email protected]
From:      EVERHART%[email protected]  8-Feb-1989 11:32:42
To:        [email protected]
The benefits of changing passwords periodically SOUND fine. However,
if I convince people to use passwords that are long (typically derived
from sentences) and well chosen, and then ask them to repeat it every
month or two, the response is to either write the passwords down
OR to choose MUCH less secure passwords. Consider that if one requires
changes quarterly, passwords like "Quarter11989" are easy to remember.
They are also not so very secure once this discussion is overheard.
Jan1989uary  would be another template variation. Given that people
have to remember their passwords, we should remember that if they
choose their passwords well, they should NOT have to change them
often. Try to force frequent changes and you'll get lousy password
choices OR you'll have them written down near the terminals. Which
you accept depends on how responsible your system's users are. A
responsible computer user will choose a hard password to guess and will
NOT need to change it often. Some guidance in choosing good passwords
is valuable. I suggest people use first letters of words in some sentence
they can remember, or combos of words and numbers not all of which
may be English but which they can remember easily. If your system
accepts random punctuation characters between words, they can be
useful too. VMS is a bit limited there, unfortunately.
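The first-letters suggestion is mechanical enough to sketch (a minimal illustration; the example sentence is mine, not a recommendation):

```python
def sentence_password(sentence: str) -> str:
    """Take the first character of each word of a memorable sentence.
    Digits and punctuation survive only when they lead a word."""
    return "".join(word[0] for word in sentence.split())
```

For example, "My dog has 2 fleas and 1 cold nose!" yields a nine-character password that is easy to reconstruct but hard to guess.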
  This is said in the hope that people will not blindly go expiring
people's passwords monthly and expecting that their system security
will be improved by the practice. The actual effect on security may
well be to lower it a lot.
Glenn Everhart
Everhart%[email protected]
From:      Mark Crispin <[email protected]>  8-Feb-1989 12:02:27
To:        Jeff Makey <[email protected]>
Cc:        Security Interest Group <[email protected]>
My comments were not referring to RSTS/E, which was essentially a cheap
imitation of the operating systems I was referring to (just as Unix was
essentially a cheap imitation of Multics...:-)).

While it is true that "security through obscurity" is of limited value (most
of these crackers are following cookbook procedures that they couldn't figure
out on their own), that isn't the benefit of having the passwords stored
outside of the filesystem.  The benefit exists in making trap doors more
difficult to plant.

On one operating system, where passwords were associated with directories, the
trap doors took the form of passwords secretly applied to system directories
which normally never have passwords (so only privileged users could write to
them).  It could be a long while before the system manager notices that the
root directory (or the directory on which the login procedures reside) has had
a password applied to it.  In the meantime, the cracker can get owner access
to this directory from any account at any time, just by knowing the password.

This isn't a problem on Unix, although you do need to audit for fake accounts
with powerful group access rights.  However, you CAN have file links to the
password file in some unobvious (and quite accessible) place known only to our
friend the cracker.

Both forms of trap door are applied by a villain who "borrows" a privileged
job/terminal while the owner isn't looking.  The villain never knew any
privileged passwords, and his time on the terminal was quite limited; he just
had enough time to put in the trap door and cover up his traces before the
owner got back from the bathroom.

I fixed this on the aforementioned OS by having a list of directories which
could not have their passwords set or modified without a special flag being
set in the kernel (the mechanism was kept obscure, of course; it caused great
amusement to the systems programmers who stumbled across it but to my
knowledge no villains ever found out about it).  Any attempt to set or modify
a password of one of the special directories would cause a hardcopy log on the
system console even if the magic flag was set.  Any attempt to set or modify
when the kernel flag was not set would cause a system error event
which made a hardcopy log and a log in the system error event file.  This
happened no matter what privileges the user had.

None of this was bulletproof, but it didn't matter.  Most villains don't have
access to your hardcopy console log, and this change was never announced.  I
just started catching attempts...I also found out who the careless privileged
users were.  One of them was me.

-- Mark --
From:      [email protected] (Greg Berg)  9-Feb-1989  4:57:35
To:        [email protected]
The AT&T toolchest has a program named 'watchit' that can be tailored
to execute various alarms on file access/modification.
From:      <[email protected]>  9-Feb-1989  4:57:58
To:        [email protected]
We are about to install a lab with a number of hard drives that will support
training and general student use.  When students have access, we would like
to make the programs and files used for training "locked" in some
way.  Any suggestions for this?  At the moment, there is no networking
involved with these machines.
From:      Jim Yang <[email protected]>  9-Feb-1989  5:12:03
To:        Any <[email protected]>
I need PC software which can grant several passwords for one machine's usage.
This can be freeware, shareware, or commercial software. If anyone on the net
knows of such programs, please let me know. Thank you in advance for
your help.


BITNET: [email protected]
InterNet: [email protected]
From:      Tom Dimock <[email protected]>  9-Feb-1989  7:17:35
To:        [email protected]
Russ - The method of authentication described is fairly well known, and
is in fact a simplified version of the authentication protocol used by
Kerberos.  The major change Kerberos makes is that part of the "string
to be encrypted" has a defined structure which communicates the
privilege being requested.  It can be used in a network where all
workstations are smart, or where the protocol is supported by a smart
card or other external device.  The Racal-Guardata RGL 500 Watchword
Generator, for example, provides a one-way version of this protocol
(it authenticates the user to the host; the user must assume that
he/she has reached the desired host and not a fake).
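The one-way version can be sketched as a nonce challenge that only a holder of the shared key can answer. An HMAC stands in here for whatever cipher Kerberos or the Watchword device actually uses, and every name is an assumption:

```python
import hmac, hashlib, os

SHARED_KEY = b"per-user secret held by both host and token"

def challenge() -> bytes:
    """Host picks a fresh random challenge for each login attempt."""
    return os.urandom(16)

def respond(key: bytes, chal: bytes) -> bytes:
    """Token proves key possession by keying a MAC over the challenge;
    the key itself never crosses the wire."""
    return hmac.new(key, chal, hashlib.sha256).digest()

def verify(key: bytes, chal: bytes, resp: bytes) -> bool:
    return hmac.compare_digest(respond(key, chal), resp)
```

Because the challenge is fresh each time, a recorded response is useless later; what this sketch does not do, as the posting notes, is authenticate the host back to the user.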
From:      [email protected] (John Merrill)  9-Feb-1989  7:32:46
To:        [email protected]
   Here's the rub... Thermal neutrons are easily stopped by materials with
   high cross-sections like cadmium.

   So the question arises "Is this TOO easy or am I missing something?"

You're missing something obvious.  If a thermal neutron screen were
used by itself, then your argument holds.  If, however, it is used in
conjunction with X-ray screening, then the presence of a large,
square, box of metal in the middle of a suitcase will surely raise
some eyebrows, won't it?

Of course, placing your bomb in the middle of a large vessel of water
would have much the same effect...but I think that would be just a
little hard to conceal.

By contrast, the nitrite sniffers are already known to be relatively
easily defeatable.  Consider the bombing of the Brighton hotel where
the Conservative Party held its 1984 party conference.  That hotel
was very thoroughly `sniffed' prior to the conference...and a bomb
still slipped through.
From:      Luke OConnor <[email protected]>  9-Feb-1989  7:37:37
To:        [email protected]
There have been several messages that present methods to authenticate 
the user and the system through a straightforward protocol that involves
a few exchanges. I do not challenge their correctness directly, but they
do not deserve the title of "zero knowledge". Zero knowledge proofs rely 
on randomness to provide a small chance of fraud. 

Let P be the prover and V the verifier. The prover maintains he has a secret
which only he knows, and with which he can identify himself. The verifier is
to be convinced that P does know this secret. A secret may be the factorisation
of a large integer, the discrete log of an integer mod q, or the solution to
a problem that is NP-hard. There are others. P and V conduct exchanges,
where at the conclusion of the exchanges V is convinced with overwhelming
probability that P knows the secret. On the other hand, P has an infinitely
small chance of fooling V. 

Say there are k exchanges. At each exchange, P answers the questions of V.
If P knows the secret then P can always answer the questions correctly. If P
does not know the secret then there is some chance, say E, that P can guess
the correct answer to V's enquiry. P can fool V with probability roughly
E^k, which, when k is chosen suitably, is next to impossible for P.
A zero knowledge proof for quadratic non-residuosity only requires
one exchange of k integers.
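The k-exchange structure can be illustrated with a toy quadratic-residue round in the style of Fiat-Shamir (tiny numbers, no security; all names are illustrative). A cheating prover must guess the verifier's coin, so E = 1/2 per round, and E^k over k rounds:

```python
import random

# Toy Fiat-Shamir identification round over quadratic residues mod n.
n = 77                      # n = p*q with toy primes 7 and 11
s = 10                      # prover's secret
v = (s * s) % n             # public value v = s^2 mod n

def prover_commit():
    r = random.randrange(1, n)
    return r, (r * r) % n   # keep r private, send x = r^2 mod n

def prover_respond(r, b):
    return (r * pow(s, b, n)) % n    # y = r * s^b mod n

def verifier_check(x, b, y):
    # Accept iff y^2 = x * v^b (mod n); only a holder of s can
    # answer both possible coins b for the same commitment x.
    return (y * y) % n == (x * pow(v, b, n)) % n

def honest_round():
    r, x = prover_commit()
    b = random.randrange(2)          # verifier's random coin
    return verifier_check(x, b, prover_respond(r, b))
```

An honest prover passes every round; a cheater who prepared for only one value of b fails whenever the other coin comes up, so twenty rounds leave a fooling chance of about one in a million.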

Passwords are easier to guess than the solutions to intractable 
problems. Passwords are easier to remember than solutions to intractable
problems. Zero knowledge proofs are tedious. Their implementation on a smart
card is a sensible alternative, allowing two processors to interact. 

Luke OConnor
From:      [email protected] (Norm Finn)  9-Feb-1989  7:57:36
To:        [email protected]
I implemented key generation, encoding/decoding, and primitive key
management based on the Rivest, Shamir, and Adleman's MIT/LCS/TM-82
paper and Knuth's Seminumerical Algorithms on Data General 32-bit
computers.  I wrote an arbitrary-precision integer math package to
support it, using O(n**2) multiply and divide algorithms implemented in
assembly language.

The biggest time-pig is key generation.  An essentially random process,
it took anywhere from 20 minutes to eight hours to generate a keyset
for a 200-(decimal)digit key.  40-digit keys came out in a few seconds
to a few minutes.  The time was used in going two levels deep in making
sure that phi-1 had very large factors.  (Or something like that --
it's been four years since I looked at it).

Encryption speeds were reasonable for 40-digit keys, but took about 10
seconds per 80 characters for 200-digit keys.  The encryption time goes
up as the cube of the key length, if I remember correctly.
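For readers who haven't seen the scheme, here is a textbook-RSA sketch with toy primes (not the author's Data General code; the careful prime selection described above is omitted entirely):

```python
# Textbook RSA with toy primes -- illustrates encode/decode only.
p, q = 61, 53
n = p * q                   # public modulus, 3233
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent: d*e = 1 (mod phi)

def encrypt(m: int) -> int:
    return pow(m, e, n)     # c = m^e mod n

def decrypt(c: int) -> int:
    return pow(c, d, n)     # m = c^d mod n
```

The modular exponentiation is the whole encryption step, which is why the running time grows so quickly with key length.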

I had to invent a message format, there being no standard.  I used a
variable-length record format (each record starts with a binary record
length) to make it easy to chop up and combine records to accommodate
varying key widths.  That's where most of my own design effort went.

Key management is a pain; who can remember 200-digit keys?   I settled
for hiding keys behind an XOR mask generated by operating a well-known
key upon a user password.
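The XOR-mask idea can be sketched as follows; deriving the keystream by iterated hashing is my own stand-in, since the "well-known key" construction above isn't specified:

```python
import hashlib

def mask_stream(password: str, length: int) -> bytes:
    """Derive a keystream from the password by iterated hashing --
    a stand-in for operating a well-known key upon a user password."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(password.encode()
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_mask(secret_key: bytes, password: str) -> bytes:
    """XOR is its own inverse: the same call hides and recovers the key."""
    stream = mask_stream(password, len(secret_key))
    return bytes(a ^ b for a, b in zip(secret_key, stream))
```

The user remembers only the password; the 200-digit key sits on disk behind the mask.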

The algorithm itself is easy to implement; the paper is very clear.
The most time-consuming part of the implementation was writing the
arithmetic library and the test package to verify that the library
worked in ALL the corners.

All-in-all, a fun project.
From:      GREENY <[email protected]>  10-Feb-1989 23:57:36
To:        <[email protected]>
Anyone who is wondering what Robert Morris, Jr. looks like should have a
looksee at Page 66 in Discover Magazine (January 1989 issue)...

Bye for now but not for long

BITNET: [email protected]
Internet: MISS026%[email protected]
From:      J.D. Abolins <[email protected]>  11-Feb-1989  0:06:14
To:        [email protected]
Today, I read an article in THE JEWISH PRESS (Brooklyn, NY) about
EL AL airlines' withdrawal of sponsorship from an air safety
conference scheduled for next month in Israel.

The reason for the withdrawal is reportedly EL AL's concern that
details of its excellent security measures could be obtained
and used by hijackers and terrorists. Although the conference
organizers promise that the sessions would be closed meetings,
the concern still stands. Many Israeli security experts are
dismayed at recent public exhibitions of technology available to
terrorists. In one such case, a security firm (also scheduled to
be a major participant at the conference next month) displayed a
booby trapped suitcase and radio tape recorder. Although this and
similar displays were intended to emphasize the need for developing
better counter-measures, they also can provide useful information
to terrorists.

Whether it is computer security or airline security, the dilemma
of just how much information to share persists.

J.D. Abolins                      NJ DEP
301 N. Harrison Str. #197    or   Div. Water Resources
Princeton, NJ  08540              CN-029
                                  Trenton, NJ 08625
From:      [email protected] (Brad Schoening)  11-Feb-1989  0:17:37
To:        [email protected]
There was a report on NPR in early January about the Navy's attempts
to run EMP tests.  It seems that sometime in the early 80's the navy
developed a plan to build a giant EMP generator barge, tow it out
into the middle of the Chesapeake Bay near Baltimore and see what the actual
effects would be upon an actual destroyer (or similar ship).  Two problems
developed with this: (1) the Chesapeake is the nation's most fertile fishing
estuary and the Navy had performed *no* tests to determine what the effect
of thousands of volts per sq inch would be upon the little fishies; (2) the
destroyer was to be fully manned - because of course EMP's effects upon 
the crew would also be important. 

As I recall, there was such great community opposition to the project that 
it was eventually canceled.

brad schoening
[email protected]
From:      "Michael J. Chinni, SMCAR_CCS_E" <[email protected]>  11-Feb-1989  0:26:11
To:        [email protected]
Hobbit writes

	Needless to say I don't cotton to this idea -- I love to explore
	buildings and such, but always have a hard time convincing other
	people it's harmless and creates no security risk.  Comments?

You bet. If you think that it's ok to break into a building just to explore,
then I suppose you wouldn't mind if you caught someone breaking into your
house/apartment just to explore.

If you would mind, then why is it ok for you but not for anyone else?

You may think that this is ok, but I see it as an invasion of the privacy of
the building owner (or do you believe that no one has the right to forbid you
to explore buildings as you see fit).  

I believe that what you describe IS a crime. A crime called 
'breaking and entering'.

			    Michael J. Chinni
	US Army Armament Research, Development, and Engineering Center
 User to skeleton sitting at cobweb    ()   Picatinny Arsenal, New Jersey  
   and dust covered terminal and desk  ()   ARPA: [email protected]
    "System been down long?"           ()   UUCP: ...!uunet!!mchinni

[Obligatory, I suppose, moderator add-on:  Most of the MIT buildings I was 
referring to are open all the time, and people wander through at all hours.
Of course this doesn't normally include the HVAC rooms and service shafts and
such that the hackers like to get into.  Buildings that are genuinely *closed*,
like the alumni pool, they do get upset about.  In general when one says
"exploring buildings", assume he's already *in* the building for a nominally
legitimate purpose.  In general.  Most of the time.  _H*]
From:      Anand Iyengar <[email protected]>  14-Feb-1989 10:06:16
To:        [email protected]
	Sears once sold a "programmable" lock.  Insert one key (called the
reset key (?) or some such), and twist.  Remove it, and insert a second key
that you want to use to unlock the door from now on, and the lock's "rekeyed".
I don't think that the "reset key" could be re-programmed, which makes the
usefulness of the whole system somewhat questionable to me...  Anyone have one
and want to comment?

From:      [email protected] (S Wosciechowski)  14-Feb-1989 10:13:51
To:        [email protected]
> After disassembly, reassembly, and lots of referring to the
> medeco diagram, I knew enough to pick the little bugger open.

Congratulations.  In order to effectively open a Medeco cylinder in under
an hour, one must feel the bind on the pins.  As you may well be aware, in
picking a regular cylinder the bottom pins will not have any pressure on them
when the top pins are trapped in the shell. Using this knowledge one can
determine if the side bar is binding against the bottom pin in the false
notch or other part of the pin and not in its appropriate notch.
From:      [email protected]  14-Feb-1989 10:27:35
To:        [email protected]
I believe that entering another's PC or MAC to place a 'benign' virus
there would be akin to breaking and entering, and I also believe that
perpetrators should get the same kind of penalty.  How would you like 
it if someone got on your directory and renamed all your files?  
Wouldn't that be something 'benign'?  Nothing was destroyed, was it?

I am attending (this week) a computer virus symposium, and the first
day a Congressman from California spoke about a proposed law, HR 55, 
which would specifically make it a crime to prance about in others' 
computer areas, through 'break-in', through a virus, through anything.
Also this law would allow civil cases to recover damages from the
creator of such an item...
From:      [email protected] (Doug Gwyn )  14-Feb-1989 10:39:41
To:        [email protected]
> This rather simple idea appears to be out of reach of most locksmiths,
> though, who seem to unquestioningly believe the manufacturer's party line...

Now, be nice.  Most genuine professional locksmiths are well aware of
how to attack all kinds of locks.  The first step, when feasible, is to
gain a thorough understanding of the principles of operation of the lock.
Generally from there, along with some experience of picking techniques in
general, it is fairly obvious how to attack the lock.

For some reason "Cipher" (Simplex) locks are considered secure where I
work.  It's fun to manipulate them open, and it doesn't generally take
more than a couple of minutes once you know how.

[Moderator add-on: My impression was always that Cipher locks [electronic,
five bidirectional rocker switches in a big klunky box] were different from
Simplex locks [all mechanical, five buttons in a column or a circle].  Is
there some name crossover or am I confused?  _H*]
From:      "Michael J. Chinni, SMCAR_CCS_E" <[email protected]>  14-Feb-1989 11:07:36
To:        [email protected]
The following is a reprint (with permission) of an article that appears on the
"The Open Channel" page of the Jan. 1989 issue of "Computer" magazine. This
magazine is published monthly by the IEEE Computer Society.  The article's
reprint footnote is as follows:
	Reprinted from the Los Angeles Times, OpEd page, Sunday, December 4,
	1988.  At that time Parrish was serving as president of the IEEE
	Computer Society.

The author is Edward A. Parrish Jr., Past President, IEEE Computer Society.
The article is titled "Breaking into Computers is a crime, pure and simple".
The article is as follows:

	During the last few years, much has been written to publicize the feats
of computer hackers. There was, for example, the popular movie War Games, about
a teen-ager who, using his home computer, was able to tap into a military
computer network and play games with the heart of the system. The games got out
of control when he chose to play "thermonuclear war." The teen-ager, who was
depicted with innocent motives, eventually played a crucial role in solving the
problem and averting a real nuclear exchange, in the process emerging as hero.
A real-life example in early November involved a so-called computer virus (a
self-replicating program spread over computer networks and other media as a
prank or act of vandalism), which nearly paralyzed 6,000 military and academic
computers.
	Unfortunately, perhaps because the effect of such "pranks" seems remote
to most people, it is tempting to view the hacker as something of a folk hero -
a lone individual who, armed with only his own ingenuity, is able to thwart the
system. Not enough attention is paid to the real damage that such people can
do. But consider the consequences of a similar "prank" perpetrated on our
air-traffic control system, or a regional banking system, or a hospital
information system. The incident in which an electronic intruder broke into an
unclassified Pentagon computer network, altering or destroying some files,
caused potentially serious damage.
	We do not really know the full effect of the November virus incident
that brought many computers on the Cornell-Stanford network to a halt, but
credible published estimates of the cost in man-hours and computer time have
been in the millions of dollars. The vast majority of professional computer
scientists and engineers who design, develop, and use these sophisticated
networks are dismayed by this total disregard of ethical practice and
forfeiture of professional integrity.
	Ironically, these hackers are perhaps driven by the same need to
explore, to test technical limits that motivates computer professionals; they
decompose problems, develop an understanding of them and then overcome them.
But apparently not all hackers recognize the difference between penetrating the
technical secrets of their own computer and penetrating a network of computers
that belong to others. And therein lies a key distinction between a computer
professional and someone who knows a lot about computers.
	Clearly a technical degree is no guarantee of ethical behavior. And
hackers are not the only ones who abuse the power inherent in their knowledge.
What, then, can we do?
	For one thing, we - the public at large - can raise our own
consciousness. Specifically, when someone tampers with someone else's data or
programs, however clever the method, we all need to recognize that such an act
is at best irresponsible and very likely criminal. That the offender feels no
remorse, or that the virus had unintended consequences, does not change the
essential lawlessness of the act, which is in effect breaking-and-entering. And
asserting that the act had a salutary outcome, since it led to stronger
safeguards, has no more validity than if the same argument were advanced in
defense of any crime. If after experiencing a burglary I purchase a burglar
alarm for my house, does that excuse the burglar? Of course not. Any such act
should be vigorously prosecuted.
	On another front, professional societies such as the IEEE Computer
Society can take such steps to expel, suspend, or censure as appropriate any
member found guilty of such conduct. Finally, accrediting agencies, such as the
Computing Sciences Accreditation Board and the Accreditation Board for
Engineering and Technology, should more vigorously pursue their standards,
which provide for appropriate coverage of ethical and professional conduct in
university computer science and computer engineering curriculums.
	We are well into the information age, a time when the computer is at
least as vital to our national health, safety and survival as any other single
resource. The public must insist on measures for ensuring computer security to
the same degree as other technologies that are critical to its health and
safety.

			    Michael J. Chinni
	US Army Armament Research, Development, and Engineering Center
 User to skeleton sitting at cobweb    ()   Picatinny Arsenal, New Jersey  
   and dust covered terminal and desk  ()   ARPA: [email protected]
    "System been down long?"           ()   UUCP: ...!uunet!!mchinni
From:      "Dennis G. Rears (FSAC)" <[email protected]>  17-Feb-1989 21:57:53
To:        [email protected]

    It doesn't necessarily show trust.  I have them imprinted on some of
my keys.  My post office box keys are an exception.  My address is my
PO Box and my phone number is my work number.  If a dishonest person
gets my key he can't do anything, as my name is not on the keys.

From:      Sorrel Jakins <[email protected]>  17-Feb-1989 22:21:38
To:        Security discussion <[email protected]>
I had occasion last month to be involved with a major 'shrink-wrap'
product that was infected with nVir. Fortunately, it was discovered
right away; the product was withdrawn and destroyed, and replacement copies
were shipped.

This was a major embarrassment for the manufacturer and had a definite
'bottom-line' effect. 'Shrink-wrap' is still the safest way to go, but
do not be lulled into a false sense of security....

Sorrel G Jakins            | Bitnet:   [email protected]
Brigham Young University   | Internet: [email protected]
ICBM: 40 40 W 111 50 N     | Bellnet:  (801) 378-7130
From:      [email protected] (Doug Gwyn )  18-Feb-1989 20:26:31
To:        [email protected]
-Yes, but that is because most owners give a simple three digit code for
-their lock, and there are only 125 simple three digit codes.

You miss the whole point.  These locks can be MANIPULATED open,
not GUESSED, in very short order.  This is not dependent on use
of one-digit members of the combination.

-Any mathematicians out there want to compute the number of possible

As usual, even though this can be done, it is not relevant to security
(although it can produce a lower bound for the level of security provided).
From:      [email protected] (Mark A. Heilpern )  18-Feb-1989 20:51:32
To:        [email protected]
>For some reason "Cipher" (Simplex) locks are considered secure where I
>work.  It's fun to manipulate them open, and it doesn't generally take
>more than a couple of minutes once you know how.

Yes, but that is because most owners give a simple three digit code for
their lock, and there are only 125 simple three digit codes. It is
MUCH harder to guess, for example, 1-(2,4)-5, where the 2 and 4 are
pressed simultaneously. Or imagine trying to guess (1,3,5)-(2,4)!

Any mathematicians out there want to compute the number of possible
combinations?

The rules:

Each number can only be used once, but may be simultaneously used with
 any or all other (unused) numbers.

The code can be zero numbers, five numbers, or anywhere in between.

Happy computing :)
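
[For what it's worth, the rules above are easy to enumerate by brute force.
A short Python sketch (mine, not part of the original message) counts every
ordered sequence of disjoint "chords" of buttons:]

```python
from itertools import combinations

def count_codes(buttons):
    """Count Simplex-style codes: an ordered sequence of "chords" (one or
    more buttons pressed together), where each button is used at most once.
    Stopping at any point -- including immediately -- is a valid code."""
    total = 1  # the code that stops right here (this covers the empty code)
    for size in range(1, len(buttons) + 1):
        for chord in combinations(buttons, size):
            remaining = [b for b in buttons if b not in chord]
            total += count_codes(remaining)
    return total

print(count_codes([1, 2, 3, 4, 5]))  # → 1082
```

[So the full code space is 1082 -- larger than the simple sequences most
owners pick, but still tiny enough to matter only against casual guessing.]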
From:      _David C. Kovar <[email protected]>  18-Feb-1989 20:54:45
To:        [email protected]
>Well, to start with, any reasonable system tells you when you get on
>when you were last on.  The login-simulator doesn't know, and thus
>can't tell you.

  It's very easy to tell when someone last logged into a UNIX or VMS
system. At worst, finger the user and grab the line that says
"Last logged in at:". There are much more elegant ways.

>Better systems (such as Multics) have no way to login-from-within
>a-process.  Thus the login-simulator can't do anything after it gets
>your password but force a logout, since otherwise you'd know immediately

  So you just give the user a "bad password" message and kick him off.
He'll figure that he's typed it wrong, it's been changed by a system
administrator, or something similar. Gives you plenty of time to do
nasty things.

  By the by, is it really necessary to take so many cheap shots at UNIX?

-David C. Kovar
	Technical Consultant			ARPA: [email protected]
	Office of Information Technology	BITNET: [email protected]
	Harvard University			MacNET: DKovar
						Ma Bell: 617-495-5947

"The difficult we did yesterday, the impossible we're doing now."
From:      Alex Nishri <[email protected]>  21-Feb-1989 11:11:10
To:        [email protected]
Three copies of a garden variety nVir were included on the "QLTech MEGA-ROM"
CD-ROM, Volume 1 October 1988, produced by Quantum Leap Technologies, Inc.
This CD-ROM is a collection of public domain and shareware Macintosh software,
available for about $35.  Quantum Leap Technologies sent a letter out once the
virus was discovered, and subsequently released a replacement disc, labelled
Volume 2 December 1988.  Unfortunately for us here at the University of Toronto
Computing Services, the virus had already spread by that point.  We know the
virus has spread into our University Community, but have no way of estimating
how many people were affected.  Within the Computing Services itself about
twenty machines were hit.
From:      [email protected] (Doug Gwyn )  21-Feb-1989 11:22:15
To:        [email protected]
>I was wondering if anyone on the net has had the opportunity
>to try and open a S&G lock without the proper combination??

Yes, but S&G make and have made many different locks.  Are you talking
about one of those chrome padlocks or a vault lock or what?  A good safe
man can open most models of vault locks via manipulation.  There are a
few models that are extremely difficult to open without drilling at least
an inspection hole.

>Does anyone know if the S&G people can open it and reset the
>lock to the factory settings then return same to me.

They might have some staff who are ABLE to do it, but I doubt they WOULD.
Lock manufacturers tend to leave that sort of work to locksmiths.

>I came across one in a storage box

Use it for a paperweight or something.
From:      Barry Margolin <[email protected]_MULTICS.HBI.HONEYWELL.COM>  21-Feb-1989 11:31:09
To:        [email protected]
Actually, Multics does have a form of "login-from-within-a-process".  If
the login-simulator user has access to open a pseudo-TTY, it could read
the user name and password, open a STY, send the login command and
password through, and then turn itself into a transparent dial_out

And if the user doesn't have access to a STY, but does have access to an
autocall terminal line, it could do the same thing by dialing out over a
phone line.  And the same can be done with a network connection.

I suspect that most installations allow ordinary users to access at
least one of these.

The only real protection against a password-stealing Trojan horse is a
trusted path.  B2 security only requires trusted path for login, so our
solution was that a trusted path may be obtained by hanging up and
redialing, since dialups always connect you to the standard Answering
Service process.
From:      Lazlo Nibble <[email protected]>  21-Feb-1989 11:48:26
To:        [email protected]
> So, some kind person comes along and starts to distribute a virus.
> This makes everyone SO SCARED of accepting a non shrink-wrapped diskette
> that the piracy problem just goes away ...

It's already happened, at least in the Apple pirate community.  Last summer,
CyberAIDS and Festering Hate, two Apple //-specific viruses, were released
into the pirate community.  They were real killers, and Festering Hate is
apparently still floating around in some quarters.  But even though the
pirate community was hit (and hit HARD -- several of the largest pirate BBSes
in the country were knocked down before anyone even knew what was happening)
things are still trundling happily along today.

There are no simple solutions to software piracy.  All the ones I've heard
that sounded to me like they might work involved measures so draconian that
only the most singleminded anti-pirate types would consider them feasible.
Nothing short of a complete reprogramming of society's views on WHO OWNS
INFORMATION is going to put an end to it, and frankly I don't see that
happening in my lifetime . . . 

laz ([email protected])
From:      Brint Cooper <[email protected]>  21-Feb-1989 17:51:09
To:        [email protected]
> I think both Sandia Corporation and EG&G were involved in the project, the
> results of which are probably classified.

	The specific results may be classified but only because they
predict the vulnerability of US equipment.  The phenomenon has been
likened to a "super" lightning strike:  the flow of very large currents
with the expected sorts of damage.

	The Navy want(s,ed) to build an EMP test facility on the
Patuxent River in Maryland.  The first time around, they fudged their
environmental impact statement.  When the environmentalists got ahold of
it and held the Navy's feet to the fire, the revised statement admitted
that there was some danger to people in aluminum boats or boats with
metal masts in the area and that possible damage to fish and other
aquatic life was "unknown."  So far, the Navy's not building the EMP
simulator there.

From:      cray%[email protected] (Robert Cray)  21-Feb-1989 18:11:09
To:        [email protected]
>It seems that the prevailing opinion is that password aging is a complete
>waste of time. I think it can be of use in certain circumstances.

In theory, I think it is a waste of time, but this assumes that users choose
good passwords, and don't share them with others.  I know this isn't the
case for some of the 600 users at this site, and I don't know what can
be done about it.  I've just about finished a "set pass" replacement that
doesn't accept words in the dictionary, etc., but what about the secretaries
and word processing people who share their password with everyone in their
group?  And what about people who choose acronyms, or just plain stupid
passwords?  We expire every 90 days, so at least it keeps the cycle fairly
short.  At a university you may be able to say "you chose a stupid password,
it's your fault you were screwed," but I don't think that washes in the
real world.  Even backups (which we do daily) aren't good enough - one of
the documents our WP people work on has an 8000 page chapter; it wouldn't
take much to modify it just enough to look embarrassing right before it went
to print...
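
[For readers wondering what a "set pass" replacement of this sort checks,
here is a minimal Python sketch.  The specific rules and word list are my
illustrative assumptions, not the poster's actual program:]

```python
def acceptable(password, dictionary):
    """Reject obviously weak password candidates (illustrative rules only)."""
    lowered = password.lower()
    if len(password) < 6:                             # too short
        return False
    if lowered in dictionary:                         # plain dictionary word
        return False
    if lowered.rstrip("0123456789") in dictionary:    # word + trailing digits
        return False
    if len(set(password)) < 4:                        # too few distinct chars
        return False
    return True

words = {"password", "secret", "wombat"}   # stand-in for a real word list
print(acceptable("Secret99", words))  # → False (dictionary word + digits)
print(acceptable("zq8#Lw2p", words))  # → True
```

[Checks like this raise the floor, but as the poster says, they do nothing
about users who share their passwords with a whole group.]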


From:      [email protected] (Robert Regn)  24-FEB-1989 23:11:22
To:        [email protected]
I would like to get the programs in the appendix of "Unix System Security"
by P. Wood and S. Kochan (Hayden Books), but the mail addresses given
there are no longer valid.

Can someone
	- send me these programs, or
	- give me their current mail address?


Robert Regn
[email protected]
[email protected]
From:      [email protected]  24-FEB-1989 23:23:24
To:        Security Digest <[email protected]>
>This thinking can be applied to numerous other types of locks including ...

Along these same lines, not long ago an individual received a certain
dubious notoriety for picking the "impossible to crack" (press term, not
mine) locks securing coin-operated public telephones. Is *anything* really
"impossible to crack"? I may be wrong, but this sounds to me like another
case of media hype surrounding a breach of relatively poor security.
Yes? No? Maybe?

[Moderator add-on: Yes, the "pay phone thief" was discussed at length around
a year ago.  Nobody ever really knew if it was for real.  I can dig up the
old msgs if anyone's *really* interested, but without real facts, further
arm-waving about it is discouraged...   _H*]
From:      Stephen Crawley <[email protected]>  24-FEB-1989 23:30:03
To:        [email protected]
> At least you know you've been had, and can get off
> and back on and change your (just-compromised) password.

That of course assumes that you are alert!  A typical user logs in with
his/her brains in neutral.

Anyhow, it is not enough to protect you against a more sophisticated attack.

A year or so back, users of the Cambridge University Computing Service were 
hit by a password grabber which infested the BBC Micros used as terminals.  
This program passed through all data in both directions, and kept a record 
in RAM of the characters that users typed in response to certain prompts 
(like "Password: ").  

> And the really good ones are set up so that each invocation of the
> login process gives you an authentication at the conclusion of
> your login (or maybe at your logout).  When you login again,
> after you give your userid and before you give your password,
> the login processor gives you its authentication counter-sign.

That doesn't work either.

If there is a possibility of someone EITHER tampering with your terminal OR
eavesdropping on your communications system, you must have an authentication
scheme that uses encrypted timestamps and challenges to get secure login.
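
[The shape of such a scheme, sketched in Python with a keyed MAC standing in
for the encryption.  All names here are illustrative assumptions, not any
particular product:]

```python
import hashlib, hmac, os, time

SECRET = os.urandom(16)  # key shared between host and the user's token

def respond(secret, challenge, timestamp):
    """Client side: prove knowledge of the secret by MACing challenge+time."""
    message = challenge + repr(timestamp).encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(secret, challenge, timestamp, response, max_skew=30.0):
    """Host side: reject stale timestamps (replays) and bad MACs."""
    if abs(time.time() - timestamp) > max_skew:
        return False
    expected = respond(secret, challenge, timestamp)
    return hmac.compare_digest(expected, response)

challenge = os.urandom(8)      # fresh nonce per login attempt
now = time.time()
assert verify(SECRET, challenge, now, respond(SECRET, challenge, now))
stale = now - 3600             # an hour-old capture, replayed later
assert not verify(SECRET, challenge, stale, respond(SECRET, challenge, stale))
```

[A password grabber in the terminal gains nothing reusable: the response is
bound to a fresh challenge and a current timestamp.]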

-- Steve
From:      Tony Ivanov <[email protected]>  24-FEB-1989 23:50:10
To:        [email protected]
[Moderator note: Please reply to him directly...]

Hello out there!  I have developed a UN*X lookalike encryption algorithm
that does not have the eight character password limit.  I am interested
in criticism/comments on weak/strong points.  Here it is:

/*
 * tcrypt - generate hashing encryption
 *	This function performs an encryption that produces hashed passwords that
 *	look like the ones produced by the UN*X algorithm.  The major difference
 *	is that it allows input passwords of unlimited length (as opposed to the
 *	UN*X algorithm which only uses the first eight characters).
 *		char *tcrypt (key, salt)
 *		char *key, *salt;
 */
#include <string.h>

# define tcrypt_char(a)	(_tcrypt_char[((int)a)&63])
char	_tcrypt_char[] = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789./";

char *
tcrypt (key, salt)
	char	*key;
	char	*salt;
{
	static char	buff[14];		/* Buffer to hold encrypted password. */
	char		*pb;			/* Pointer into buffer. */
	char		*pk;			/* Pointer into key (unencrypted password). */
	char		tmp;			/* Value from last encryption loop. */
	char		s;			/* Alternates between first and second character of salt. */
	int		size_key;		/* Length of the key. */
	int		count;			/* Loop variable. */

		/* Set up initial conditions. */
	strcpy (buff, "Initial_value");
	pk = key;
	tmp = 0;
	size_key = strlen(key);

		/* Repeatedly encrypt buffer. */
	for (count=0;  count < 100;  count++)		/* Re-encrypt passwd this many times. */
	{	s = salt[count&1];
		for (pb=buff;  pb < buff+14;  pb++)
		{	tmp = *pb = tcrypt_char ( *pb + *pk + s + tmp + ((*pk + s) >> (1+(count&1))) + (pk-key) );
			if (++pk >= key+size_key)	/* Step through the key, wrapping at its end. */
				pk = key;
		}
	}

		/* Set first two characters to the salt, and terminate string. */
	buff[0] = salt[0];
	buff[1] = salt[1];
	buff[13] = '\0';
	return (buff);
}
/* My opinions...             *  Tony Ivanov   MS-4B       *  ...ucbvax!   */
/* shared by my company?!...  *  Grass Valley Group, Inc.  *  tektronix!   */
/* you've got to be kidding!  *  P.O. Box 1114             *  gvgpsa!      */
/* "[email protected]"  *  Grass Valley, CA  95945   *  gvgpvd!tony  */
From:      AMSTerDamn System <[email protected]>  25-FEB-1989  0:01:04
To:        dlists/[email protected]
[ Via AMSTerDamn v2.1A ]
[ with AMS-auto3.2D/SAM2A/AMSv2A ]
From: [email protected]
Subject: Viruses and System Security (a story)

The following story was posted in news.sysadmin recently.

The more things change, the more they stay the same...

Back in the mid-1970s, several of the system support staff at Motorola
(I believe it was) discovered a relatively simple way to crack system
security on the Xerox CP-V timesharing system (or it may have been
CP-V's predecessor UTS).  Through a simple programming strategy, it was
possible for a user program to trick the system into running a portion
of the program in "master mode" (supervisor state), in which memory
protection does not apply.  The program could then poke a large value
into its "privilege level" byte (normally write-protected) and could
then proceed to bypass all levels of security within the file-management
system, patch the system monitor, and do numerous other interesting
things.  In short, the barn door was wide open.

Motorola quite properly reported this problem to XEROX via an official
"level 1 SIDR" (a bug report with a perceived urgency of "needs to be
fixed yesterday").  Because the text of each SIDR was entered into a
database that could be viewed by quite a number of people, Motorola
followed the approved procedure: they simply reported the problem as
"Security SIDR", and attached all of the necessary documentation,
ways-to-reproduce, etc. separately.

Xerox apparently sat on the problem... they either didn't acknowledge
the severity of the problem, or didn't assign the necessary
operating-system-staff resources to develop and distribute an official patch.

Time passed (months, as I recall).  The Motorola guys pestered their
Xerox field-support rep, to no avail.  Finally they decided to take
Direct Action, to demonstrate to Xerox management just how easily the
system could be cracked, and just how thoroughly the system security
systems could be subverted.

They dug around through the operating-system listings, and devised a
thoroughly devilish set of patches.  These patches were then
incorporated into a pair of programs called Robin Hood and Friar Tuck.
Robin Hood and Friar Tuck were designed to run as "ghost jobs" (daemons,
in Unix terminology);  they would use the existing loophole to subvert
system security, install the necessary patches, and then keep an eye on
one another's statuses in order to keep the system operator (in effect,
the superuser) from aborting them.

So... one day, the system operator on the main CP-V software-development 
system in El Segundo was surprised by a number of unusual phenomena.
These included the following (as I recall... it's been a while since I
heard the story):

-  Tape drives would rewind and dismount their tapes in the middle of a job.

-  Disk drives would seek back&forth so rapidly that they'd attempt to
   walk across the floor.

-  The card-punch output device would occasionally start up of itself
   and punch a "lace card" (every hole punched).  These would usually
   jam in the punch.

-  The console would print snide and insulting messages from Robin Hood
   to Friar Tuck, or vice versa.

-  The Xerox card reader had two output stackers;  it could be
   instructed to stack into A, stack into B, or stack into A unless a
   card was unreadable, in which case the bad card was placed into
   stacker B.  One of the patches installed by the ghosts added some
   code to the card-reader driver... after reading a card, it would flip
   over to the opposite stacker.  As a result, card decks would divide
   themselves in half when they were read, leaving the operator to
   recollate them manually.

I believe that there were some other effects produced, as well.

Naturally, the operator called in the operating-system developers.  They
found the bandit ghost jobs running, and X'ed them... and were once
again surprised.  When Robin Hood was X'ed, the following sequence of
events took place:

  !X id1

  id1:   Friar Tuck... I am under attack!  Pray save me!  (Robin Hood)
  id1: Off (aborted)

  id2: Fear not, friend Robin!  I shall rout the Sheriff of Nottingham's men!

  id3: Thank you, my good fellow! (Robin)

Each ghost-job would detect the fact that the other had been killed, and
would start a new copy of the recently-slain program within a few
milliseconds.  The only way to kill both ghosts was to kill them
simultaneously (very difficult) or to deliberately crash the system.

Finally, the system programmers did the latter... only to find that the
bandits appeared once again when the system rebooted!  It turned out
that these two programs had patched the boot-time image (the /vmunix
file, in Unix terms) and had added themselves to the list of programs
that were to be started at boot time...

The Robin Hood and Friar Tuck ghosts were finally eradicated when the
system staff rebooted the system from a clean boot-tape and reinstalled
the monitor.  Not long thereafter, Xerox released a patch for this problem.

I believe that Xerox filed a complaint with Motorola's management about
the merry-prankster actions of the two employees in question.  To the
best of my knowledge, no serious disciplinary action was taken against
either of these guys.

Several years later, both of the perpetrators were hired by Honeywell,
which had purchased the rights to CP-V after Xerox pulled out of the
mainframe business.  Both of them made serious and substantial
contributions to the Honeywell CP-6 operating system development effort.
Robin Hood (Dan Holle) did much of the development of the PL-6
system-programming language compiler; Friar Tuck (John Gabler) was one
of the chief communications-software gurus for several years.  They're
both alive and well, and living in LA (Dan) and Orange County (John).
Both are among the more brilliant people I've had the pleasure of
working with.

Disclaimers: it has been quite a while since I heard the details of how
this all went down, so some of the details above are almost certainly
wrong.  I shared an apartment with John Gabler for several years, and he
was my Best Man when I married back in '86... so I'm somewhat
predisposed to believe his version of the events that occurred.

Dave Platt 
  Coherent Thought Inc.  3350 West Bayshore #205  Palo Alto CA 94303
From:      [email protected]  25-FEB-1989  9:29:48
To:        [email protected]
I don't know about SCOMP Plus, but from conversations
with NCSC people I concluded that if it were submitted for
evaluation nowadays, SCOMP would not be approved
as an A1 system.

				Erez Levav @ Motorola, UDC

[email protected]
[expect addresses to change in a few months]
From:      *Hobbit* <[email protected]>  25-FEB-1989  9:49:48
To:        security
   Our campus key shop guru told me that we have an even more advanced
   Medeco than the standard, and that it is almost foolproof...

I believe that these are the biaxials. They are no big deal; the chisel points
are offset forward or back by .025".  It effectively gives each pin twice the
keying versatility, since the key cut can be the right depth and twist, but if
it's not under the chisel tip, you lose.  A master key for this system would
have two cuts right next to each other that would address either offset [and I
believe they would be at the same height, since it's difficult to cut two
different heights only .050" apart and have enough "meat" left to turn the
pin].  With regard to picking, it essentially makes no difference.  In fact, it
was a biaxial that I first started working with to develop the current

Lessee, (6 cuts * 3 rotations * 2 offsets) = 36 positions per pin, to the 6th
power gives you something like 2 gigapossibilities...
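
[A sanity check on that arithmetic (trivial Python, added for the archive):]

```python
cuts, rotations, offsets = 6, 3, 2        # per-pin choices described above
positions_per_pin = cuts * rotations * offsets
keyspace = positions_per_pin ** 6         # six pins
print(positions_per_pin, keyspace)        # → 36 2176782336
```

[About 2.2 * 10^9, so "2 gigapossibilities" is right on the nose.]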

From:      [email protected] (Jean_Jac. Quisquater)  25-FEB-1989 10:13:19
To:        [email protected]
We (Jean-Paul Delescaille and Jean-Jacques Quisquater) were able
to find 3 collisions in DES using a network of workstations 
during some weeks.

Definition of a collision: given a message M and a cryptographic
algorithm f with 2 parameters M and K (the key), a collision is a
pair (K1, K2) such that

  f (M, K1) = f (M, K2),

that is, for a fixed message M and using a cryptographic 
algorithm f, the key K1 and the key K2 give the SAME encrypted message.

Jean-Jacques devised a new probabilistic distributed asynchronous
algorithm for finding collisions without any sorting and with a 
small storage (a la Pollard). We used a fast implementation of 
DES in C (by Jean-Paul: about 2000 * (encryption + change of key)

We used the idle time of a network of 20 SUN-3's and 10 microVAXes 
(a la Lenstra and Manasse). Total: about 100 Mips during one month.

2  encryptions performed (about 20 potential collisions) only in

The message M is 0404040404040404 (hexadecimal form) for
the 3 collisions.
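
[To illustrate the "a la Pollard" idea at a scale one machine can handle,
here is a Python sketch.  A truncated SHA-256 stands in for DES with a fixed
message -- an assumption for demonstration only; the real search over DES's
56-bit keys took the month of workstation time described above.  Floyd's
cycle-finding keeps storage constant, with no sorting:]

```python
import hashlib

BITS = 24  # toy output width; DES keys are 56 bits, far beyond a quick demo

def f(m, k):
    """Stand-in for E_K(M) truncated to BITS bits (not DES; an assumption)."""
    digest = hashlib.sha256(m + k.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - BITS)

def find_collision(m):
    """Pollard-style search: iterate k -> f(m, k); Floyd's cycle detection
    finds K1 != K2 with f(m, K1) == f(m, K2) in O(1) memory, no sorting."""
    for seed in range(1, 64):              # retry if the seed starts on the cycle
        slow, fast = f(m, seed), f(m, f(m, seed))
        while slow != fast:                # phase 1: meet inside the cycle
            slow, fast = f(m, slow), f(m, f(m, fast))
        x, y = seed, fast                  # phase 2: walk up to the cycle entry
        while f(m, x) != f(m, y):
            x, y = f(m, x), f(m, y)
        if x != y:                         # distinct keys, same "ciphertext"
            return x, y
    return None

M = bytes.fromhex("0404040404040404")      # the fixed message from the post
k1, k2 = find_collision(M)
assert k1 != k2 and f(M, k1) == f(M, k2)
```

[Expected work is roughly the square root of the output space (the birthday
bound), which is why truncating to 24 bits makes the toy search instant.]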

Collision 1: found Fri Jan 13 23:15 GMT (birthday of Jean-Jacques!
Yes, it is another birthday attack (Hi! Don Coppersmith)).

   cipher = F02D67223CEAF91C
   K1     = 4A5AA8D0BA30585A
   K2     = suspense!

Collision 2: found Fri Jan 20 19:13 GMT

   cipher = E20332821871EB8F
   K1, K2 = suspense!

Collision 3: found Fri Feb  3 03:22 GMT


Conclusion: Friday is a good day for finding collisions :-)

Well, there is a problem because there is no proof we effectively
found such collisions. 

Question 1: Find a protocol for proving or for convincing you
that we know K2 for collision 1 (zero-knowledge protocols are useful
in this context).

Question 2: Find a protocol for proving or convincing that we know
K1 and K2 for collision 2 (idem).

Question 3: Find a protocol for proving or convincing that we know
3 different collisions (idem).

Useful information: the nice paper by Brassard, Chaum and Crepeau,
``Minimum disclosure proofs of knowledge'', 1987.

The complete information will be given at EUROCRYPT '89, Houthalen, 
Belgium, with the restriction that the submitted abstract is
accepted :-) The paper will be sent in April if you want it.

Thanks are due to Paul Branquart, Frans Heymans, Michel Lacroix, 
Vincent Marlair, Marc Vauclair, the members of PRLB for permission
and active help in the effective implementation of the distributed 
algorithm on their workstations.

Warning: There is no implication about the security of DES used
for encryption. Indeed these experiments only prove that DES is a
good random mapping (a necessary property for any cryptographic
algorithm). However the use of DES for protecting the integrity of files
is not very easy and needs very careful studies.

Jean-Jacques Quisquater,

(Program chairman of EUROCRYPT '89)
From:      [email protected] (Bill.Stewart.[ho95c])  25-FEB-1989 10:29:47
To:        [email protected]
I assume you mean the kind mounted on a door, as opposed to
a padlock or something?  We had to break one once - one of
the screws holding it to the door came loose and wedged
inside the mechanism, so it wouldn't open.  Took the
locksmith about 1.5 - 2 hours to drill it.  Shouldn't be
hard to reset, though - find a good locksmith.
#				Thanks;
# Bill Stewart, AT&T Bell Labs 2G218 Holmdel NJ 201-949-0705!wcs
#	News.  Don't talk to me about News.
From:      <[email protected]>  28-FEB-1989 23:22:13
To:        [email protected]
Dear fellow Netlander:

I am developing research for a thesis, tentatively titled, "Computer Security
in the Process Control Environment".  I am seeking anyone's assistance in
obtaining information relevant to this topic, as there currently exists no
published data.  Specifically, I would like to reach people who have (or have
had) involvement in Computer Integrated Manufacturing (CIM), Computer Aided
Manufacturing (CAM), process control, and related fields.

Helpful information could include policies and procedures (current or past),
actual experiences, etc., regarding this area, in its broadest interpretation.
Suggestions gladly considered.  Please feel free to pass along a copy of this
letter to anyone who might be of further help.

*ANY* information, even if just deemed peripheral, would be of great value, as
such data can lead to other relevant information.  Please, if you think you
might have some helpful info, or think you might know someone or somewhere that
more info can be obtained, send me a note!

Data obtained will be compiled and published in Spring 1989, as my thesis.

Thank you for your attention, and please excuse any inconvenience...

|  Michael Kielsky                 |
 \   Bitnet: [email protected]  \
   \                                  \
     \   1902 East St. Catherine Ave.   \
       \   Phoenix, Arizona  85040        \
         \   (602) 276-4663 (Home)          \
           \   (602) 891-6927 (Work)          \
            |  All opinions expressed are the  |
            |  author's and in no way reflect  |
            |  the opinions of the Sane.  :-)  |
From:      [email protected] (Paul Kerchen)  28-FEB-1989 23:22:39
To:        security
Here at UC Davis, we are doing research on computer viruses under the
direction of Lawrence Livermore Nat'l Lab.  Here at UCD, we write 
anti-viral programs and virus detectors.  We ship them to LLNL and
they test them there on a completely isolated system with viruses
which they have.  Now, suppose that someone accidentally attached an
Ethernet cable to this isolated system.  What happens now?  A virus is
released on the Internet unintentionally.  No one gets any
satisfaction from seeing this happen.  However, there is a legitimate
purpose for writing these viruses.
If R. Morris really didn't mean to release his worm onto the Internet,
imagine the horror he probably felt when he realized what he had done
(put yourself in his shoes! :-) No flames, please.  I don't want to
debate RTM's guilt or innocence.).  So, what's the point?  The point
is that there are legitimate reasons for writing viruses, worms, etc.
How can one defend against a virus unless they have one to look at?

Paul Kerchen				| [email protected]
From:      [email protected]  28-FEB-1989 23:24:46
To:        security
[This was forwarded from comp.protocols or some such.    _H*]

>Anyone know of any references to layer encapsulation in official
>OSI or CCITT documents.  The problem is as follows:

>LLC type II is reliable and one might want to provide security
>on a per logical link basis.  Unless you want to provide reliability
>in the security layer itself, to make sure that applications which
>run in a secure environment, you want to put the security layer
>between LLC and the MAC layer, but at that point in the stack
>the protocol software should only be looking at the MAC addresses
>and security cannot be provided on a per logical link basis.
>And if the security layers lives at the top of LLC, then the
>security layer has to provide reliability.  

>I vaguely remember having this problem at Bell Labs when working
>on Data Teleconferencing and the solution was layer encapsulation
>where the security procedures would encapsulate the protocol
>layer, but I simply don't remember where this was described in
>the CCITT or OSI documents.  If someone could give a pointer,
>I would be grateful.

I am following the current deliberations of the 802.10 committee on this
issue.  Tony Bono had what I consider a good original proposal for an
architecture to deal with the issue.  Apparently, the committee could
not shoot down the proposal, but because he did not give a detailed
functional specification (which was not what I understood to have been
originally requested), the committee decided that they would only
provide security procedures at the boundary between LLC and the MAC
(based on pairwise MAC addresses) and not at the boundary between the
Network Layer and LLC based on LSAP/DSAP pairs for a given MAC layer
connection.  Personally, I can think of many reasons why one might want
to provide pairwise LSAP/DSAP security rather than simply point-to-point
MAC address based security.  It seems to me perfectly reasonable that
Network Management communications streams, OSI communications streams or
TCP/IP communications streams might all require different security
procedures at the boundary between LLC and the network layer. 

Personally, I would think distributed multi-level security would be a
nice thing.  Providing security procedures on a per LSAP/DSAP basis
would give the possibility of multi-level security at the link-layer, so
that a given host might be able to realize that a given data stream from
a host was trusted at a secret level because the user had logged into
the console in a room guarded by guys with machine guns while another
data stream from the same host was not trusted at all because the user
had dialed in from outside. 

I see this situation all the time.  Every time someone wants to
incorporate some new idea into OSI which actually gives some reason to
switch from TCP/IP to OSI, it gets shot down at the committee level. 
Now I understand why the best standards are those which were ad hoc
standards first, and only much later standardized by the international
committees.  Any comments?
From:      [email protected]  28-FEB-1989 23:57:22
To:        [email protected], ([email protected])

In answer to your questions...

1.  Background behind viri:  Everybody's stories differ.

2.  Good Books on Viri:  None.  Now, to answer your question:  Books
    are present, but they may not be good:  I have one called:
    "The Computer Virus Crisis" authors: Fites/Johnson/Kratz 
    Publisher: Van Nostrand Reinhold, NY NY  I bought this one because
    it has scads of references in the back, and it's copyrighted Jan 1989.
    I found it in a local B. Dalton store (in the nonfiction section).

3.  Legislation:  You asked the right person.  My office-mate is now
    looking through his stash of info, and already has found a 
    20-page paper titled, "Computer Crime Legislation" dated Nov 1987.
    I won't bore the readers of this list with a long-winded 
    description of which states have what laws, but in general, there
    are: "The Computer Fraud and Abuse Act"
         Fed Law TITLE 18 Part I Sect. 1029; 
         (same law)       Part I Chapt 47 Sect 1030
                                 Chapt 119 sects 2510-2521
                                 Chapt 121 sects 2701-2709
    All states except Arkansas, Indiana, Vermont, and West Virginia
    had at least one statute as of the time of the paper.

4.  Newer Legislation: Rep Wally Herger (R-Cal) introduced HR 5061 
    last July (The Computer Virus Eradication Act of 1988) but 
    Congress was not able to vote on it last year.  He re-introduced 
    the same bill (now called HR 55) on Jan 3, 1989; 42 House members
    are currently co-sponsors.  Who knows when this one will be voted
    upon.  This is different from the above law because this one 
    covers EVERYONE, while the above law only covered Government 
    computers.

5.  The "Computer Security Act of 1987" (HR 145), which you may have
    heard of, has nothing to do with Computer Fraud or Viruses.

6.  I heard through the rumor mill that Morris is getting off with 
    nothing but a slap on the wrist because the places he affected 
    don't want to get together to really give it to him.

for more info, send mail to:  [email protected]
From:      FLORY <[email protected]>  1-MAR-1989 12:01:07
To:        [email protected]
In response to "Commander Spock"'s question about sources of information
on why people write viruses, I suggest he look at a few recent magazine
articles (I really doubt any books have been written on the topic as of yet).

In the Summer issue of 2600 magazine there is an article by "The Plague"
called "How to Write a Virus: The Dark Side of Viruses".  He claims to have
written a virus called CyberAIDS which attacks the Apple II series, but
beyond his "qualifications" you can get a pretty good idea of the twisted
kind of mind that enjoys this kind of thing (Mr. "Plague" claims to have no
moral objections to trashing people's hard work).  The article goes into
the theory of virus writing (not system specific).  A careful reading between
the lines can provide a psychological outline of one kind of virus writer.

you can get a back issue of 2600 by writing to 2600 Magazine, PO Box 752,
Middle Island, NY 11953-0752.

You also may want to look up the Winter 1988 issue of "High Frontiers
Reality Hackers" for an article called "Cyber Terrorists / Viral Hitman"
Reading it between the lines also reveals a lot about the type of person
who would voluntarily release a virus.

David James Flory

PS I don't support, condone, or agree with any of these authors, I am
just bringing them up for a view of why people would write these things.
From:      "H.Ludwig Hausen +49_2241142426"            <[email protected]>  1-MAR-1989 12:11:12
To:        [email protected]
Hello netters, we are going to start a European
initiative on software certification (SWC) and therefore we would
appreciate receiving any information on
 - needs for SWC
 - objectives for SWC
 - effective procedures for SWC
 - tools for SWC
 - who is doing SWC
 - who should do SWC
 - etc.
Certification, in our view, includes all methods and procedures to
validate, verify, test, examine, measure, or assess software as a
product, or software being developed by a 'certified' development
process.

Thanks for any help.
 H.  L U D W I G    H A U S E N      ..................................
:                                    Telephone +49-2241-14-2440 or 2426:
: GMD Schloss Birlinghoven           Telefax   +49-2241-14-2618 or 2889:
: D-5205 Sankt Augustin 1            Teletex   2627-224135=GMD VV      :
:        West  GERMANY               Telex     8 89 469 gmd d          :
:                                    E-mail    [email protected]  :
:                                              [email protected]   :
 . . . . . . . . . . . . . . . . .  . . . . . . . . . . . . . . . . . .
:    GMD (Gesellschaft fuer Mathematik und Datenverarbeitung)          :
:    German National Research Institute of Computer Science            :
:    German Federal Ministry of Research and Technology (BMFT)         :
From:      "Kevin S. McCurley" <[email protected]>  1-MAR-1989 12:28:27
To:        [email protected]
The following conference may be of interest to this distribution list:
                             CRYPTO '89
                          CALL FOR PAPERS

The Ninth Annual Crypto Conference sponsored by the International
Association for Cryptologic Research (IACR) in cooperation with the
IEEE Computer Society Technical Committee on Security and Privacy, and
the Computer Science Department of the University of California, Santa
Barbara, will be held on the campus of the University of California,
Santa Barbara, on August 20-24, 1989.  Original research papers and
technical expository talks are solicited on all practical and
theoretical aspects of cryptology.  It is anticipated that some talks
may also be presented by special invitation of the Program Committee.

INSTRUCTIONS FOR AUTHORS:  Authors are requested to send ten copies of
a detailed abstract (not a full paper) by March 17, 1989, to the
Program Chairperson at the address given below.   Abstracts should
contain sufficient detail, as well as references to and comparisons
with relevant extant work, to enable Program Committee members to
appreciate their merits. It is recommended that abstracts start with a
succinct statement of the problem and discussion of its significance
and relevance to cryptology, appropriate for a non-specialist reader.
In order to facilitate blind refereeing, the names of authors and
their affiliations should only appear on the cover page of the paper;
it should be possible to remove this page and send the papers to
Program Committee members.  Limits of 10 double-spaced pages and 2500
words (not counting the bibliography and the cover page) are placed on
all abstracts. If the authors believe that more details are essential
to substantiate the main claims of the paper, they are asked to
include a clearly marked appendix that will be read at the discretion
of the Program Committee.  Abstracts that significantly deviate from
these guidelines risk rejection without consideration of their merits.
Abstracts received after the March 17 deadline  WILL NOT BE
CONSIDERED, unless they are postmarked not later than March 13 and
arrive a reasonable time thereafter.  Authors will be informed of
acceptance or rejection in a letter mailed not later than May 26.

A compilation of all abstracts accepted will be available at the
conference.  Authors of accepted papers will be given until July 14,
1989 to submit revised abstracts for this compilation.  Complete
conference proceedings will be published in Springer-Verlag's Lecture
Notes in Computer Science series at a later date.  The Program
Committee will consider abstracts that have also been submitted to
other conferences.  However, if a submission is accepted for
presentation at more than one conference, the authors may present the
results more than once, but may publish them in at most one
proceedings.

The Program Committee consists of
  Josh Benaloh (University of Toronto)
  Russell Brand (Special session chairperson, Lawrence Livermore Laboratory)
  Gilles Brassard (Committee chairperson, Universite de Montreal)
  Claude Crepeau (Massachusetts Institute of Technology)
  Whitfield Diffie (Bell Northern Research)
  Joan Feigenbaum (AT&T Bell Laboratories)
  James Massey (ETH Zentrum, Zurich)
  Jim Omura (Cylink Corporation)
  Gustavus Simmons (Sandia National Laboratories)
  Scott Vanstone (University of Waterloo)

Send abstracts to the                      For other information,
program chairperson:                       contact the general chairman:
----------------------------               ---------------------------
Gilles Brassard, Crypto '89                Kevin McCurley
Departement IRO                            IBM Research, K53/802
Universite de Montreal                     650 Harry Road
C.P. 6128, Succursale ``A''                San Jose, CA  95120-6099
Montreal (Quebec)                          U.S.A.
CANADA H3C 3J7                             telephone: (408) 927-1708
telephone: (514) 343-6807                  Internet: [email protected]
email: [email protected]           Bitnet:   [email protected]
From:      [email protected] (The Polymath)  1-MAR-1989 13:51:12
To:        [email protected]
}If you install an alarm
}system... you may be eligible for a discount (on your insurance).

I asked my insurance company about this when I lived in an apartment.
We'd had some burglaries and people were putting up window bars (the
landlord wouldn't )-: ).  They told me there was no insurance discount for
window bars.  A friend (Hi, Marvin!) explained the rationale:

If you have a fire, you generally lose everything and the insurance
company is stuck for the maximum payout.  So, smoke alarms get you a
premium reduction.  Burglars, on the other hand, generally only take a few
items.  The insurance company isn't all that much out of pocket (unless
you're a multi-millionaire with lots of expensive toys to be stolen) and
the relevant premium reduction isn't worth offering.  Bars also hinder
fire fighters.

Except for hindering fire fighters, I'd guess the same logic applies to
alarm systems.  They might save you some inconvenience by scaring away a
burglar, but they generally don't save the insurance company enough for
them to take an interest.

Not that they're a bad idea, mind you.  They just won't save you any
insurance money.

The Polymath (aka: Jerry Hollombe, [email protected])  Illegitimati Nil
Citicorp(+)TTI                                                 Carborundum
3100 Ocean Park Blvd.   (213) 452-9191, x2483
Santa Monica, CA  90405 {csun|philabs|psivax}!ttidca!hollombe
From:      "Craig Finseth" <fin[email protected]>  1-MAR-1989 14:17:15
To:        [email protected]
Cc:        [email protected]
Unfortunately, you missed the point.  The supplied permissions:

>    drwxr-xr-x 13 root         1024 Jan  9 13:31 /
>    drwxr-xr-x  2 root         3584 Jan 23 03:42 /etc
>    -rw-------  1 root         2899 Jan 10 12:48 /etc/passwd

will break most UNIX systems, as /etc/passwd must be readable to the world:

    -rw-r--r--  1 root         2899 Jan 10 12:48 /etc/passwd

Shadow password files are in theory not required.  In practice, when
someone can read the entire file (and hence all encrypted passwords),
run a password cracking program over it (that allows me to test
candidate passwords at a rate a thousand times faster than simply
attempting to log in), and people pick lousy passwords to begin with,
the assumptions behind the publicly-readable encrypted passwords break
down.  Hence, one would like to move the encrypted passwords to a
place where no one can read them, but otherwise leave the password
file unchanged (so that UNIX still runs fine).
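To see why world-readable hashes break down, here is a toy sketch. `toy_crypt` is a stand-in for crypt(3) (the salted one-way idea is the same, the algorithm is not), and the user name, salt, and wordlist are made up:

```python
import hashlib

def toy_crypt(password: str, salt: str) -> str:
    # Stand-in for Unix crypt(3): a salted one-way function.  The real
    # one is DES-based; the principle is identical.
    return hashlib.sha256((salt + password).encode()).hexdigest()

# What a world-readable passwd file leaks: a salt and hash per user.
passwd = {"alice": ("ab", toy_crypt("aardvark", "ab"))}

# Offline dictionary attack: each guess costs one hash computation,
# with no login delay and no audit trail on the target machine.
wordlist = ["password", "secret", "aardvark", "qwerty"]
cracked = {}
for user, (salt, hashed) in passwd.items():
    for guess in wordlist:
        if toy_crypt(guess, salt) == hashed:
            cracked[user] = guess
print(cracked)   # -> {'alice': 'aardvark'}
```

A shadow file defeats exactly this: without the hashes, the attacker is back to rate-limited, logged login attempts.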

> This scheme will have virtually no benefit on a well-administered UNIX
> system, whereas the cost is moderate and may be very high (if early
> implementations are easy to subvert).

Since I feel that the assumptions behind the current UNIX system are
violated, I disagree with this statement.  I feel that the (shadow
password file) scheme should be considered *mandatory* on a
well-administered UNIX system and the cost is minimal (you need to
change login, passwd, and su, less than a week's work even if you
don't have source).  While subvertible in principle, the shadow
password file scheme is definitely less so than the current system.

Craig A. Finseth			[email protected] [CAF13]
Minnesota Supercomputer Center, Inc.	(612) 624-3375

[Moderator note: Sun already has one in their version 4.0 and up; it sits
in /etc/security/passwd.adjunct and is protected -rw-------.  It seems
reasonable that producers of other unix systems would soon come up with
something similar...  _H*]
From:      Mark Nelson <[email protected]>  2-MAR-1989  0:29:51
To:        [email protected]
    A friend of mine (without access to news) plans to write a survey
paper on network security techniques.  Does anyone have any good references
they could recommend?  I know this is pretty vague, but I don't think he
has narrowed the topic yet.  I will summarize if there is any demand.

Mark Nelson      [email protected]
From:      goldstein%[email protected] (Andy Goldstein)  2-MAR-1989  0:36:03
To:        [email protected]
In a recent posting, James Galvin solicits implementations of and
experiences with the RSA public key algorithm. Please be aware, folks,
that the RSA algorithm and most of its applications are protected by
a patent held by MIT and licensed to RSA, Inc. While I don't think anyone
could mind if you read the textbooks and implemented your own version
for some private hacking around, any sort of public distribution or
serious use without a license from RSA, Inc. is likely to draw
unfavorable attention from their lawyers.

Don't get me wrong - I've met R & S and they're incredibly nice guys.
But the corporation paid MIT a considerable amount of money for its
license and it's going to get its money's worth. Anyway, the RSA
patent is one of the few that's really worthy of being patented.
From:      <[email protected]> (Tom, Tech. Support)  2-MAR-1989  0:39:42
To:        [email protected]
        In this state, trespassing is considered a crime and that crime,
by itself, _can_ sustain a charge of burglary.  This would especially be
true in the case where a search incidental to the arrest produced a set
of lock picks.
        In that situation I would charge with Criminal Trespass, Burglary,
and Possession of Instruments of Crime.  It is safe to assume that a plea
bargain would result: the felony charge of Burglary would be dropped and the
deft. would plead guilty to the misdemeanors.  (Unless I forgot to Mirandize
him, which is another issue.)

	I would say it would depend on a jury.  If burglar tools (lock picks,
and the like) are in possession, then there _IS_ a presumed intent.  And,
in our state, as I noted earlier, the entrance into a building with intent
to commit _any_ crime constitutes burglary. Therefore, an unlawful entry
into a building while in possession of burglar tools pretty well nails it.

>Consider the *incidental* thief who is also regularly and
>legitimately employed as a carpenter/plumber/construction worker/whatever
>and just happens to carry all these around in his vehicle

	I agree with what you have said in context; but once the tools of the
legitimate trade have been removed from the vehicle and used to gain access
to a locked building you have another situation.  No luck needed - the
tools become burglar tools ("instruments of crime" per PA Crimes Code).
From:      Joe Keane <[email protected]>  2-MAR-1989  5:49:52
To:        [email protected]
There's really no way to protect a PC's hard disk, since the user has full
control of the machine.  The right thing to do is get some networking software
and a dedicated file server.
From:      Jon Loux <[email protected]>  2-MAR-1989  6:06:57
To:        [email protected]
In regards to locking hard disks on IBM type PC's used for student labs,
there are several security packages available for this kind of thing.  At
the University of Connecticut, we have been testing a product called PC/DACS.
(Pyramid Development Corp. 20 Hurlbut St, West Hartford, Ct. 06110).

This product will allow you to create user ID's and profiles similar to a
Mainframe configuration.  There must be at least one security administrator,
who can create new ID's and write resource rules.  A PC can be set up with a
default user ID which gets automatically logged on when the machine is booted
from the C drive.  We use this feature in the labs, where we want the PC to
boot without any user intervention.  The default ID is given read only access
to the entire hard disk.  In one lab, they set it up so that the default ID
has write access to a \TEMP directory so they can keep data for graphics
programs on the hard drive while running the programs.  The \TEMP directory
gets purged at every boot.

There is a boot protect option, which, when activated, will make your hard
drive unrecognizable if you boot from the A drive.  I think it relocates the
FAT, or something insidious like that.  There is also a disk encrypt option
which will scramble your entire hard disk for you, if you're into scrambled
hard disks.  I haven't played with this option, since we really don't need
anything quite that cloak and dagger here.  We are mostly guarding against
accidentally deleted files and cluttered hard disks.

A new feature in the latest release of PC/DACS allows you to write resource
rules for floppy drives as well.  I've been thinking of using this to
disallow the running of programs from floppies.  If it works out, this may be
an acceptable way of limiting the spread of viral programs.

By the way, The State of Connecticut has contracted Pyramid Development Corp.
for the use of PC/DACS in all State agencies.  This is not intended to be an
endorsement of the product, merely a critique.
From:      [email protected] (Mark A. Heilpern )  2-MAR-1989  6:09:23
To:        [email protected]
One way is to put all 'restricted' programs on one hard disk, or one section
of the hard disk, and simply unmount that section when students are in the
area. This is rather drastic, but the most secure.

Another method: assign group access to the files and place all users who
need to access them inside this group.

One final, and least secure method: (If you are assured to what times
students will have access) set up an 'at' daemon to remove/restore
all permissions to the required files.  NOTE: If you choose this method,
think about the possible login via modem at these times...
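That third method might be scripted roughly like this (a sketch; the path is hypothetical, and the two functions would be invoked from at(1) or cron at the chosen times):

```python
import os

# Hypothetical list of restricted programs.
RESTRICTED = ["/usr/local/lab/prog1"]

def lock(paths):
    # Strip every permission bit while students have access.
    for p in paths:
        os.chmod(p, 0o000)

def unlock(paths):
    # Restore owner rwx / group rx afterwards.
    for p in paths:
        os.chmod(p, 0o750)
```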

From:      [email protected] (Li GONG)  2-MAR-1989  6:09:52
To:        [email protected]
Prof. Wheeler mentioned a scheme which is being recommended to customers
of a credit company to securely remember ALL your PINs.  This could be
adopted to remember those passwords a user has to change periodically.

The trick is simply as this:  suppose you want to remember your PIN for
VISA card which is 1234 and that for ACCESS card which is 3456 and that
for Diners which is 7890.

1) Write down A to Z and choose a word (gray, for example).
2) Write down your PINs in such a way that each PIN is on a separate line
   and the 4 digits are under the four letters g, r, a, y.  Leave other
   places blank.

          A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
  VISA    3           1                     2             4
  ACCESS  5           3                     4             6
  Diners  9           7                     8             0

3) Fill in all blank spaces with random digits.

Now the task of remembering a number of passwords is reduced to remembering
an easy word.  Also, this list can be made public with virtually no loss in
security (although try to keep it secret).  There are many variations, for
example to allow repeated letters in a word.  Well, then what ?
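The trick is easy to mechanize. A sketch (the function names are mine; like the example above, it assumes a word with no repeated letters):

```python
import random
import string

def make_grid(word, pins):
    # One row per card: the PIN digits sit under the letters of the
    # secret word; every other column gets a random decoy digit.
    grid = {}
    for card, pin in pins.items():
        row = {c: random.choice("0123456789") for c in string.ascii_uppercase}
        for letter, digit in zip(word.upper(), pin):
            row[letter] = digit
        grid[card] = row
    return grid

def recover(grid, card, word):
    # Read the digits back out from under the word's letters.
    return "".join(grid[card][c] for c in word.upper())

grid = make_grid("gray", {"VISA": "1234", "ACCESS": "3456"})
print(recover(grid, "VISA", "gray"))     # -> 1234
```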
| Li GONG (+44223-334650)     University of Cambridge, Computer Laboratory |
| InterNet/CSnet : lg%[email protected]  (or |
| UUCP : ...!ukc!!cam-cl!lg   Bitnet/EAN : lg%[email protected] |