The 'Security Digest' Archives (TM)

Archive: About | Browse | Search | Contributions | Feedback
Site: Help | Index | Search | Contact | Notices | Changes

ARCHIVE: Rutgers 'Security List' (incl. misc.security) - Archives (1989)
DOCUMENT: Rutgers 'Security List' for January 1989 (68 messages, 48317 bytes)
SOURCE: http://securitydigest.org/exec/display?f=rutgers/archive/1989/01.txt&t=text/plain
NOTICE: securitydigest.org recognises the rights of all third-party works.

START OF DOCUMENT

-----------[000000][next][prev][last][first]----------------------------------------------------
From:      Phil Hochstetler <sequent!phil@cse.ogc.edu>  3-Jan-1989 10:29:18
To:        misc-security@tektronix.tek.com
Given all this mention of "fast DES code", how about someone
posting how to obtain this code?  Not all of us got a copy of
it when it went by.
-- 
Phil Hochstetler
Sequent Computer Systems
Beaverton, Oregon
-----------[000001][next][prev][last][first]----------------------------------------------------
From:      doug@lenti.uucp (Doug Davis)  3-Jan-1989 10:49:16
To:        texbell!killer!misc-security@cs.utexas.edu
I just posted a fix to the recent /bin/mkdir security problem that
was posted to news.admin. The fix was posted in both news.admin
and comp.sources.misc.

Those of you who have a setuid root /bin/mkdir, or a non-4.2 BSD (or
greater) system, will most likely be interested in this fix.

doug davis
--
Lawnet  
1030 Pleasent Valley Lane.
Arlington Texas 76015
817-467-3740
{ sys1.tandy.com, motown!sys1, uiucuxc!sys1, killer!texbell } letni!doug
  "Talk about holes in UNIX, geez that's nothing compared with the security
      problems in the ship control programs of StarFleet."
-----------[000002][next][prev][last][first]----------------------------------------------------
From:      *Hobbit* <hobbit@pyrite.rutgers.edu>  3-Jan-1989 11:09:16
To:        security
I have trouble believing this one myself.  Anyone who spreads such a
rumor should include more in-depth technical details, like a genuine report
from the Rockwell people as to whether this is possible or not.  Any modem
that can respond in that manner to "commands" from a *foreign* modem [not
to mention being smart enough to inject code for the machine it's connected
to into the bit stream?!??!] is far too smart for its own good.

Could someone with an in at Rockwell, or whoever makes the relevant chip set,
give 'em a call, okay??

_H*
-----------[000003][next][prev][last][first]----------------------------------------------------
From:      "W. K. (Bill) Gorman" <34AEJ7D@CMUVM>  3-Jan-1989 11:29:16
To:        PROFS Discussion List <PROFS-L@UGA>
Is anyone familiar with a commercially available, two key
(public key/private key) encryption/decryption system that might
either run on, or be adaptable to run on, PROFS under VM/HPO and
CMS? This would be primarily for internal use, as sending encrypted notes
through the network in general raises other problems.
Those-Who-Know-That-They-Know here are becoming concerned about
this issue.

Thanks in advance.

Bill.
-----------[000004][next][prev][last][first]----------------------------------------------------
From:      <TOM@FANDM.BITNET> (Tom, Tech. Support)  3-Jan-1989 11:49:16
To:        security@pyrite.rutgers.edu
>>Check with your local police, because in many locales
>>"possession of burglar tools" is considered a crime.
>>(I disagree with such laws; it is commission of burglary
>>that should be the crime, not ownership of a flashlight.)

        I presume that you are using some sarcasm here, but there are still
two points I'd like to make as an ex-police officer of 9 years.

1. Commission of a burglary _IS_ a crime.  In fact, it's a Felony in
Pennsylvania, and it is defined as _any_ entry into a building with the
intent to commit a crime.  This would mean any crime, from rape to theft to
criminal mischief.  That's a statute with some teeth!

2.  Possession of burglar tools is also a crime in this state.  As with your
flashlight example, there are many legitimate uses for almost anything ...
even slim jims, pry bars, and dynamite have proper uses.  It is the
circumstance surrounding the possession that makes the offense.  A slim jim
in the hands of someone who normally and routinely makes a living with
locks does not imply an attempt to steal a car, but if he's found at 3:00AM
with no owner around and a black box for hot wiring ... .  BTW the same
logic applies to deadly weapons.  I have personally arrested and convicted
on possession of a baseball bat, but rest assured that my defendant was not
on the ball field.

*        HAVE A GOOD DAY        *
*                               *
* Tom Mahoney                   *
* Computer Electronics Tech.    *
*                               *
* FRANKLIN & MARSHALL COLLEGE   *
* Computer Services             *
* Technical Support Center      *
* Lancaster, PA  17604-3003 USA *
*                               *
* Bitnet Address: TOM@FANDM     *
*********************************
-----------[000005][next][prev][last][first]----------------------------------------------------
From:      CTM@cornellc.cit.cornell.edu  3-Jan-1989 12:09:16
To:        "Security List." <security@pyrite.rutgers.edu>
     Some of the people on the virus-l list have suggested that the modem
virus is impossible and is indeed a form of virus passed around by HUMANS
in the form of endless messages about the modem virus.

     Since the messages about the virus have hard names connected to them
it would seem that this report could be checked out with thoroughness.

     It is a serious security risk to get people thinking there is a security
risk when there isn't.  While they are looking for the bug that ain't,
the bug that is crawls in the back door.

     In my opinion this modem virus is probably real and should be
looked into.

     I am using a 2400 baud modem to write this.
-----------[000006][next][prev][last][first]----------------------------------------------------
From:      ellis@godot.psc.edu (James Ellis)  3-Jan-1989 21:29:18
To:        misc-security@uunet.uu.net
I see many folks suggesting that password files should be
regularly scanned to check for easily guessed passwords.
And a number of sites have one style or another of a password
cracking program for just this purpose.

But is it not more appropriate for this effort to be put into
the passwd(1) program itself, to prevent people from using poor
passwords in the first place?  This certainly takes less CPU time
and has the advantage of giving users immediate feedback on the
quality of their passwords.  BRL wrote a nice version of passwd with
a number of such checks some years ago.  MCNC added a history of
previously used passwords and some other checks.  It seems to work
well with few user complaints.
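The proactive approach described above can be illustrated with a short sketch. The specific rules below are hypothetical inventions, not BRL's or MCNC's actual checks:

```python
# Sketch of proactive password screening at change time, in the spirit
# of the modified passwd(1) programs mentioned above.  The rules here
# are illustrative, not any particular program's actual checks.

def password_problems(candidate, username, history=()):
    """Return the reasons a proposed password should be rejected."""
    problems = []
    if len(candidate) < 6:
        problems.append("too short")
    if candidate.lower() == username.lower():
        problems.append("matches login name")
    if candidate.isalpha():
        problems.append("letters only; add digits or punctuation")
    if candidate in history:
        problems.append("reuses a previous password")
    return problems          # empty list means the password is acceptable
```

A passwd replacement would loop, printing the reasons and re-prompting, which is exactly the "immediate feedback" advantage over after-the-fact cracking runs.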
-----------[000007][next][prev][last][first]----------------------------------------------------
From:      Glenn Hyatt <hyatt@udel.edu>  3-Jan-1989 21:49:18
To:        G.CHIASSON@xx.drea.dnd.ca
Cc:        security-request@pyrite.rutgers.edu
>Even searching the source code isn't a guarantee: a really determined 
>foe could modify the compiler or run time library ...

At the bank where I work we put very rigorous controls over updates to
all code used "in production."  When someone is given authorization to
change or add code, a second person reviews the change, tests it, and
moves it into production.  The change actually made is then reviewed
via audit trails and comparing link modules.

Now, I'm a data security manager, and my boss pointed out that the integrity
of the audit trails is a sine qua non in all of this.  So I pointed
out that the integrity of audit trails depends on the integrity of the
code that produces the audit trail, which depends on the link modules for:
the code that produces the audit trail, the compilers that compile it, the
linker that links it, the address-translation mechanisms, the I/O modules
involved . . . .  Oh, yes, and how do we know that the piece of paper in
our output bin is the one produced by our high-integrity audit trail?

I'm getting out of data security.  My brain hurts.
							- Glenn
-----------[000008][next][prev][last][first]----------------------------------------------------
From:      "Homer W. Smith" <CTM@cornellc.cit.cornell.edu>  5-Jan-1989  1:02:01
To:        "Security List." <security@pyrite.rutgers.edu>
     Our system forces us to change the passwords every 90 days.
However it does not keep track of the old passwords.

     Thus I comply and change the password, and then I change
it right back again to the original.

     I know people who have used the same password for all their
accounts on every computer they have ever touched for 20 years.

     Not bright, but I can understand the attitude.

     I guess people have not learned that passwords are not secure, or
they assume that the only threat is random people who, out of boredom,
might want to sign on.  The dedicated hacker with years of programming
aimed at cracking every encryption scheme in existence is beyond
their ken.
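The "change it and change it right back" loophole described above is exactly what a password-history check closes. A minimal sketch, assuming hashes of the last few passwords are kept (real systems would store salted one-way hashes; this simplified version omits the salt):

```python
import hashlib

# Minimal sketch of a password-history check that would stop the
# "change it right back" trick described above.

class PasswordHistory:
    def __init__(self, depth=5):
        self.depth = depth          # how many old passwords to remember
        self.hashes = []

    @staticmethod
    def _hash(password):
        return hashlib.sha256(password.encode()).hexdigest()

    def change(self, new_password):
        """Accept the change only if the password was not recently used."""
        h = self._hash(new_password)
        if h in self.hashes:
            return False            # reuse of a remembered password: reject
        self.hashes = ([h] + self.hashes)[: self.depth]
        return True
```

Only hashes are stored, so keeping the history does not itself create a cache of cleartext passwords.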
-----------[000009][next][prev][last][first]----------------------------------------------------
From:      Harold Pritchett <HAROLD@uga.uga.edu>  5-Jan-1989  1:22:35
To:        security@pyrite.rutgers.edu
>Password aging forces me to have different passwords for all of these
>(they age at different rates).

Why not change all of them to the same thing when the first one expires (if
what you really want is one password on all systems)?  I personally like
the idea of different passwords for each of my privileged accounts, and a
single password for all of the non-privileged accounts.  That way, if someone
gets one of my super accounts, they don't get all of them.

I also have no problem with writing my passwords down and putting them in
my wallet.  If someone wants them bad enough to mug me for them, they can
have them...  It's not as if I have national security information in my
accounts.

Harold C Pritchett         |  BITNET:  HAROLD@UGA
BITNET TechRep             |    ARPA:  HAROLD@UGA.UGA.EDU
The University of Georgia  |    BELL:  (404) 542-3135
Athens, GA 30602           |
-----------[000010][next][prev][last][first]----------------------------------------------------
From:      Chriz@cup.portal.com  5-Jan-1989  1:42:21
To:        security-request@rutgers.edu
In a very old science magazine, there was an article about 
an experiment involving a camera lens and a piece of black tape.  
The black tape was used to cover the surface of the front of the
lens. Small holes were cut in a regular pattern in the tape.
The lens was then attached to a camera, and a line pattern was
used as a target for the modified lens to focus on. Film recorded
an image on the focal plane.  The lens was able to record the
line pattern clearly.  The experimenter concluded that a huge
telescope could be built in space by precisely aligning fragments
of a "lens" in 3-d space, and recording information that passed
through the precisely aligned lens fragments.
   Here is the point.  If one views an intelligence agency as a
giant lens with information only accepted at certain points (like
the lens with black tape on it), and like a lens, the information
is passed through and summarized on a focal plane (the top of the
organization) one has the basis for playing dirty tricks on the
organization.  First, the organization is compartmentalized, and
thus the receiving people on the surface of the lens have no idea
that what they are seeing is part of a larger pattern. Thus, the
receiving agents can be counted on to transmit the data received
regardless of how strange an element the data forms on the focal
plane.  Second, if one can pinpoint the openings in the black
tape, one can feed a series of simultaneous data at the face of
the "lens" (like the line pattern mentioned above), and have a
reasonable  assurance that the pattern will be reproduced at the
top of the spy organization.
Here is a plan of attack based on this principle:
Goal:
To leave the sustained impression on those that keep an eye on
things that there is a major new spy organization working on a
major operation in the U.S.
This will be done within the confines of the law, and without
spending too much time or money doing it.  A church group,
charity, or other group of idealistic people could do it.
Step 1: Gain attention.
Note: phone books can be obtained in quantity, but virtually any
other publication with low temperature data will work.
a.  In fifty cities, drop a small town (preferably from UTAH or
Missouri) phone book in an envelope, with the address
"Counterintelligence, Washington, D.C." on it.
b.  Send the same phone book to each embassy on embassy row in
Washington D.C., and to each consulate around the country.
c.  Air freight copies of the same phone book to neutral central
American or Latin American countries.
d.  Have groups who ordinarily attract the attention of the
authorities place a copy of the phone book in their (sometimes
broken into) offices.
Step Two:
At protest rallies, make sure that at least one person, and
preferably twenty or thirty, carry the phone book.
Step Three:
Publish all protest letters, pamphlets, notices of meetings with
a page from the phone book (with names and letters crossed out,
or blacked out) reproduced on the back side of the publication.
Step Four
Suddenly cease having anything to do with the phone book.
-----------[000011][next][prev][last][first]----------------------------------------------------
From:      mhw@wittsend.lbp.harris.com (Michael H. Warfield (Mike))  9-Jan-1989  8:27:36
To:        misc-security@gatech.edu
>	- make your passwd program check that the password
>		is different from the ESP (you don't need a
>		clear-copy ESP around to do this - just
>		crypt with the ESP's salt)

     Huh, not a good move.  This would require that you use the same salt for
every user and ES password on the system.  The "crypt"-based security is only
marginally secure at best; compromising the encryption algorithm by forcing
the salt to a constant value is asking to get it cracked.  You would be much
better off crypting the ESPs in a fixed, reversible manner AND protecting them
through permissions (I like 400, root owner) and other obfuscations (hide them
in a binary library or some other "innocuous" file).  That may not make them
impossible to find, but it sure can make it tough, without compromising the user
passwords in the process.
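For reference, the quoted suggestion rests on the fact that a crypt(3)-style entry carries its own salt as a prefix, so a candidate password can be re-encrypted with that stored salt and compared, with no cleartext copy needed. A sketch of the mechanism, with SHA-256 standing in for the DES-based crypt (the entry format and names here are illustrative assumptions):

```python
import hashlib

# Illustration of the salt mechanism under discussion, using SHA-256
# as a stand-in for DES crypt(3).  Each stored entry keeps its own
# salt as a prefix; a candidate password is checked by re-hashing it
# with that entry's salt.  No cleartext copy of the stored password
# is needed for the comparison.

def make_entry(password, salt):
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt + "$" + digest

def same_password(candidate, entry):
    salt = entry.split("$", 1)[0]   # recover the salt from the entry
    return make_entry(candidate, salt) == entry

esp_entry = make_entry("tape-backup-pw", "qx")   # hypothetical ESP entry
```

Whether all ESP entries share one salt (the scenario objected to above) or each keeps its own changes only what `salt` is read from each entry.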

---
Michael H. Warfield  (The Mad Wizard)	| gatech.edu!galbp!wittsend!mhw
  (404)  270-2123 / 270-2098		| mhw@wittsend.LBP.HARRIS.COM
An optimist believes we live in the best of all possible worlds.
A pessimist is sure of it!
-----------[000012][next][prev][last][first]----------------------------------------------------
From:      "Keith F. Lynch" <KFL@ai.ai.mit.edu>  9-Jan-1989  8:47:36
To:        Security@pyrite.rutgers.edu
Cc:        KFL@ai.ai.mit.edu
Forwarded from the VirusBoard BBS at (225) 617-0862 [sic]

Date: 11-31-88 (24:60)              Number: 32769
  To: ALL                           Refer#: NONE
From: ROBERT MORRIS III               Read: (N/A)
Subj: VIRUS ALERT                   Status: PUBLIC MESSAGE

Warning: There's a new virus on the loose that's worse than anything
I've seen before!  It gets in through the power line, riding on the
powerline 60 Hz subcarrier.  It works by changing the serial port
pinouts, and by reversing the direction one's disks spin.  Over
300,000 systems have been hit by it here in Murphy, West Dakota alone!
And that's just in the last twelve minutes.

It attacks DOS, Unix, TOPS-20, Apple II, VMS, MVS, Multics, Mac,
RSX-11, ITS, TRS-80, and VHS systems.

To prevent the spread of this dastardly worm:

1) Don't use the powerline.
2) Don't use batteries either, since there are rumors that this virus
   has invaded most major battery plants and is infecting the positive
   poles of the batteries.  (You might try hooking up just the
   negative pole.)
3) Don't upload or download files.
4) Don't store files on floppy disks or hard disks.
5) Don't read messages.  Not even this one!
6) Don't use serial ports, modems, or phone lines.
7) Don't use keyboards, screens, or printers.
8) Don't use switches, CPUs, memories, microprocessors, or mainframes.
9) Don't use electric lights, electric or gas heat or air conditioning,
   running water, writing, fire, clothing, or the wheel.

I'm sure if we are all careful to follow these 9 easy steps, this
virus can be eradicated, and the precious electronic fluids of our
computers can be kept pure.

--RTM III
-----------[000013][next][prev][last][first]----------------------------------------------------
From:      J.D. Abolins <OJA@NCCIBM1.BITNET>  9-Jan-1989  9:02:21
To:        SECURITY@pyrite.rutgers.edu
Reply to Mr. Gorman's comments about the "secureness" of personnel:

True. There are factors that affect the level of the "secureness" of
personnel, whether for a computer center, a casino, or whatever. Some
of them are...

1. Self-motivated breaches of security:
   * Character of each employee-- susceptibility to greed, to
     mutiny (i.e., revenge for wrongs, real or perceived), ideology, etc.
   * Attitudes towards reasonable security procedures. Some people tend
     to be lax in this area because of ignorance of the reasons for
     security practices, resentment of being told to follow procedures,
     apathy, overconfidence, power plays, laziness, foolish bravado,
     poor communication of the procedures by management, frustrating
     security procedures, encouragement by others not to use procedures,
     time pressures, effects of chemical dependency, depression, etc.
   * Ideology, which was mentioned above, does not have to be a matter
     of political ideology. It can encompass religious views, ethical
     views, and relational views. A difference in viewpoints/ideology is
     not necessarily a security hazard. It becomes one when the
     employee's ideology and the real or perceived ideology of the
     employer clash and the employee must decide between them. The
     outcome of that decision is what determines whether it
     threatens security. (Some may seek to sabotage by action or
     inaction; others may quit the job; still others may submerge their
     beliefs, leading to cognitive dissonance.)

2) Susceptibility to influence by others:
   * The influence of family and friends. This can play off of the
     factors mentioned above. Family financial problems can make
     an employee more susceptible to greed. Family strife (e.g.,
     divorce proceedings, marital discord) may affect the employee's
     judgement, making him/her more susceptible to taking the
     anger out on the employer. Also, it is among family and friends
     that an employee might say too much about his employer's security
     procedures.
   * The possibility of threats to self and/or, especially, to family
     opening the door to influencing an employee. This is a very
     tough issue. First, the impact of such a threat, and the perception
     of choosing between family and the job, weigh heavily upon
     any normal person. Second, most people, outside of regimented
     situations such as the military, see that they are not paid well
     enough to risk their own lives, let alone those of their spouse or
     children.
   * The prevalence of "social engineering" by many intruders
     underscores a major vulnerability. "Social engineering" is the
     practice of wheedling, coaxing, threatening, etc. pieces of useful
     info out of a company's employees. The techniques used can vary
     greatly. One of the main safeguards is the clear communication
     and practice of basic security precepts. For example, a rule that
     passwords are not to be divulged over the phone or by mail to
     anyone, no matter who they claim to be, is a start.

Some of these factors present special challenges for the civilian
employer. While some companies are now taking a careful look at their
employees' personal lives, looking for signs of financial or marital
difficulties, chemical dependencies, etc., this comes with great risk
to the civil liberties of the employees. This factor is compounded by
the polychotomous view of modern man: that the man/woman in the office
has no linkage to the same person at home, and that problems can be left at
the door of the office. Yet some companies find that by gently keeping
track of possible turmoil in an employee's life, giving counseling/treatment
opportunities, and reassigning the person to less critical tasks,
they can help both the employee and themselves.

Another challenge is giving ways for the employees to notify the
employer about security threats. This is important for cases of
social engineering and for cases of threats against the employee and his/
her family. If they do not have a way of seeking help, they may be
more likely to acquiesce to the demands. Also, in the case of "social
engineering", the employee should be aware of what it is and that
it is better to report such attempts to a security manager.

J.D. Abolins
301 N. Harrison Str. ; 197
Princeton, NJ  08540

This is purely my off-the-cuff opinion. It does not represent the views
of the NCC, the DEP, or anybody else.
-----------[000014][next][prev][last][first]----------------------------------------------------
From:      gloom!cory@encore.encore.com  10-Jan-1989 18:34:37
To:        security@rutgers.edu
How many of you have ever either written or run into 'login
simulators'?  Of those of you who haven't, how many of you could write
one?  (Does everyone have their hands up now?)

Are there any systems out there that implement some way of verifying
that the program that you (the prospective user) are talking to is
really the login program? a program that SHOULD be trusted with your
password?  

Anyone got any good ideas on how to do this?
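One idea sometimes suggested (a sketch of the general concept, not any particular system's feature): have the genuine login program prove itself by echoing a per-user secret phrase before asking for the password. A simulator that doesn't know the phrase can't reproduce it. The store and names below are hypothetical:

```python
# Sketch of a "trusted greeting": the real login program displays a
# secret phrase chosen by the user before prompting for the password.
# A login simulator has no access to the phrase store, so a missing or
# wrong greeting warns the user before any password is typed.

GREETINGS = {"cory": "purple elephants at dawn"}   # hypothetical store

def login_prompt(username):
    """What the genuine login program would print after the username."""
    phrase = GREETINGS.get(username, "** no greeting on file **")
    return "Your greeting: " + phrase + "\nPassword: "
```

This only helps if users refuse to type a password when the greeting is wrong or absent; a hardware secure-attention key that always reaches the real login program is the stronger answer.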

+C
-- 
Cory ( "...Love is like Oxygen..." ) Kempf
UUCP: encore.com!gloom!cory
	"...it's a mistake in the making."	-KT

[Moderator note: It's one of the first things the engineering frosh do on our
vaxcluster.  If the faculty involved took the time to educate these people a
little better about the computer they were supposed to use, significantly
fewer people would fall victim to this -- as it is, they run around so
pitifully clueless, that such games usually work.   _H*]
-----------[000015][next][prev][last][first]----------------------------------------------------
From:      etg!acheron!scifi!njs@uunet.uu.net (Nicholas J. Simicich)  10-Jan-1989 18:54:08
To:        misc-security@uunet.uu.net
On the Independent Network News, the other night, there was an
interesting(?) article about luggage checking.  

Someone proposed checking all luggage by exposing it to a powerful
Electro-Magnetic pulse.  The theory is that you run all luggage
through a building, and inside the building, suitably shielded
regarding both electrical and blast effects, you, well, apply an
Electro-Magnetic pulse to the luggage that is strong enough to set off
any explosive (as well as ammunition and so forth) concealed in the
luggage.  The implication was that the explosive didn't have to be
hooked to a detonator.

They had some academic type who was proposing this on, and interviewed
him.  No one asked the question which was obvious to me:  What does
this do to every piece of electronic gear unfortunate enough to be
in its path?  My first guess is that a pulse which is strong enough to
set off explosive (there was reference to powerful Navy radar setting
off shells in the TV article) will also burn out every piece of
electronic gear from digital watches to calculators and, of course,
our favorite computers.  It would probably also erase floppy disks.
-- 
Nick Simicich --- uunet!bywater!scifi!njs --- njs@ibm.com (Internet)
-----------[000016][next][prev][last][first]----------------------------------------------------
From:      annala@neuro.usc.edu (A J Annala)  12-Jan-1989  0:20:40
To:        misc-security@ucbvax.berkeley.edu
>for example California has a licensing requirement.  To obtain this license 
>one is required to submit their fingerprints and a form to the state.  The state
>runs a simple background check to make sure you are not a "criminal."

I am a little curious about this message ... the president of the california
locksmithing association told me a few months ago that locksmiths were now
required by california law to hold a valid california contractor's license
... failure to obtain such a license could subject one to arrest for simple
possession of locksmithing (read burglars) tools.

AJ
-----------[000017][next][prev][last][first]----------------------------------------------------
From:      hsc@mtund.att.com (Harvey Cohen)  12-Jan-1989  0:36:45
To:        misc-security@att.att.com
I don't have a copy of the original article, but I think the
version posted here (misc.security) has been edited to remove
all references to inside accomplices, physical break-ins, and
other means by which Mitnick gained initial access to systems.
The remaining text is misleading, I think, in implying that
Mitnick somehow started only with public-domain information 
and used only skill to breach security.
This is all too common in publicity about computer security 
breaches.  The notion that sensitive computer systems are
somehow accessible to any hacker by skill alone is romantic
and newsworthy, so the fact that access really depended on 
getting phone numbers or passwords from an inside accomplice
or by physical break-in is downplayed or even ignored.
The techniques of Mitnick and others like him resemble in many
respects the techniques of the professional magician,
and the press plays along by emphasizing the "magic."
-- 
Harvey S. Cohen, AT&T Bell Labs, Lincroft, NJ, mtund!hsc, (201)576-3302

[Moderator note:  Well, *I* didn't edit it down -- it came that way..   _H*]
-----------[000018][next][prev][last][first]----------------------------------------------------
From:      ncc!myrias!dbf@pyramid.com (David Ferrier)  12-Jan-1989  1:01:58
To:        mnetor!alberta!misc-security@uunet.uu.net
>Password aging minimizes the amount of time that your password is open
>to attack.  You may have a well-chosen password, but the longer it is
>used, the more likely it is that someone has [obtained it]...

This sounds good, but no matter how they try to justify or explain it,
password aging is one of those things that looks really good to system
administrators, auditors, and security consultants, but in practice does
not give enough benefit to justify the tremendous inconvenience and loss
of time caused to users and the organization.

Security measures are put in place to prevent losses.
If the cost over time of a security measure exceeds the 
probability of loss over time times the value of the assets,
use of the security measure is bad management. 
Password aging is an example of a security measure, 
which, except for the CIA or other exceptional organizations, 
usually costs more to implement than the value of the assets protected.
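The argument above is the standard expected-loss test; a worked instance with made-up figures:

```python
# Worked instance of the cost test above (all figures hypothetical):
# a measure is justified only if its cost over time is below the
# expected loss it prevents, i.e. probability of loss x asset value.

def measure_justified(annual_cost, loss_probability, asset_value):
    expected_loss = loss_probability * asset_value
    return annual_cost < expected_loss

# e.g. password aging costing $50,000/year in lost user time, guarding
# against a 1% annual chance of compromising $1,000,000 in assets:
# the expected loss is $10,000, so the measure fails the test.
```

Putting even rough numbers on both sides of the inequality is the "cost justification" the author asks for later in the message.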

What does password aging buy you? 
--------------------------------

- it helps reduce risk by preventing access to
the system and data by unauthorized users. 

Examination of past security incidents invariably shows 
that almost all damage done to systems or data was done
by authorized users with passwords, not by the spooks that
password aging is supposed to defend against.

What are the risks of access by unauthorized users?
------------------------------------------------

- theft of machine cycles, unauthorized access to data, 
unauthorized modification or destruction of data.

In most systems, the wastage of machine cycles by authorized
users who are inexperienced or inefficient, or read dozens of USENET articles
every day, far exceeds the possible cost of system use arising out of
unauthorized access.

As for data: signon passwords are only the first line of defense.  

Depending on the system, a user often has limited access to
data. Unless the data are not backed up, contain vital
trade secrets, or have no audit trail log of modifications,
access by an unauthorized user is not much of a problem--not
enough, anyway, to justify the cost of password aging.

What is the objective improvement to security given by
password aging?
--------------

- who knows? How can you measure the likelihood of a password
being compromised when it is not changed regularly? A similar
study might be done on people with wall safes who do not change 
the combination on a regular basis. 

What is the cost of password aging?
----------------------------------

- administrative: staffing a responsive corporate security
department that can give out new passwords to users who tend to forget theirs
when they have to change them regularly

- user: need to build into project schedules enough slack to
allow for loss of productivity due to being unable to access
the system because a password has expired

- organizational: replacing people who get fed up with
the security run-around and leave

Anything constructive to say about password aging? 
--------------------------------------------------

The following concepts came from working with a password aging system used by
a Toronto computer utility that prevented reuse of
any password for 20 cycles. Worse, it even prohibited 
use of near matches--"moon" and "fool" for
example. Users had to keep a list of old passwords, because
as a final diabolical twist, the system only gave you five
tries to assign a valid new password when the old one expired,
at which point use of your id was suspended.

- If you must have password aging, keep it within reasonable bounds.
As with any other corporate program, force the people proposing it
to do a cost justification, and to make a business case, if they can,
for forcing people all over the company to do regular password changes.

- Make sure it is an option that you can
control on an individual or departmental basis,
so that only people with high risk data or extensive access
rights are put to the inconvenience of changing passwords
frequently, or at all. This control should extend to the number 
of generations of old passwords that are kept on file to ensure 
the new password does not replicate a previous password.
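For what it's worth, the "near match" rule described above can be implemented with an edit-distance check; a sketch (the utility's actual rule is unknown; a threshold of two edits is an assumption that would pair "moon" with "fool"):

```python
# Sketch of a near-match test like the one described above, using
# Levenshtein edit distance.  The threshold is an assumption chosen
# so that "moon" vs. "fool" (two substitutions) counts as too close.

def edit_distance(a, b):
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def too_similar(new_pw, old_passwords, threshold=2):
    return any(edit_distance(new_pw, old) <= threshold
               for old in old_passwords)
```

Run against the stored generations of old passwords, this is the check that turns diabolical when combined with a five-try limit.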
-- 
David Ferrier                            Edmonton, Alberta
alberta!myrias!dbf                       (403) 428 1616

[Moderator note: It looks like the upshot of this discussion is that aging
 isn't really much help...   _H*]
-----------[000019][next][prev][last][first]----------------------------------------------------
From:      Michael DeCorte <mrd@sun.soe.clarkson.edu>  14-Jan-1989 19:25:21
To:        misc-security@rutgers.edu
>It also seems that few universities or other institutions of higher education
>admit to viruses being a major problem.  I don't know of any courses offered
>in the subject of computer security and virus detection.  Are there any at
>your school?

No, it would be overreacting to the problem.  Every student here at
Clarkson gets a Z-200 (AT clone).  Every one gets a Unix account.
No one has reported a case of the flu among these 4,000 ATs.

Now, we were lucky that we didn't get hit by the RTM worm, so I suppose
you could say we have been hit 1/2 a time.

So what am I trying to say here?  Let's just be rational about this
and not overreact.  The panic over viruses seems very similar to the
panic over poisons being put into food (read: Tylenol).  Everyone was
worried that it would destroy our entire food distribution network.
When was the last time you heard anything about it?
--

Michael DeCorte // (315)265-2439 // P.O. Box 652, Potsdam, NY 13676
Internet: mrd@sun.soe.clarkson.edu  // Bitnet:   mrd@clutx.bitnet        
-----------[000020][next][prev][last][first]----------------------------------------------------
From:      ole!powell@teltone.com (Gary Powell)  14-Jan-1989 19:38:48
To:        security@pyrite.rutgers.edu
In case you think no one would go to the trouble of listening to your 
conversation, the local paper (Seattle Times) has run a couple of articles
on the folks who listen in, and what is said.  Apparently the best
conversations involve couples who fight, lie etc.  ("Honey, I'm going to be
late from work, Oh Yeah?, Yes I'm at ______'s bar."....)

I have yet to see any articles on prosecution of listeners.

Will digitizing the signal allow for more traffic? I have heard that L.A.
is in trouble with all the cellular phone use, and that the lines fill up.

My understanding is that with a "cordless" phone no warrant is required for a
"tap".  Does anyone know if that applies to cellular phones?  (I have seen
articles where neighbors inadvertently picked up conversations about drug
deals and informed the police, who then made recordings.)
--

Gary Powell
UUCP!uw-beaver!tikal!ole!powell         Seattle Silicon Corp. (206) 828-4422
-----------[000021][next][prev][last][first]----------------------------------------------------
From:      Don Hopkins <don@brillig.umd.edu>  14-Jan-1989 20:15:17
To:        V4039@TEMPLEVM.BITNET
Cc:        security@pyrite.rutgers.edu
     A question of relevance to this discussion is along the following
   lines.  Is it not the ethical responsibility of our government to
   establish laws and guidelines which software must pass before being
   distributed?

NO WAY! I think it's a BAD idea for the government to regulate software
distribution in an attempt to prevent viruses.  I think the proposed
prevention is worse than the cure.  How can government regulation do
anything to prevent computer viruses without being oppressive, costly,
and ineffective? It'll always be up to programmers, users, and system
administrators to take the appropriate precautions (like archiving
source code, and making regular backups), to minimize the loss of time
and data if and when the occasional virus happens along.  How can the
government make virus prevention the software distributor's
responsibility by regulating software production?

   There should be some sort of committee made up of
   individuals from government and private industry who are
   responsible for certifying software.  For gosh sakes, even floppy
   disks must under some sort of certification!

First off, there's no way a person or committee can certify a piece of
software virus free, with 100% certainty.  The function of a computer
virus is totally open-ended. There are too many ways in which one could
operate.  Computer programs can be extremely complicated, undocumented,
and obfuscated, and the environment in which they run has to be
considered as well. 

It's extremely silly to compare floppy disk certification with
software certification!  Software certification is *NOT* something
that can be automated, like trivially checking a floppy disk for bad
sectors.  How many people do you know who could have looked at the
source code to the 4.3BSD finger daemon and realized that it could be
used to get a shell, because it was doing a gets() into a local string
variable?  Would you stake your career on somebody else's program
being virus-free?

Besides being impossible, software certification would be unthinkably
costly in terms of time and money. Even partially certain certification
is a tedious, intensive process that requires a great deal of time and
understanding and the attention of an experienced programmer. How much
do you think you'd have to pay a committee of expert programmers to
grovel over other people's code looking for hidden traps? Don't you think
people with such skills have better things to do with their valuable
time?

How long do you think it would take for a committee to certify one single
program virus free (even just partially certain)? Does it have to be
well documented? Or even bug free (ha ha!)? Can a program be too big to
certify? How many programs would need to be certified each year?  How
long would the waiting list be? How would the list be prioritized?
Think of all the nasty legal ramifications -- trade secrets,
nondisclosure agreements, conflict of interest, etc...  And think of the
potential for corruption, discrimination, censorship, and other abuses.

All that red tape would add an indefinitely large amount of time to how
long it would take to get a product to the market. It would annihilate
the software industry! Companies struggle to get their products on the
market as fast as they can. The bureaucratic delay would be fatal to
small companies that depend on the income from their products to
survive.

Do you have any idea of the size of the software industry???! Flip
through a copy of Computer Shopper some time! How many virus
verification committees would there have to be to screen all the new
software products on the market?  They'd have to hire up half the
competent programmers in the world just to be on committees, at salaries
competitive with the ones offered by software companies that are already
having a hard enough time finding people to hire.

Just think about how much the overhead of "virus-free" certification
would add to the price of software! Somebody's got to pay for it! Of
course it would all be passed on to the users in the end. If software
publishers had to pay for certification, then only companies the size
of Microsoft would be able to afford all the salaries, legal fees,
payoffs, and bribes that it would take to get a product to market
before the hardware it ran on was obsolete.  So should the government
pay for it? Would you propose a software virus certification tax?
(sheez, and I thought Stallman was radical! ;-) 

	-Don
-----------[000022][next][prev][last][first]----------------------------------------------------
From:      dplatt@coherent.com (Dave Platt)  14-Jan-1989 20:23:47
To:        misc-security@ames.arc.nasa.gov
< Since there is no regulartory agency whose job it is
< to certify software and it's potential for harboring viruses and
< legitimate bugs, proprietary software becomes just as easy to infect at
< the publishing house as any of your own disks.

Hmmm.  This flies in the face of my own (limited) direct experience with
viruses in the Mac world, and with what I've heard from other Mac users.

In the Mac world, I know of only one instance in which a virus was
propagated through a commercial/shrink-wrap package... this was the case
in which a copy of Brandow's "MacMag world peace" INIT virus infected a
copy of the master distribution disk for Aldus FreeHand, and was shipped
to customers.

On the other hand, there are _many_ cases in which the nVIR and SCORES
viruses have travelled around via public domain software, shareware, and
via copies of commercial applications that people were carrying around
on diskette for their own convenience.  I received two shareware games
from a friend, and (before running them) found that they were infected
by SCORES.  Last week, the brother of one of our employees dropped by to
use one of our Macs to print a Microsoft Word document on our
laser printer;  his copy of MS Word (on his diskette) was infected by the
nVIR virus, and we almost suffered an invasion of our Macs' hard disks.
Fortunately, all of our Macs are running with Vaccine, which stopped the
infection.

Stan writes that "there is no regulatory agency whose job it is to
certify software ...  proprietary software becomes just as easy
to infect at the publishing house as any of your own disks."  The first
part of this statement is true, but I really don't agree that the
conclusion follows, for a number of reasons:

1) Most people buy a relatively small number of commercial programs
   (say, a dozen or two), but tend to trade data and program (shareware,
   PD, etc.) diskettes with other users fairly frequently.  Commercial
   programs tend to be installed on a hard-disk relatively soon after
   purchase, while shareware/PD diskettes float back and forth between
   machines quite a bit.

2) Commercial program disks usually go through a single "choke-point";
   a master disk is created for a specific version of the program, and
   then copies are made from the master (through one or more generations
   of copying).  Thus, if the master disk is checked for virus
   infections and found to be virus-free, the copies made from the
   master will also be virus-free.  It's certainly possible for a
   single infected master-disk to cause many users' machines to become
   infected... but it's equally easy to ensure that the master is
   uninfected.

3) Stan's statement implies very strongly that it's necessary to certify
   a program (through some sort of regulatory agency) in order to ensure
   that it's virus-free (and/or bug-free).  This is flat-out WRONG!
   Virus-detecting programs are available both commercially and for
   free; I have a suite of about a dozen antivirals for the Mac, which I
   use fairly regularly.  I'm about as certain as I can be that my Mac
   library is virus-free.  I trust the antivirals I have in-hand (and
   their authors, and the knowledgeable people on the Net) far more than
   I would trust an anonymous regulatory agency.

4) I'm curious about Stan's reference to "legitimate bugs".  Are we
   going to assert that some regulatory agency will be competent to
   detect all (or most) bugs?  or that such an agency should have the
   ability to forbid shipment of buggy software?  Good luck, in both
   cases!

My personal belief is that we do NOT need yet another regulatory agency
to look at commercial software prior to sale and ensure that it's
virus-free.  For one thing, this agency would be unable to keep up with
the changing and evolving nature of computer viruses... their tests
would almost certainly be out-of-date and obsolete by the time that they
were approved and put into general use.  For another, the volume of
programs that this agency would need to handle would be utterly
incredible, and they'd almost certainly have a really serious backlog.
Would _you_ want to have to wait 6 months to ship a new release of your
software (or even a bug-fix that your customers were screaming for)
because the certification agency was overbooked?

To make matters worse, the presence of such an agency would do _nothing_
to keep viruses from spreading over noncommercial channels... via
public-domain software exchanges, bulletin-board systems, or diskettes
passed from hand to hand.  Based on my experiences with viruses, and
what I've heard from other people, virus distribution via commercial
shrink-wrap products is only a very small portion of the problem.

What I believe that we _do_ need is better education about viruses, more
care and vigilance on the part of users and software distributors, and
continued dissemination of good anti-virus software tools.  I've been
extremely impressed by the willingness of good software authors to write
and hand out anti-viral utilities... mostly with no expectation of any
financial return.  These utilities (e.g. Vaccine, Interferon, AntiPan,
etc.) have saved my bacon at least twice!

By analogy: it's rarely been possible to make a dent in the incidence of
sexually-transmitted diseases by making it more difficult to have sex,
or by insisting that people be tested for infection prior to having sex.
Education and honest information, on the other hand, have led people to
change their behaviors and to avoid high-risk actions; as a result, the
incidence of STDs tends to drop in well-educated populations.  Nobody
_wants_ to get an STD;  if you teach people how to avoid exposure to
disease, they'll usually do so.

Similarly, nobody wants to have their computer infected by a virus; if
you teach them what behaviors to avoid (e.g. indiscriminate swapping of
diskettes with other folks; use of pirated software) and what behaviors
to practice (running good antiviral programs), they'll usually do so.

> Are there any [virus courses] at your school?

I haven't heard of any such courses to date.  There has been quite a bit
of discussion (in comp.risks, for example) on the subject of teaching
students about the ethics of computer use... for example, why it's a
really bad idea to write viruses (even if it seems like a "neat idea").

I suspect that the current wave of viruses is a recent enough phenomenon
that most colleges/universities haven't yet had the time to really
consider what impact it should have on their curriculum, and that we
will see some courses addressing this whole area start to pop up within
the next year or two.

System-administration people at quite a few universities have mentioned
(in comp.sys.mac, for example) that they've been having serious problems
with viruses spreading through their student-use computer populations.
Some of these people (e.g. John Norstad at Northwestern University) have
been leaders in the fight to analyze and protect against these viruses.

< For gosh sakes, even floppy disks must under some sort of certification!
< It's kind of silly to certify the integrety of floppy disks when we are
< allowed to purchase disks with software that might very well have a
< virus due to the lack of regulations and standards in this area.

Wellll... there's at least one misunderstanding/misstatement-of-fact in
the above paragraph.  Although there is an ANSI standard for the testing
of diskettes, there is no requirement (as far as I know) that diskettes
be tested and certified according to this standard before being sold.
The standard exists for a good reason... so that there is an agreed-upon
indicator of diskette quality, that can be referred to by diskette
manufacturers and by the diskette-purchasing public.  I certainly
wouldn't purchase diskettes from a vendor that didn't test and certify
its product to the ANSI standard... but I know of no reason why such
diskettes couldn't/shouldn't be marketed.  If people refuse to buy
uncertified diskettes, then eventually most or all vendors will test and
certify their diskettes to the currently-agreed-upon standard.

It's important to note, however, that the fact that a standard exists,
and the fact that vendors claim that "our diskettes are certified to the
ANSI standard", does _not_ mean that the diskettes are necessarily of
good quality or will provide reliable service.  I recently received a
copy of a very interesting technical report on 3.5" diskettes.  The
report indicated (with statistics to back up their statements) that
there's a great deal of quality variation between vendors, and that many
disks that are sold as "100% certified to ANSI standards" are actually
of mediocre quality and appear to fail the ANSI certification when
purchased and tested.  It may very well be that some manufacturers are
lying... they may be shipping disks that don't meet ANSI standards, and
saying that the disks are certified.  THIS is something that should
really be dealt with, at some level... perhaps ANSI should consider
filing a cease&desist order against companies whose products are clearly
substandard but which are being sold as having met the standard.

I _really_ question the assumption that the solution to every problem is
to set up a new regulatory agency, and to funnel the output of a whole
industry through the agency's hands.
-- 
Dave Platt    FIDONET:  Dave Platt on 1:204/444        VOICE: (415) 493-8805
  UUCP: ...!{ames,sun,uunet}!coherent!dplatt     DOMAIN: dplatt@coherent.com
  INTERNET:   coherent!dplatt@ames.arpa,    ...@sun.com,    ...@uunet.uu.net 
  USNAIL: Coherent Thought Inc.  3350 West Bayshore #205  Palo Alto CA 94303
-----------[000023][next][prev][last][first]----------------------------------------------------
From:      jbrown@jato.jpl.nasa.gov (Jordan Brown)  16-Jan-1989  3:17:48
To:        misc-security@ames.arc.nasa.gov
>it not the ethical responsibility of our government to establish laws
>and guidelines which software must pass before being distributed?

No.  Impractical.  Caveat Emptor.

> We have laws regulating production of auto's and other consumer products

No we don't.  *Some* consumer products are subject to forcible recall,
but there is no certification.

>For gosh sakes, even floppy disks must under some sort of certification!

Voluntary, and self-administered.  There are ANSI standards for floppies,
sure.  Government doesn't enforce them, market does.

I hesitate to even respond to this kind of thing.  Who's going to do
this testing?  The government?  Using whose money?  The producer?
Who's going to watch to make sure he does it, and does it "right"?

How do you *exhaustively* test a complex piece of software?  Software
is more complex than *any* consumer product, up to and including small
jets.  (Excepting, of course, the software used by the avionics.)  For
an airplane, you can say "must withstand, without deformation, loads of 3 Gs
at max weight" and "must recover hands-off from a spin of less than one
turn" and other absolute requirements.  You can't do that for non-trivial
software.  (Of course, any good software producer will do some amount
of testing.  However, in a scenario like you propose, you must codify
how much testing is to be done and what the acceptance criteria are.)

The real answer is rational application of existing liability laws,
combined with market forces.  If Ford makes a car that tends to blow up
when rear-ended, people sue them and win.  If a dairy sells bad milk,
people sue them and win.  If Chevy makes a car that breaks all the time,
people stop buying Chevys.

Do you really want the computer business (this really applies to hardware
as well as software) to be like the drug business, where worthwhile
products are simply discarded because the expense to certify them exceeds
the possible revenue, and where things regularly take ten YEARS to come to
market?  Sure, this kind of care is necessary for life-critical applications,
and they get it.  For business apps?  Maybe.  For games?  Are you kidding?
-----------[000024][next][prev][last][first]----------------------------------------------------
From:      lypowy@cpsc.ucalgary.ca (grep)  16-Jan-1989  4:11:17
To:        security@rutgers.edu
Here at the University of Calgary there are no specific courses on Computer
Security per se, however the Department does have a reading course at
the undergraduate level (CPSC599.nn) that can handle any topic.  I myself
became interested in Computer Security only recently, and upon finding no
courses in this area approached a member of the faculty with my dilemma.
He gave me the information on CPSC599, and this last semester we studied
viruses and other forms of 'Malicious Logic'.  
The very idea of teaching a course on this topic is a contentious one, the
like of which, I am sure, we have all seen kicked around in one form or
another in other newsgroups/on other systems.

> Is it not the ethical responsibility of our government to establish laws
> and guidelines which software must pass before being distributed?

Ethics are always a problem.  Ethically, perhaps it is, however the delays
right now on the release of a piece of software are sometimes unbearable.  Can
you imagine the latency when each piece of software must be federally approved?
Also, there is the argument that in enforcing such guidelines we might be
limiting the sophistication of the software itself.

> same should be true of software.  There should be some sort of committee made
> up of individuals from government and private industry who are responsible
> for certifying software.  For gosh sakes, even floppy disks must ...

Keep in mind that the auto industry has had a number of years of production to
call upon for their experience; their problems have been monitored and treated
over a much longer period than what we are dealing with here.  The widespread
problem of Computer Viruses et al is fairly recent.  

> It's kind of silly to certify the integrety of floppy disks
> when we are allowed to purchase disks with software that might very well have
> a virus due to the lack of regulations and standards in this area.

This is somewhat confused.  Your point is valid, but the example is perhaps
not the best.  It IS important to certify floppy disks (the physical media)
to maintain some sort of quality control; they do not certify what goes onto
the disk (be it programs or data).  You always take a chance with the software
you buy - sometimes the risks are minor ("Will this program be as useful to me
as the salesperson claimed it would be?"), sometimes major ("Is this program
clean or has it been infected by some sort of Virus?").  

There are a number of good points in your message, Stan!  Many of the areas
that you address may be in their embryonic stages as we speak.

Greg Lypowy
University of Calgary Computer Science Department
2500 University Drive N.W.			       lypowy@cpsc.UCalgary.CA
Calgary, Alberta, T2N 1N4	 ...!{ubc-vision,ihnp4}!alberta!calgary!lypowy
-----------[000025][next][prev][last][first]----------------------------------------------------
From:      tat@pccuts.pcc.amdahl.com (Tom Thackrey)  17-Jan-1989 11:14:28
To:        misc-security@ames.arc.nasa.gov
>it not the ethical responsibility of our government to establish laws and
>guidelines which software must pass before being distributed?

On the contrary, it is the ethical and moral responsibility of good
government to avoid censorship and encourage creativity.  A highly
regulated software industry would be just like the auto industry: a very
small number of very large corporations producing very uninteresting
software that all looks alike.  The assumption that government regulation
can protect against bugs, time bombs, infections, poor documentation, or
user carelessness is naive.  Remember that the Corvair, Pinto, DC-10, Electra,
Tacoma Narrows bridge, Kansas City Hyatt Regency, Titanic, Love Canal,
lawn darts, and Challenger were all products of regulated industry.

An independent testing organization along the lines of Consumer Reports
might be useful.  Publicity will probably be the most effective tool to
kill off bad or poorly designed products.

-- 
Tom Thackrey sun!amdahl!tat00
[ The opinions expressed herein are mine alone. ]
-----------[000026][next][prev][last][first]----------------------------------------------------
From:      gwyn@smoke.brl.mil (Doug Gwyn )  17-Jan-1989 11:31:47
To:        misc-security@uunet.uu.net
>I have personnaly arrested and convicted on posession of a baseball bat,
>but rest assured that my defendant was not on the ball field.

You could get away with that through the vagueness of the definition
of "deadly weapon" (a proper interpretation of which must take intent
or actual use into account; every kitchen contains potentially "deadly
weapons").  But if the law said that possession of a baseball bat was
a felony (and some laws are almost that stupid), then you would have
to engage in "selective enforcement", which is fine when the officer
exhibits good judgement, but many people now don't want to have to
rely on that.  (Too many cases of poor judgement have been reported.)
My point is that a law should specify accurately what the intended
effect is, rather than address it indirectly by trying to
manipulate possible causes.  Indirect restrictions invariably also
interfere with perfectly legitimate actions by non-criminals, and that
is not usually logically necessary in order to fight actual crimes.

To get back to the topic of computer security, laws and regulations
that do not DIRECTLY address criminal actions involving computers will
very likely hamper the application of computing power to meet human
needs.  That would be very sad.
-----------[000027][next][prev][last][first]----------------------------------------------------
From:      gwyn@smoke.brl.mil (Doug Gwyn )  18-Jan-1989  3:17:46
To:        misc-security@uunet.uu.net
>Is it not the ethical responsibility of our government to establish laws and
>guidelines which software must pass before being distributed?

Not in any country that considers the freedom of its citizens to be a
virtue.  I leave it to you to decide if that applies to the USA today.

>For gosh sakes, even floppy disks must under some sort of certification!

No, they don't HAVE to undergo "certification" (which is actually just
MEDIA VERIFICATION performed by the manufacturer).  However, quality
assurance is quite important for manufacturers who plan to remain in
business for a long time, and in fact most long-term successful
manufacturers do realize this and have established quality assurance
programs.  Seldom does this involve the government; it's just good
business.

>... due to the lack of regulations and standards in this area.

Government regulations seldom help ANYthing, and usually exacerbate
the very problems they were intended to solve; haven't you noticed??
-----------[000028][next][prev][last][first]----------------------------------------------------
From:      Barry Margolin <Margolin@PCO_MULTICS.HBI.HONEYWELL.COM>  18-Jan-1989  3:47:37
To:        security@pyrite.rutgers.edu
Stan Horwitz asks about the possibility of regulations against computer
viruses.  A bill was introduced this fall in Congress to make computer
viruses and worms illegal.  I recently saw the entire text posted to a
Usenet newsgroup, but I don't remember which.

Of course, the federal bill only covers viruses transmitted in
interstate commerce, but it seems likely that states would follow suit.
                                        barmar
-----------[000029][next][prev][last][first]----------------------------------------------------
From:      ki4pv!tanner@bikini.cis.ufl.edu  18-Jan-1989  3:54:05
To:        security@pyrite.rutgers.edu
)  Is it not the ethical responsibility of our government to establish
) laws and guidelines which software must pass before being distributed?
No.  The legitimate functions of government are two:
	(a) provide for common defense against external powers, and
	(b) enforcement of contracts.

If two parties wish to exchange money for any sort of buggy software
(and most software is buggy), that is their business.  All the
government can do is assure that said buggy software is delivered as
promised.

) I know that the government has guidelines for itself about the
) integrity of software for it's internal systems.
Right.  Any two parties can, if they wish, negotiate any sort of
contract they desire.  If they wish to agree that the software should
be warranted free of bugs for a period, or that it should be of such
a coding style, they are certainly free to do so.  If the government
is one of the parties to a contract, they may arrange such terms as
please them.

) We have laws regulating production of auto's and other consume
) products and services.  The same should be true of software.
Simply asserting that there should be such regulation, in the absence
of convincing argument (perhaps sent under separate cover) is not
sufficient.  Demonstrate that there should be regulation of software.

) For gosh sakes, even floppy disks must under some sort of
) certification!
Floppy disks are regulated only by private industry.  If a
manufacturer wishes to call his disks "certified", he is free to.
If he wishes to go farther and actually certify something, then
you can check and see if his claim is true.

Further: it is much easier to check floppy disks for bad spots of
such a size or larger than it is to check a piece of software (given
as object only) to be sure that it is entirely free of bugs and
anti-social behaviour (virii, worms).

If you are concerned about the quality of software you are
purchasing, then wishing for government regulation is the wrong way
to deal with your problem.  The government will simply assist in
restricting the supply, causing a price increase.

Instead of hoping for the government to solve your problems, you
should negotiate a contract somewhat more reasonable than those
"shrink-wrap" licenses you find in so many packages.  You may also
be well served to insist on a source distribution; this latter is
my personal advice.

					Dr. T. Andrews, Systems
					CompuData, Inc.  DeLand
-----------[000030][next][prev][last][first]----------------------------------------------------
From:      J.D. Abolins <OJA@NCCIBM1>  18-Jan-1989  4:26:45
To:        SECURITY@pyrite.rutgers.edu
Although I don't have statistics to compare virus cases between those
vectored by commercial/shrink-wrapped software and those vectored by
shareware/public-domain software, shareware often gets pinned with the
blame for spreading viruses. Sometimes, various writers have panned
any software that doesn't come in a shrink-wrapped package and with
a hefty three-digit price as "leper software". <My apologies to anyone
who has Hansen's disease.>

Taking a closer look shows that the issue is not as clear cut as these
critics claim.

1) The term "Shareware" does not describe the quality of the software.
   Rather, it describes the mode of distribution. That's all. (I guess
   some critics succumb to the notion "If it's good, it has to be
   expensive," and its inverse.)

2) Commercial/shrink-wrapped software is distributed via a short chain
   to the user: Author -> Manufacturer -> Distributor -> Retailer -> User.
   If shareware is obtained via a short chain, as in the case of buying
   it directly from the author or from a commercial distributor, the
   risks are almost equal to those of commercial/shrink-wrapped
   software. (The main factor that increases the risk slightly is that
   many shareware authors don't use, or don't have, isolated computer
   systems dedicated to their programming alone.)

3) While the number of REPORTED incidents of viruses carried by
   commercial/shrink-wrapped software is very low (two solid cases
   and a couple of alleged ones), the number of people and systems
   affected by the known cases is large. In the case of viruses
   vectored by shareware, I have yet to hear of one originating on the
   author's system. Instead, the viruses are introduced further down
   the chain of distribution (in a users' group, on a BBS, etc.), and
   the spread is spotty rather than massive, as in the case of
   commercial/shrink-wrapped software.

Stan Horwitz asked if laws can be enacted to set standards for software
so that viruses could be prohibited. A good question. Here are some
observations from a non-lawyer who is otherwise informed about computer law...

1) Computer law is so new and complex that lawmakers are having a
   hard time catching up. One challenge is to define the problem,
   the crime, the standards for determining guilt, and the appropriate
   action. This has to be done wisely; otherwise we can get laws that
   make anybody who installs buggy software a criminal. <Since even DOS
   has bugs, that produces a lot of criminals.>

2) The issue of general software standards has been discussed elsewhere
   on BITNET. <I think RISKS has mentioned it a couple of times.>
   Great Britain is considering such standards. Defining those standards
   in law is more difficult. In the case of "certified" diskettes, the
   hardware and other constraints define the boundaries well. It's
   very hard to find the boundaries of something as abstract as software.

3) Except for deliberate corporate "virus wars," as presented in a
   scenario in one PC Magazine column, commercial software companies
   do not seek to put viruses on their products. The viruses get in by
   accident, by sabotage, etc. Here, liability and tort procedures
   apply.
-----------[000031][next][prev][last][first]----------------------------------------------------
From:      jef@helios.ee.lbl.gov (Jef Poskanzer)  19-Jan-1989  3:30:12
To:        misc-security@ucbvax.berkeley.edu
This may already be common knowledge, but...

Some terminal emulator programs have an amusing bug.  When they see the
text "NO CARRIER" at the beginning of a line, they stop listening to
the modem.  Like this:

NO CARRIER

If your emulator has this bug, you are no longer on line, and are not
reading this.  Yes, this sounds far-fetched, but I can personally
assure you all that it's not just another chain-letter variation like
the modem virus story.  I discovered this on the WELL a while back when
I opened a topic called "NO CARRIER", and then got mail from a user
complaining that whenever he tried to read the topic his modem hung
up.  He was not computer-literate enough to have been making a joke.
Recently another user reported the same problem.

This represents a security risk of the denial-of-service type.
Fortunately it's easy to fix -- just toss the buggy terminal program
and get a better one.
---
Jef

             Jef Poskanzer   jef@rtsg.ee.lbl.gov   ...well!pokey
                                 Burma Shave.
-----------[000032][next][prev][last][first]----------------------------------------------------
From:      hollombe@ttidca.tti.com (The Polymath)  19-Jan-1989  3:32:53
To:        misc-security@sdcsvax.ucsd.edu
}1. Commission of a burglary _IS_ a crime.  In fact, it's a Felony in
}Pennsylvania, and it is defined as _any_ entry into a building with the
}intent to commit a crime.  This would mean any crime, from rape to theft to
}criminal mischief.  That's a statute with some teeth!

Not as many or as sharp as you might think.  It can be difficult to prove
_intent_.  Even if you catch the criminal red handed, in the act of
committing a crime, you still have to prove they _intended_ to commit the
crime _when they entered the building_ to make burglary stick. (Of course,
they're still guilty of the crime you caught them in the midst of).

"Honest, Judge, I just ducked in to get out of the rain.  Then I saw all
that stuff lying there and decided, 'What the heck ...'".

Theft?  Yes.  Burglary?  Try and prove it. (Granted, possession of
burglar's tools would certainly be an indicator of intent and probably be
considered sufficient proof of same).

-- 
The Polymath (aka: Jerry Hollombe, hollombe@ttidca.tti.com)  Illegitimati Nil
Citicorp(+)TTI                                                 Carborundum
3100 Ocean Park Blvd.   (213) 452-9191, x2483
Santa Monica, CA  90405 {csun|philabs|psivax}!ttidca!hollombe
-----------[000033][next][prev][last][first]----------------------------------------------------
From:      *Hobbit* <hobbit@pyrite.rutgers.edu>  19-Jan-1989  3:38:29
To:        security@pyrite.rutgers.edu
Tom Mahoney points out [EMPHASIS mine] --

   ... burglary ... is defined as _any_ entry into a building WITH THE
   INTENT to commit a crime.  This would mean any crime, from rape to theft ...

This seems to be how most statutes are laid out.  Thus, if it can be proven
that I entered a building to, for instance, shortcut through a block instead
of having to walk around the end, and exited the building out another door,
the most I've committed is trespassing.  *Whether or not* I had my lockpicks
in my pocket at the time.  Correct?

However: I do keep a homebuilt slimjim in my car [and in my office] and have on
several occasions helped people bail themselves out of a lockout.  I've never
used same to enter a car unless someone authorized wanted me to do so.  But how
would I prove this to you if you simply encountered me and said slimjim
returning to my car or office after rescuing someone's keys, and they've long
since driven away?  Would you arrest on the simple basis of this random hunk of
metal in my hand?  A good proportion of these "rescues" have taken place in the
wee hours, since I and people I associate with are usually awake at those times
anyway.

At MIT and many other places, non-destructive entry for its own sake is a
long-standing tradition among the students.  The "hacker ethic" promulgated
there is such that common thievery is shunned, although unfortunately there are
always exceptions that keep the campus security people paranoid.  The way the
laws are written, it seems to me that simple *entry* isn't a crime unless some
*other* crime was committed inside the entered premises.  [Signing one's name
on a concrete wall that nobody looks at is normally the most that happens on
these building-hacking expeditions.]  The real screw comes when a defendant
claims to have entered purely for its own sake, and all too often it's left to
subjective judgement by the authorities.  Needless to say I don't cotton to
this idea -- I love to explore buildings and such, but always have a hard time
convincing other people it's harmless and creates no security risk.  Comments?

_H*
-----------[000034][next][prev][last][first]----------------------------------------------------
From:      "W. K. (Bill) Gorman" <34AEJ7D@CMUVM>  19-Jan-1989  4:25:20
To:        SECURITY Digest <SECURITY@OHSTVMA>
>        I presume that you are using some sarcasm here...
Absolutely none in this case...

As a member of a law enforcement oriented family (father, brother) and
with some security experience myself, there are one or two *more*
points to be made here.

>...and it is defined as _any_ entry into a building with the
>intent to commit a crime.  ... That's a statute with some teeth!

And a Prosecutor's worst nightmare. Proving intent, or capacity therefor,
is no easy task.

>2.  Posession of burglar tools is also a crime in this state.
>...  BTW the same logic applies to deadly weapons.

Unless *specifically* defined by statute as a banned tool or device,
applying this logic to everyday items can easily give rise to subjective
interpretation, alias discretionary enforcement. The weapons
example is a case in point (granting of licenses is a very subjective
process in many areas), as can drugs, alcohol, or almost anything else.
Your dynamite reference offers proof of the converse: dynamite is federally
regulated and those possessing it are either licensed to do so or they are
in violation of federal, and probably state, statutes; a more clear cut and
sharply defined situation than possession of something or other which may
or may not be capable of use in an illegal manner by an individual who
may or may not intend to use whatever-it-is illegally at a time and
place which may or may not be suggestive of criminal activity depending
on circumstances which may or may not be apparent to an officer who may or
may not have training available to enable him or her to respond appropriately
in the given circumstances. The point of all this is to enable an officer
on the scene to make a good, solid arrest of someone who richly deserves
it; an arrest that will stand up later in court, not one that may either
be later shown to have been in error or worse, is thrown out of court
and releases a criminal back into society.

Don't bother sending me flames about this - my brother and I have
both paid our dues.

*******************************************************************************
* A CONFIDENTIAL COMMUNICATION FROM THE VIRTUAL DESK OF:                      *
*******************************************************************************
...............................................................................
|W. K. "Bill" Gorman                 "Do             Foust Hall # 5           |
|PROFS System Administrator        SOMETHING,        Computer Services        |
|Central Michigan University      even if it's       Mt. Pleasant, MI 48858   |
|34AEJ7D@CMUVM.BITNET                wrong!"         (517) 774-3183           |
|_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_|
|Disclaimer: These opinions are guaranteed against defects in materials and   |
|workmanship for a period not to exceed transmission time.                    |
|.............................................................................|
-----------[000035][next][prev][last][first]----------------------------------------------------
From:      Chriz@cup.portal.com  20-Jan-1989 11:02:57
To:        security-request@rutgers.edu
Consider the following value based finite state automaton:

finite state name         finite state action
00                        My Father Sells Rubbers to Nato
01                        My Mother Pokes holes with a pin
10                        My Sister Performs the Abortions
11                        My God, How the Money Rolls In!

In each state the following value functions utilized by the
empathizers of the young NATO soldier(and his girlfriends) are
apparent:

00                        Young soldier needs rubbers for sex so NATO
                          supplies them
01                        Young soldier trusts NATO
10                        Girlfriend doesn't love Young Soldier
11                        Young soldier must pay 

Now according to Hopcroft and Ullman (Introduction to Automata Theory,
Languages, and Computation) a finite automaton is "a finite set of states
and a set of transitions from state to state that occur on input symbols
chosen from an alphabet _sigma_."[p.16]

Hopcroft and Ullman utilize finite states in purely symbolic terms, citing
that the set of transitions occur on input symbols.  It is possible, in
examples such as the above, that the symbols could be incentives to
make a certain value judgment, and that the states could be a value-based
or value-motivated action.

From this comes the idea of a perpetual strategy of manipulation, in
which those people whose values are identical with those of the FSA are
manipulated and molded to produce a desired result, ad infinitum.
Notice that the system set up by the empathizer with the young 
soldier does not even depend on fooling the young soldier.  
All he sees is that his girlfriend somehow got pregnant, and he 
has to pay for an abortion.  The finite state automaton does 
not depend on PT Barnum style deception, and it does
not depend on the gullibility of the victim.  

In fact, even if the young soldier felt it was odd, he would
still be forced to abort.  Only an extensive investigation
would reveal the entire scheme.  This is the way to manipulate
and the way to victory.
-----------[000036][next][prev][last][first]----------------------------------------------------
From:      zeeff@b_tech.ann_arbor.mi.us (Jon Zeeff)  21-Jan-1989 18:59:22
To:        misc-security@uunet.uu.net
I'm looking for a program that will take and store crc and permission
values for all the files on a file system and then at some future date,
produce a report indicating what files have changed.  Something like:

/etc/inittab has changed date and crc value
/xx has been removed
/zz has been added
/ff has changed crc value but not date
/bin/su has changed permissions
/gg has changed ownership

Such a program would have obvious uses for security and file system
corruption checks.
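[A minimal sketch of the sort of checker being asked for; the snapshot
format and report wording below are illustrative assumptions, modeled
on the examples above:]

```python
import os, zlib

def snapshot(root):
    """Record CRC, mtime, mode, and owner for every file under root."""
    info = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
                with open(path, "rb") as f:
                    crc = zlib.crc32(f.read())
            except OSError:
                continue        # unreadable file; skip it
            info[path] = (crc, int(st.st_mtime), st.st_mode, st.st_uid)
    return info

def report(old, new):
    """Compare two snapshots and describe what changed."""
    lines = []
    for path in sorted(set(old) | set(new)):
        if path not in old:
            lines.append(path + " has been added")
        elif path not in new:
            lines.append(path + " has been removed")
        elif old[path] != new[path]:
            crc_o, date_o, mode_o, uid_o = old[path]
            crc_n, date_n, mode_n, uid_n = new[path]
            changes = []
            if crc_n != crc_o:   changes.append("crc value")
            if date_n != date_o: changes.append("date")
            if mode_n != mode_o: changes.append("permissions")
            if uid_n != uid_o:   changes.append("ownership")
            lines.append(path + " has changed " + " and ".join(changes))
    return lines
```

(A real tool would store the snapshot off-line, since an intruder who
can modify files can presumably modify the snapshot too.)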

-- 
  Jon Zeeff			zeeff@b-tech.ann-arbor.mi.us
  Support ISO 8859/1		zeeff%b-tech.uucp@umix.cc.umich.edu
  Ann Arbor, MI			umix!b-tech!zeeff
-----------[000037][next][prev][last][first]----------------------------------------------------
From:      mrc@blake.acs.washington.edu (Mark Crispin)  21-Jan-1989 19:19:19
To:        misc-security@husc6.harvard.edu
     Passwords do not belong in *any* file in the filesystem.  Any
file in the filesystem can be compromised by careless protections (or
secret access left by someone who temporarily got into root).

     There are only two entities which have any business reading
passwords; the system backup utility and the system agent which
validates a password.  All other entities which wish to validate
passwords should do so via requests to the system password validation
agent (that is, login, FTP server, etc. should never have access to
passwords).

     Three operating systems which I am familiar with stored the
password as part of out-of-band information in the user's primary
directory.  In one OS, the programs which validated passwords were
privileged (equivalent to being setuid'd to root) and a user invoking
those programs received the appropriate privileges for the duration of
that program's execution.  In the other two, all operations involving
passwords were implemented as system calls inside the operating
system.  The programs executing those system calls had no special
privileges; they were simply user interfaces.  Any user could write a
useful program using those calls, although (for other reasons) there
were restrictions on writing programs which run un-logged-in.

     Although the equivalent of root could read passwords, the
passwords were encrypted by a much simpler algorithm than Unix's (in
fact, in earlier versions the passwords were stored as plaintext!).
The difference was that it was much harder to get at the passwords in
any form.  Security-conscious sites could configure their system so
that password reads could only be done by a process on a trusted
terminal (e.g. the console) -- and privileged logins and/or privileged
access could similarly be restricted.

     None of these measures, by themselves, make much of an
improvement in security; but collectively they do make a difference!

-- Mark --
-----------[000038][next][prev][last][first]----------------------------------------------------
From:      nevin1@ihlpb.att.com  21-Jan-1989 19:39:19
To:        security@pyrite.rutgers.edu
>I don't know of any courses offered ...  Are there any at your school?

There were none at my university.  The problem is some people feel that
their systems will be compromised by the 'wrong' people knowing about
computer security; they would rather keep people in the dark and hope
that no one discovers a way to break into computer systems.  It boils
down to "is this knowledge dangerous?"  In the short term, perhaps; in
the long term, however, knowing what the security problems are is the
only way to combat them.

>We have laws regulating production of auto's and other consume products
>and services.  The same should be true of software.

Are we willing to pay the price for this 'protection'?  First off, how
does one really go about verifying software?  Programs that output
programs, such as compilers, all have to gain trust (see Ken Thompson's
"Reflections on Trusting Trust", CACM Turing Award Lecture, 8/84).  Who
is willing to wait the months, or even years, that it would take to
verify a compiler?  Who would pay for this verification?  Small
software houses could not afford to have their software verified;
should they just go out of business?

Are we also willing to wait for our bug fixes/virus vaccines?  Since
these are also programs, they too must undergo the verification
process.

Are we also willing to pay for liability insurance for the software
industry, in much the same way we pay for liability insurance for the
pharmaceutical and medical industries?  (Note:  this may become an
issue even without government regulation.)  The price of software would
go up exponentially; all but the biggest companies would start writing
software for 'internal' use only (reinventing the wheel a thousand
times over), and the growth in the software field would finally peak
out.

>There should be some sort of committee made
>up of individuals from government and private industry who are responsible for
>certifying software.

I would rather see something like an Underwriters Laboratories for
software, where one can get certification if one desires.  This would
alleviate some of the problems associated with government regulation,
and we can see if certification really makes a difference to the
market.

You don't get regulations and standards for nothing.  It is a heavy
price to pay; is it really worth it?
___
NEVIN ":-)" LIBER  AT&T Bell Laboratories  nevin1@ihlpb.ATT.COM  (312) 979-4751
-----------[000039][next][prev][last][first]----------------------------------------------------
From:      koreth@ssyx.ucsc.edu (Steven Grimm)  22-Jan-1989 11:39:21
To:        misc-security@ames.arc.nasa.gov
Well, here at UCSC, we have something called a ".secret file".  This is
a file called ".secret" in your home directory, whose contents are printed
instead of the standard "Password:" prompt.  When users first log in, they are
instructed to enter a word or phrase meaningful to them (not their
password), which is placed in the .secret file.  With .secret files,
fake login programs are easy to detect, because unless the program is
running as root (in which case it would have access to all the accounts
on the system anyway), it won't be able to read and print the proper
.secret file.
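[A sketch of the idea in Python -- not the actual SunOS /usr/bin/login
changes mentioned below, which are in C; the fallback to a stock
"Password:" prompt when no .secret file exists is an assumption:]

```python
import os.path

def secret_prompt(home_dir):
    """Return the user's personal prompt from ~/.secret, or the stock
    "Password:" prompt if no .secret file exists.  A fake login running
    without root privileges cannot read other users' .secret files, so
    it cannot reproduce the right prompt."""
    path = os.path.join(home_dir, ".secret")
    try:
        with open(path) as f:
            return f.readline().strip() or "Password:"
    except OSError:
        return "Password:"
```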

If there's sufficient interest, I can dig up the changes I made to
SunOS /usr/bin/login, which should apply almost exactly to other login
sources.  (I can't take credit for the idea, even though I implemented
it on one of our machines; I believe Jim Haynes, haynes@ucscc.ucsc.edu,
came up with this scheme.)

---
These are my opinions, which you can probably ignore if you want to.
Steven Grimm		Moderator, comp.{sources,binaries}.atari.st
koreth@ssyx.ucsc.edu	uunet!ucbvax!ucscc!ssyx!koreth
-----------[000040][next][prev][last][first]----------------------------------------------------
From:      Pete Nielsen                         <CSMSPCN@UCLAMVS.BITNET>  22-Jan-1989 19:19:23
To:        SECURITY@UBVM.BITNET
One way of bi-directional validation is to exchange authentication
strings rather than send the password.  The way this works is that
at logon, the workstation sends a string-to-be-encrypted along with the
userid.  The session access control encrypts that string with its
copy of the user's password and returns it, along with a new string-to-
be-encrypted.  The workstation, receiving the first string, decrypts it
with its copy of the user's password; if there is no match, you're
talking to a trojan.  If there is a match, the workstation encrypts
the access control program's string and returns that.

The password never flows over the net. And both parties to the
conversation are validated.
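[A sketch of the exchange in Python, using a keyed hash (HMAC-SHA256)
as a stand-in for "encrypting with the password" -- the cipher choice
is an assumption; only the protocol shape comes from the description
above:]

```python
import hmac, hashlib, os

def respond(password, challenge):
    """Prove knowledge of the shared password by keyed-hashing a
    challenge with it (standing in for encryption)."""
    return hmac.new(password, challenge, hashlib.sha256).digest()

password = b"hunter2"                  # shared secret, never transmitted

# Workstation: send a string-to-be-encrypted along with the userid.
ws_challenge = os.urandom(16)

# Host: answer the workstation's challenge and issue its own.
host_reply = respond(password, ws_challenge)
host_challenge = os.urandom(16)

# Workstation: check the host's reply against its own copy of the
# password; a mismatch means you're talking to a trojan.
assert hmac.compare_digest(host_reply, respond(password, ws_challenge))

# Workstation answers the host's challenge, validating itself in turn.
ws_reply = respond(password, host_challenge)
assert hmac.compare_digest(ws_reply, respond(password, host_challenge))
```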
-----------[000041][next][prev][last][first]----------------------------------------------------
From:      smb@ulysses.homer.nj.att.com (Steven M. Bellovin)  22-Jan-1989 19:39:23
To:        clyde!misc-security
> Are there any systems out there that implement some way of verifying
> that the program that you (the prospective user) are talking to is
> really the login program?

Any system rated B2 or higher by DoD has such a concept.  It's called
the ``trusted path'' or some such.  Briefly, the user *never* sees
a login prompt presented by the system.  In order to receive one, the
user must take some action guaranteed to reach the real ``trusted computer
base''; this is the only way to receive a login prompt.  Many B1 systems
have similar concepts, though they aren't required to.  Systems rated B3
or higher use trusted path for other operations as well, such as changing
passwords.

> Anyone got any good ideas on how to do this?

There are a variety of techniques.  Some systems use a ``secure attention
key'' -- some key sequence (two breaks within a couple of seconds, for
example) or other action to get the TCB's attention.  There were several
papers at the June '88 Usenix on similar topics; you may want to look them
up.  Particularly near and dear to my heart is (of course) my paper on
the ``Session Manager''; it solves several other problems as well.  There's
currently a discussion on comp.unix.wizards on the same topic; I posted
a brief description of my session manager to that group, so I'll forbear
to repeat myself.

		--Steve Bellovin
		att!ulysses!smb
		smb@ulysses.att.com
-----------[000042][next][prev][last][first]----------------------------------------------------
From:      flynn@cos.com (Susan F. Symington)  22-Jan-1989 19:59:23
To:        security@pyrite.rutgers.edu
One way to write a login program that is verifiable would be
to have a random number field associated with each user (in
addition to that user's login id, password and whatever else
there may be).

The login procedure would proceed as follows:
	1) Program prints "login:" prompt to the screen

	2) User types in his user id

	3) Program looks up the random number associated with that
	user and prints it to the screen

	4) User then checks whether the number is correct.  If it is,
	the login program is thereby authenticated.
	
	5) The login program prompts for the password and the user can 
	feel safe in supplying his password to the login program.

	6) Upon receipt of the correct password, the login program
	permits the user access to the computer. In addition, the 
	login program also
	calculates a new random number, stores it in the allocated
	field associated with the user and prints it to the screen so
	the user will know what number the login program should send
	down the next time he tries to log on.

Of course if you have the budget and don't want to have to rely on
the user to remember a random number that changes with every
login, then you can use some sort of additional physical device
such as a datakey or smartcard to store the number.  The computer
could read it off of the medium directly and it would never have
to be displayed for human eyes to behold. If you want to get
fancy, you could encrypt it before transmitting it.
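[The six steps above can be sketched like this; the class name, the
six-digit number range, and the check_password callback are
illustrative assumptions, not part of the original proposal:]

```python
import secrets

class LoginAuthenticator:
    """Per-user rolling number: the genuine login program shows the
    stored number at each logon and then replaces it, so a fake login
    program cannot know the current value."""

    def __init__(self):
        self.numbers = {}

    def enroll(self, user):
        # Issue the user's first number (steps 3-4 on later logins).
        self.numbers[user] = secrets.randbelow(10**6)
        return self.numbers[user]

    def login(self, user, remembered, password, check_password):
        # Steps 3-4: the user compares the displayed number with the
        # one issued last time before typing a password.
        if self.numbers[user] != remembered:
            raise ValueError("login program not authenticated")
        # Step 5: only now is the password supplied and checked.
        if not check_password(user, password):
            raise ValueError("bad password")
        # Step 6: roll a fresh number for the next logon.
        self.numbers[user] = secrets.randbelow(10**6)
        return self.numbers[user]
```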

Susan Symington, COS
-----------[000043][next][prev][last][first]----------------------------------------------------
From:      schoi@cmx.npac.syr.edu (Seokrim Choi)  25-Jan-1989  5:59:35
To:        security@pyrite.rutgers.edu
Hi, folks

  I'm not sure this is the right place to ask this, but..
 Does anybody know a place which sells electronic security equipment ?
 (ie. burglar alarm, fire alarm, various sensors  etc etc..)
 If I can get their catalog first, that'll be perfect.
 Any comments will be greatly appreciated.  Thanks.

-----
    Seokrim  Choi  >  schoi@cmx.npac.syr.edu
-----------[000044][next][prev][last][first]----------------------------------------------------
From:      Stephen Wadlow <sw0y+@andrew.cmu.edu>  25-Jan-1989  6:19:35
To:        security@pyrite.rutgers.edu
Over the summer I managed to obtain a few Medeco cylinders for
examination.  After disassmbly, reassembly, and lots of referring to the
medeco diagram, I knew enough to pick the little bugger open.  It just
took a *long* time (on the scale of hours for a fully loaded cylinder).  I
have heard stories of people being able to pick a fully loaded medeco in a
matter of minutes, and have been trying to figure out how the h-ll they do
it (assuming it's true).

Anyone have any pointers or techniques they'd like to share?

				steve

======================================================================
Stephen G. Wadlow               Internet: stephen.wadlow@andrew.cmu.edu
System Manager			Bitnet:   wadlow@drycas
Center for Fluorescence Research -- Carnegie-Mellon University
-----------[000045][next][prev][last][first]----------------------------------------------------
From:      stiatl!john@gatech.edu  25-Jan-1989 22:19:42
To:        security@pyrite.rutgers.edu
The problem with this scheme, even if it would not damage electronics is
that it's very easy to defeat.  Simply wrap the explosive in a conductive
and/or magnetic material like Mu-metal and voila!  No Boom.

In a related issue.  Much press has been given to this new Thermal Neutron
explosive detector.  Being a Nuke by training, the underlying assumptions
give me some heartburn.

The principle of operation is that nitrogen-rich explosives, when 
irradiated with a thermal neutron flux will have some nitrogen atoms
transmuted to the N-16 isotope.  N-16 has a half-life of a few seconds
and decays with a highly energetic and characteristic gamma ray.
This gamma ray is detected, processed and used to generate the alarm.

Here's the rub.. Thermal neutrons are easily stopped by materials with
high cross-sections like cadmium.  No neutrons in the explosives = no
N-16.  No N-16 = no detection.  Since it is likely that Californium is
the source of neutrons, the flux is not likely to be high because of 
cost (several thousand dollars per microgram).  Thus the flux could probably
be stopped by a cadmium foil.  By implication, a perfect explosive would
be some plastique or C4 shaped like a candy bar and wrapped in cadmium 
"foil".  this would look normal to visual and X-ray inspection and would
defeat the neutron detector.

So the question arises: "Is this TOO easy, or am I missing something?"  
Considering the government's involvement, it's quite likely the 
former.  I'm wondering if there is anybody on the net familiar with 
the specific design of the detector.  If so, am I missing something?

It seems to me that a much more suitable detector would be one of the
proven nitrate sniffers.  These things have been in use at Nuclear
plants for at least 7 or 8 years.  I can testify as to their sensitivity
after setting one off with powder residue from a weekend target practice
session.  The functionality test the guards used at one site was to 
try to carry one of these nitroglycerin dispensing angina patches through 
the gate.  It would detect the fumes from the few milligrams of nitro
in these patches.

For even better sensitivity, the nitro sniffer could be coupled to the
altitude chamber now in use.  The vacuum chamber would enhance the mobility
of emitted explosive molecules and the exhaust of the vacuum pump would
contain a concentrate of the chamber atmosphere.  As an added benefit,
these things cost a few thousand bux, not a million or more like the 
neutron device.

Comments?

------------------------------------------------------------------------------- 
John De Armond, WD4OQC                              Sales Technologies, Inc 
...!gatech!stiatl!john                          Atlanta, GA  (404) 841-4000
-----------[000046][next][prev][last][first]----------------------------------------------------
From:      James M Galvin <galvin@twg.com>  28-Jan-1989  0:34:39
To:        security@rutgers.edu
I am sure that people have programmed the RSA Algorithm, but has anyone done
an implementation they are willing to share?

Also, for those who have done implementations, have you done any kind of
benchmarking, or can you share any good or bad experiences?
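[For anyone who just wants to see the arithmetic: a textbook RSA sketch
in Python.  The primes and exponent are the standard classroom example,
not from any real implementation, and this is nowhere near a usable
cryptosystem -- real RSA needs large primes and proper padding.]

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keys(p, q, e=17):
    """Textbook RSA keypair from two primes (toy sizes only)."""
    n, phi = p * q, (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)
    assert g == 1, "e must be coprime to phi"
    return (e, n), (d % phi, n)       # public, private

def crypt(m, key):
    """Encryption and decryption are both modular exponentiation."""
    exp, n = key
    return pow(m, exp, n)

pub, priv = make_keys(61, 53)          # n = 3233, the classic toy case
assert crypt(crypt(42, pub), priv) == 42
```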

Thanks,
Jim
-----------[000047][next][prev][last][first]----------------------------------------------------
From:      Russell Brand <wuthel!brand@capmkt.com>  28-Jan-1989  0:51:47
To:        koreth@ssyx.ucsc.edu
Cc:        misc-security@ames.arc.nasa.gov
   With .secret files, fake login programs are easy to detect, because
   unless the program is running as root (in which case it would have
   access to all the accounts on the system anyway), it won't be able to
   read and print the proper .secret file.

It seems that the attacking program could just "rsh" or such and send
the username.  It would then get the secret back to show to the user.

What am I missing?
-----------[000048][next][prev][last][first]----------------------------------------------------
From:      felix!chuck@dhw68k.cts.com (Charles Vertrees (Chuck))  28-Jan-1989  1:13:26
To:        security@pyrite.rutgers.edu
What happens when military electronics is exposed to a big EMP, like from a
nuclear blast?  You can unit test in chambers and such, but how does the design
really work in place?  The Air Force decided to find this out by building a
huge wooden trestle next to Kirtland AFB in Albuquerque in the 70's.  Really
something to look at.  This thing stuck out over the Tijeras canyon, which runs
immediately south of the airport, roughly paralleling the main E/W runway.  I
never got to see it up close, but as you fly in or out of the airport, you can
usually see it.  (Albuquerque is joint use, military/commercial.)

The plan was sort of simple:  Test how a complete unit, electronics and all,
(read that airframe) reacts to an EMP while flying in the air.  This trestle is
big enough for them to park a B-52 on the thing and have it end up 50 or 60
feet off of the ground.  They had a taxi strip from the main airport area and
would simply tow whatever the test subject was to be out there and push it out
onto the trestle.

I remember reading about the construction of this thing and about how they were
going to zap the planes, but it has been a long time.  Something about lots of
capacitors and big coils.  Normal stuff.  The trestle was almost more
interesting because of the problems they had building it.  Seems that it was
very difficult to find enough carpenters who could still build such a thing.
The design called for 100% wood.  No metal.  No nails, no bolts.  This was
required because of the strength of the pulses they were going to be using.
Urban legend had it that they would literally pull the nails from the structure
if they had been present.  I don't know if this was true, but, electrically,
they would affect the test, since a flying aircraft usually doesn't have pieces
of metal floating around it in space.

I think both Sandia Corporation and EG&G were involved in the project, the
results of which are probably classified.

Chuck V.
-----------[000049][next][prev][last][first]----------------------------------------------------
From:      J.D. Abolins <OJA@NCCIBM1>  28-Jan-1989  1:33:06
To:        SECURITY@pyrite.rutgers.edu
Although I can't speak for other states, I remember from my EMT
(Emergency Medical Technician) training that in New Jersey, the
possession of a "slim-jim" or similar extrication tools OFF DUTY
could be considered possession of burglary tools. The instructors
warned those of the EMT trainees who would be working with first
aid squads to leave the extrication tools with the rig or the
squad building.

From another experience, I also learned that other people's knowledge
of one's tools or abilities regarding locks is a big liability.
(I may have told this before, if so please have patience...) While
in college, I worked as a porter (janitor) in a hospital. One day,
the housekeepers had accidentally locked themselves out of the laundry
room while a washing machine was overflowing. The suds were coming out
of the room, flowing under the door. I offered to try to open the door.
Fortunately, one of the housekeepers warned me that it would be far
better to allow the flooding to continue and wait for the hospital to
call in a locksmith. The wise reasoning for this advice was that if
the hospital knew that I was "good with locks", the next time the
drug inventory came up short or something was missing, I would become
a major suspect.

(Interestingly enough, I find a similar hazard in the computer field.
I am not a computer security manager/technician by title in my
job. So there is some risk that if anything <G-d forbid> should
happen with our computer systems, some of the Department's
administrators would point their fingers at me. "After all, he writes
articles about computer viruses; he studies them; he must be the
culprit." But that's a bridge yet to be crossed, if ever.)
-----------[000050][next][prev][last][first]----------------------------------------------------
From:      "AMSP2::CHRISTEVT" <christevt%amsp2.decnet@wpafb_ams1.arpa>  28-Jan-1989  1:56:12
To:        "security" <security@rutgers.edu>
I'm MIT '86 and what Hobbit says about the hacker ethic there is completely 
true! Not only is common thievery shunned, but those unfortunate enough to be 
caught (by CPs or other hackers, which is probably worse) doing such things 
are black-listed by the other hackers themselves.

NON-DESTRUCTIVE entry and common sense are strongly encouraged by MIT hackers 
and those who don't go along with this idea (in theory and, most importantly, 
in PRACTICE) don't get much help from those who do. And the sign-ins are 
indeed the most that usually happens while building hacking...the only people 
who look for these are other hackers, to see if they know who's already been 
there and such.

I also love to explore buildings and such and my mother is one of those 
hard-to-convince-it's-harmless-and-creates-no-security-risk ones; but if one 
is careful and "ethical" (as described above, in part), it really IS 
harmless!!!

However, I have to agree with my mom on one point: what is accepted at 
colleges/universities/educational institutions may not and probably will not 
be accepted by civilian police and similar. What we call hacking the police 
may call breaking and entering.

By the way, if any hacks are being planned in the Dayton, OH, or Southern 
California (specifically around San Pedro, Long Beach, etc) areas, please let 
me know!!! Thanx!

                                   ET B ME
                                     VIC
                                      !

Victor ET Christensen         		"To the last I grapple with thee,
christevt@wpafb-ams1.arpa     		From Hell's heart I stab at thee,
christevt@p6.ams.wpafb.af.mil           For Hate's sake I spit my last breath 
christevt%amsp6.decnet@wpafb-ams1.arpa     at thee!!!"   ~ Kahn
-----------[000051][next][prev][last][first]----------------------------------------------------
From:      nugent@anubis.uchicago.edu  28-Jan-1989  9:11:13
To:        Stephen Wadlow <sw0y+@andrew.cmu.edu>
Cc:        security@pyrite.rutgers.edu
I understand from our local locksmith that Medeco still offers a
reward for anyone who can pick a Medeco lock on request, and I think
you get several hours to try.  I have to admit, I'm curious how you'd do
it since there is no way to apply pressure to the side bar (the part
that checks the rotation of the pins) beyond what the springs supply.

For those not familiar with Medeco locks, the keys incorporate a left, right
or center rotation as well as the usual up and down positioning for each
pin, so picking the lock requires adjusting 6 angles as well as 6 different
heights.

Collecting the Medeco reward probably requires that you pick one of their
newer Biaxial locks, which include the forward and backward pin alignments
as well as left and right.

Todd 
-----------[000052][next][prev][last][first]----------------------------------------------------
From:      aad@stepstone.com (Anthony A. Datri)  28-Jan-1989  9:31:12
To:        security@pyrite.rutgers.edu
I ran into several of them at CMU.  The most notable was one that
ran on the PC's often used as terminals.  The person behind it would collect
the passwords and ravage the victim's directory.  The best way to guard against
this was to carry your own copy of Kermit.
-----------[000053][next][prev][last][first]----------------------------------------------------
From:      <TOM@FANDM.BITNET> (Tom, Tech. Support)  28-Jan-1989  9:51:12
To:        SECURITY@pyrite.rutgers.edu
I see one problem in directly addressing any criminal activity.  Many laws
on the books are, like it or not, subject to selective enforcement based on
intent.  Using the baseball bat example:

Section 906 (Possessing Instruments of Crime) of the Pennsylvania Crimes Code
states that "A person commits a misdemeanor of the first degree if he possesses
any instrument of crime with intent to employ it criminally".  (The section
defines "instrument of crime" as "anything specially made or adapted for
criminal use or anything commonly used for criminal purposes possessed
under circumstances not manifestly appropriate for lawful uses it may have."
The section also defines the offense as possession of "a firearm or
other weapon concealed upon his person with intent to employ it criminally"
And "other weapon" is defined as "anything readily capable of lethal use
and possessed under circumstances not manifestly appropriate for lawful
uses it may have". (There's the baseball bat conviction).  Since this
discussion started relative to possession of locksmith tools, I should
point out that this section clearly relates to those devices, yet there is
no SPECIFIC exemption for locksmiths.  I doubt that any locksmith in the
course of business would have a problem.  (There is a section on master
keys for vehicles which DOES exempt locksmiths).

The Pennsylvania Crimes Code has many other examples of this type of selective
enforcement - Criminal Attempt, Criminal Solicitation, Conspiracy, Loitering
or Prowling at Night, and Disorderly Conduct to name a few.  These types of
laws tend _not_ to be subject to poor judgement and they are usually brought
in conjunction with other crimes. (How else can you _prove_ criminal intent?)

I think any laws relating to computer security would, because of the
complexity of the subject, be equally vague if not more so.  Clearly, virus
writing would appear to be a violation of something.  On the other hand,
could something like the "Peace Virus" in the Macintosh fall into some hard
and fast law?  It did no damage and offended no one except for the
emotional fury resulting from someone secretively getting into our private
business.  (Should it be a violation of the law to anger another?  What is
really "private"?)

I wonder:
* If any set of statutes can really cover everything that _should_ be
illegal considering that nothing _is_ illegal unless there is a law against it.
* If any set of statutes can be written without some vagueness somewhere.
* If any computer security statute can be written without some clever and
enterprising hacker getting around it the minute some new technology is
available (which happens almost daily).
* If perhaps the only statute we need would make it unlawful to access
_any_ file that the perpetrator has no legitimate right to access.
* What constitutes "legitimate right" ... Even that sounds vague!

As an aside, there is a philosophy (which I don't subscribe to) that says
the only reason there are criminals on the street is that the criminal
justice system has failed.  Perhaps it could be said that the only reason
computer security is ever breached is because the System Managers and
Security Administrators have failed.  I doubt that, but ...

*        HAVE A GOOD DAY        *
*                               *
* Tom Mahoney                   *
* Computer Electronics Tech.    *
*                               *
* FRANKLIN & MARSHALL COLLEGE   *
* Computer Services             *
* Technical Support Center      *
* Lancaster, PA  17604-3003 USA *
*                               *
* Bitnet Address: TOM@FANDM     *
*********************************
-----------[000054][next][prev][last][first]----------------------------------------------------
From:      J. D. Abolins <OJA@NCCIBM1.BITNET>  28-Jan-1989 11:11:13
To:        SECURITY@pyrite.rutgers.edu
I have only recently heard about the proposal to use EMP (electro-
magnetic pulse) to detect bombs. So Ihave little information. Yet
the comments that this method can detect bombs by detonating them is
far-fetched. Yes, the EMP method would be useful against electronic
timers for detonation. But if the explosives are not attached to any
electronic device, it appears that EMP would do little.

But the concern about legitimate electronic equipment being damaged
is valid.

-------------

About my recent posting about personnel risks in security, it can
be edited and reprinted. All I ask is that the publication
send me a copy of the particular issue. I am posting this because I
received a request for use of the posting. Unfortunately, the person
making the request does not have a BITNET id, so I can only reply
through this listing. But if anyone does have questions, you can
contact me at:

J.D. Abolins
301 N. Harrison Street, #197
Princeton, NJ 08540  USA
Phone: (609) 292-7023  weekdays
-----------[000055][next][prev][last][first]----------------------------------------------------
Date:      Thu, 19 Jan 89 11:38:23 est
From:      tim@csc-lons.arpa (Tim Dennison)   28-Jan-1989 19:31:16
To:        misc-security@sdcsvax.ucsd.edu
In Virginia anyone can own and possess a "slim jim".  I am an ambulance
attendant part time and bought my slim jim from my uniform store.
Sounds good huh????

If you get caught committing a crime (any crime) with the slim jim the
crime automatically becomes a felony.  An example is in order.  In
Virginia, entering someone's car without their permission is "vehicle
tampering," a misdemeanor; if you use a slim jim to open the door it
becomes a felony.  

In my opinion this is a good way to handle "tools".  A slim jim is one
of the most important extrication tools I have on my ambulance, and in
my car.  If it became illegal to possess one I would be in trouble. 

This allows us law abiding citizens to help people, but penalizes those
who choose to leave the accepted social norms.

Tim Dennison
tim@csc-lons.arpa
-----------[000056][next][prev][last][first]----------------------------------------------------
From:      Bernie Cosell <cosell@bbn.com>  28-Jan-1989 19:51:15
To:        misc-security@uunet.uu.net
I heard a rumor from a friend that DEC's ENET is being plagued by a worm that
is running rampant.  Anyone have any details?  confirm/deny?  whatever?

   __
  /  )                              Bernie Cosell
 /--<  _  __  __   o _              BBN Sys & Tech, Cambridge, MA 02238
/___/_(<_/ (_/) )_(_(<_             cosell@bbn.com

****************************************************************************
[Moderator add-on: Do you mean the following, by any chance?  I wasn't going
to forward this since it was rather out of date even when I got it, but perhaps
the thing is still going around.  You did say DEC's network, and this says
HEPNET, but decnet is decnet, so it may have hopped the tracks at some point.
Caveat vaxer, then...

_H*]
****************************************************************************

Date:         Fri, 23 Dec 88 19:54:27 est
Sender: Virus Alert List <VALERT-L@IBM1.CC.Lehigh.Edu>
From: lecgwy!lyons%RUTGERS.EDU@IBM1.CC.Lehigh.Edu
Subject:      VIRUS WARNING: DECNET Worm (forwarded from VALERT-L)

The following information relates to the DECNET worm which
hit the HEPNET and infects DEC VMS systems.

Note that in addition to the information presented here, the possibility
exists that a non-HEPNET system may have been infected.  You should
check your system for a file by the name of HI.COM, and a process
running with the name MAIL_178DC.  If you find either of them, your
system more than likely has been infected.  Read on for further
background, as well as a more thorough explanation.
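[The two indicators above lend themselves to a quick scripted check.  The
following is an illustrative sketch only, not the official CERT procedure:
the file and process lists are passed in as arguments, standing in for
whatever directory listing and process table your own system provides.

```python
# Hedged sketch: scan for the worm's two reported indicators --
# a file named HI.COM and a process named MAIL_178DC.
def infection_indicators(filenames, process_names):
    """Return a list of indicator strings found on this system."""
    hits = []
    # VMS filenames are case-insensitive, so compare uppercased.
    if any(name.upper() == "HI.COM" for name in filenames):
        hits.append("file HI.COM present")
    if any(name.upper() == "MAIL_178DC" for name in process_names):
        hits.append("process MAIL_178DC running")
    return hits

# A clean system shows nothing; an infected one shows both indicators.
assert infection_indicators(["LOGIN.COM"], ["SWAPPER"]) == []
assert len(infection_indicators(["hi.com"], ["MAIL_178DC"])) == 2
```

Either indicator alone is grounds for a closer look, per the warning above.]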

Thanks to Ed DeHart at CERT, Fred Ostapik at ARPA-NIC, and all others
who helped assemble this information.

----
Marty Lyons, Lockheed Electronics Company, 1501 U.S. Highway 22,
CS #1, M/S 147, Plainfield, N.J. 07061-1501 (201) 757-1600 x3156
LYONS@LECGWY.LEC.LOCKHEED.COM or LYONS%LECGWY.UUCP@AUSTIN.LOCKHEED.COM

Worm-fix distribution list:
  CERT, CMU (cert@sei.cmu.edu)
  John Wagner, Princeton (wagner@pucc.bitnet, wagner@princeton.edu)
  Chris Tengi, Princeton (tengi@deepthought.princeton.edu)
  Nick Cardo, JVNC Supercomputer Center (cardo@jvncc.csc.org)
  Chuck Hedrick, Rutgers (hedrick@rutgers.edu)
  Steve Keeton, NJIT (syssfk@njitx.njit.edu)
  Seldon Ball, Cornell (system@crnlns.bitnet)
  Nick Gimbrone, Cornell (njg@cornella.bitnet)
  Sandi Ivano, Yale (???)
  Anio Khullar, CUNY Graduate Center (ank@cunyvms1.bitnet)
  Shakil Khan, CUNY Queens College (khan@qcvax.bitnet)
  Meredith Coombs, Stevens Tech (???)
  Ken Ng, NJIT (ken@orion.bitnet)
  Dave Capshaw, Lockheed-Austin (capshaw@austin.lockheed.com)
  Marty Lyons, Lockheed Electronics (lyons@lecgwy.lec.lockheed.com)
  Randi Robinson, CUNY (rlrcu@cunyvm.cuny.edu)
  BITNET Liaison Distribution List (laison@bitnic.bitnet)
  BITNET Linkfail List (linkfail@bitnic.bitnet)
  BITNET Virus Alert List (valert-l@lehiibm1.bitnet)
  UUCP/Stargate Announcements (announce@stargate.com)

> From rutgers!sei.cmu.edu!ecd Fri Dec 23 17:59:18 1988
> Date: Fri, 23 Dec 88 17:28:48 EST
> To: lecgwy!lyons, steinauer@ecf.icst.nbs.go
> Subject: Re:  NASA Virus

The following information has been provided by one of the VMS experts
on the Internet.  Due to the holidays,  the CERT has not been able to
verify the information.  If you do verify the information please let
us know.

Thanks,
Ed DeHart
Software Engineering Institute / Computer Emergency Response Team
cert@sei.cmu.edu
412-268-7090
=======================================================================

There is a worm loose on NASA's SPAN/DoE's HEPNET network, which is an
international DECnet-based network.  The worm targets VMS machines, and
can only be propagated via DECnet.

The worm itself appears to be benign, in that it does not destroy files
or compromise the system.  Its purpose appears to be to deliver a
Christmas message to users starting at midnight on 24 Dec 1988.  It
does have a hook in it to monitor its progress;  it mails a message
back to a specific node (20.117, user PHSOLIDE) containing an identifying
string of the "infected" machine.

The worm exploits two features of DECnet/VMS in order to propagate itself.
The first is the default DECnet account, which is a facility for users who
don't have a specific login ID for a machine to have some degree of
anonymous access.  It uses the default DECnet account to copy itself to a
machine, and then uses the "TASK 0" feature of DECnet to invoke the remote
copy.

There are several steps which you can take to protect yourself from this
kind of attack.  The easiest (and most restrictive) is to disable the
default DECnet account on your machine altogether.  This can be done with
the following commands from the SYSTEM or other suitably privileged account:

        $ Run SYS$SYSTEM:NCP
        Purge Executor Nonprivileged User Account Password
        Clear Executor Nonprivileged User Account Password
        ^Z

This requires everyone who accesses your resources via DECnet to have
a legitimate login ID or proxy login account on your machine (proxy logins
are discussed in detail in chapter 7 of the _Guide to VMS System Security_).

[There's more, but it's long-winded instructions on how to dink decnet
objects which should be obvious to most of us.  I'll forward the whole wazoo
to people who ask for it; you can also probably dig it out of the valert-l
archives [if there are any].   _H*]
-----------[000057][next][prev][last][first]----------------------------------------------------
From:      PGOETZ@LOYVAX.BITNET  30-Jan-1989 12:00:35
To:        hobbit@pyrite.rutgers.edu
        I had a few keys copied, and I noticed a little item on sale by the
key blanks:  A keyring with a label to write down your name and address.
The idea is that if you lose your keys, the honest soul who finds them will
mail them back to you.  Of course he would not go to your house, unlock
the door, take what he wants, and drive away in your car.  The trust some
people show in their fellow man is truly touching...

Phil Goetz
-----------[000058][next][prev][last][first]----------------------------------------------------
From:      len@csd4.milw.wisc.edu (Leonard P Levine)  30-Jan-1989 12:05:28
To:        misc-security@uunet.uu.net
Does anyone know if any vendor makes a lock that reads a key and stores its 
keycode?  (I mean reads the bumps.)

+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
| Leonard P. Levine               e-mail len@evax.milw.wisc.edu |
| Professor, Computer Science             Office (414) 229-5170 |
| University of Wisconsin-Milwaukee       Home   (414) 962-4719 |
| Milwaukee, WI 53201 U.S.A.              Modem  (414) 962-6228 |
+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +

[Moderator add-on: A *lock*?  If you mean a *tool*, then yes, it's called
a pair of calipers and a cut chart.   _H*]
-----------[000059][next][prev][last][first]----------------------------------------------------
From:      Jim Shaffer <SHAFFERJ@BKNLVMS.BITNET>  30-Jan-1989 12:36:36
To:        security@pyrite.rutgers.edu
>In the Mac world, I know of only one instance in which a virus was
>propagated through a commercial/skrink-wrap package... this was the case
>in which a copy of Brandow's "macMag World peace" INIT virus infected ...

Just to set the record straight, this depends on your definition of
"commercial."  Due to infection of a computer at Microsoft, a lot of
European beta-test copies of Microsoft Word 4.0 for the Mac are
reportedly infected with nVIR. (See recent issues of Virus-L[@LehiIBM1.Bitnet])
Also, through the same channel I've heard of a CD-ROM for the Mac that
was a collection of many public-domain or shareware utilities, of which 11
on the disk were infected by a virus (I forget if it was Scores or nVIR).
This of course defeats the purpose of the CD-ROM: the infected files
must be copied to other disks and disinfected, when the whole point
was to have everything in one nice package.
-----------[000060][next][prev][last][first]----------------------------------------------------
From:      Russell Brand <wuthel!brand@capmkt.com>  30-Jan-1989 12:54:14
To:        misc-security@uunet.uu.net
Crypto 89

Santa Barbara, CA  (August)

I am running a special session on practical and innovative uses of
cryptography in computer security.  I am still looking for another
couple of speakers.  If you are doing or have done something
appropriate, please contact me.

Ideally I am looking for papers in one of three categories:

	1) we had this clever idea that uses cryptography and it
	worked.

	2) our clever cryptographic application failed for surprising
	reasons. 

	3) we have an application where it seems like there should
	be something clever we could do with cryptography, but we
	haven't been able to find what we need in the literature.

Of course I would be pleased to hear other innovative ideas. 

thank you

russell brand

brand@lll-crg.llnl.gov
415 548 1361

(Please forward/repost this as appropriate)
-----------[000061][next][prev][last][first]----------------------------------------------------
From:      stachour@cs.umn.edu (Paul Stachour)  30-Jan-1989 13:10:06
To:        misc-security@rutgers.edu
Well, to start with, any reasonable system tells you, when you log on,
when you were last on.  The login-simulator doesn't know, and thus
can't tell you.  At least you know you've been had, and can get off
and back on and change your (just-compromised) password immediately.

Better systems (such as Multics) have no way to log in from within
a process.  Thus the login-simulator can't do anything after it gets
your password but force a logout, since otherwise you'd know immediately
that it was a fake, since it doesn't have access to your start-up.ec,
and other personal files (unless of course you are on a junky system
where either the system or common practice makes all your personal
files readable by the world).  Login from within a process isn't
ever needed, unless the system has such a poor sharing mechanism
that you need to do that to get the kind of access you need.
For example, that's why Multics has the "installer" in a SysDaemon
process that no-one can ever login to.  No system should ever
allow the equivalent of the Unix "login to root", but then Unix
was built as a cheap cut-down Multics without most of the reliability
that Multics was designed for.

And the really good ones are set up so that each invocation of the
login process gives you an authentication at the conclusion of
your login (or maybe at your logout).  When you login again,
after you give your userid and before you give your password,
the login processor gives you its authentication counter-sign.
If it can't give you the right one, then you know it's a fake.
And you don't give it your password.
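
[The counter-sign scheme above can be sketched in a few lines.  This is a
toy illustration of the idea, not the actual Multics mechanism: at logout
the system issues a fresh token; at the next login, after the userid but
before the password prompt, the genuine system can present that token back,
while a fake login screen that never saw the previous session cannot.

```python
import secrets

class CounterSignLogin:
    """Toy model of a per-session login counter-sign."""
    def __init__(self):
        self.pending = {}   # userid -> token issued at last logout

    def finish_session(self, userid):
        """At logout: issue the counter-sign for the next login."""
        token = secrets.token_hex(4)
        self.pending[userid] = token
        return token        # shown to the user, who notes it down

    def greet(self, userid):
        """At login, after userid, before the password prompt."""
        return self.pending.get(userid)   # only the real host knows it

host = CounterSignLogin()
expected = host.finish_session("alice")    # user notes this at logout
assert host.greet("alice") == expected     # real login shows the token
assert host.greet("mallory") is None       # an imposter has nothing
```

If the counter-sign shown doesn't match the one you noted, you stop before
typing your password -- exactly the property described above.]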

Old technology.  But like most of computer science, forgotten
by most.  ...Paul
-----------[000062][next][prev][last][first]----------------------------------------------------
From:      Daniel Ray <tnl!norstar@gen.uvm.edu>  31-Jan-1989  4:49:46
To:        security@pyrite.rutgers.edu
It seems that the prevailing opinion is that password aging is a complete
waste of time. I think it can be of use in certain circumstances. Years ago
when I was first exposed to UNIX, I lusted after having root privs. The admin
encouraged me to learn everything I could, but, with good reason, would not
give away the root password to regular employees. One day I carefully watched
him su to root, and because the password was awkward to type, I was able to
figure out what it was. Later I became root and trounced around the system.
This lasted only a couple weeks, however, because there was a regular policy
of changing the root password, and I was locked out from then on.

If a privileged account has a well-chosen password, it is usually unlikely
for a hacker to be able to guess it. However, a coworker might be able to
piece it together if he sees the admin use it enough times, or if it is TOO
"well-chosen" and is hard to type quickly. Regularly changing such a password
will effectively limit this kind of "coworker attack". Password aging, per se,
may not be needed if a regular policy of changing privileged passwords is 
used.

norstar
The Northern Lights, Burlington Vermont               |     
tnl dialins: 802-865-3614 at 300-2400 bps.          ` | /   
------------------------------------------        --- * --- 
uucp: uunet!uvm-gen!tnl!norstar or                  / | .   
{decvax,linus}!dartvax!uvm-gen!tnl!norstar            |     

[Moderator add-on: Facilities with this kind of "security" problem should
seriously reconsider who they hire on as "co-workers".    _H*]
-----------[000063][next][prev][last][first]----------------------------------------------------
From:      *Hobbit* <hobbit@pyrite.rutgers.edu>  31-Jan-1989  5:27:14
To:        security@rutgers.edu
This is sort of a followup to Steve Wadlow's medeco question.  I should
point out a common truth about most mechanical locks that in theory allows
any of the "high-security" ones to be opened.  I call it the Differential
Pressure algorithm...

The "method" for Medecos involves manipulating until you have every tumbler
either at a correct position or a "false" position, whereupon the cylinder
cocks over a little bit and binds.  These false positions are highly touted as
a security feature in any lock that has them -- the manufacturers perceive
these as complete dead ends to anyone trying to pick the lock, and that if you
hit any of them you've lost and have to completely start over.  Wrong!
Consider this: the difference between a real position and a false one is
usually a few thousandths of metal.  A tumbler at a correct position will no
longer be binding the cylinder closed, and will have plenty of perceivable free
play, while a false-notched tumbler will be held tightly in place and exhibit
no slop at all.  The Method involves finding these falsely-positioned tumblers
and correcting them as they appear, usually by twisting them around with a
pointy probe, until they allow the sidebar to drop.  Naturally due to random
machining slop you may lose a couple of other tumblers while backing out far
enough to clear the false notch and get over to a real one, but they can be
re-corrected.  The point is that even while "stuck" at the false positions, one
can "map" which positions are definitely false, and optionally correct them.
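
[The mapping loop described above can be modeled in a few lines.  This is a
purely illustrative toy, assuming an idealized lock: under turning tension a
tumbler in a FALSE notch is bound tight (no play), while one at a TRUE
position has perceptible slop, so probing for free play maps the false ones.

```python
import random

TRUE_POS, FALSE_POS = "true", "false"

def probe(tumbler_state):
    """Free play distinguishes a true position from a false notch."""
    return "loose" if tumbler_state == TRUE_POS else "bound"

def pick(tumblers):
    """Re-set every bound (false-notched) tumbler until all sit true."""
    passes = 0
    while any(probe(t) == "bound" for t in tumblers):
        for i, t in enumerate(tumblers):
            if probe(t) == "bound":
                tumblers[i] = TRUE_POS   # twist it over to a real notch
        passes += 1
    return passes

lock = [random.choice([TRUE_POS, FALSE_POS]) for _ in range(6)]
pick(lock)
assert all(probe(t) == "loose" for t in lock)   # sidebar can now drop
```

A real Medeco adds the machining slop noted above -- correcting one tumbler
can disturb another -- but the map of known-false positions survives.]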

This thinking can be applied to numerous other types of locks including but not
limited to Simplex, most cheap combination padlocks and bicycle locks, Abloys,
and quite possibly plenty I haven't had a chance to examine.  This rather
simple idea appears to be out of reach of most locksmiths, though, who
seem to unquestioningly believe the manufacturer's party line...

By the way, there is no longer a "reward" for picking Medecos.  Even the
Medeco people acknowledge that "it happens occasionally" when a bored locksmith
decides to have a go at one.  Neither is there a reward for Abloys, even if
they're said to be harder yet...

_H*
-----------[000064][next][prev][last][first]----------------------------------------------------
From:      scarter@caip.rutgers.edu (Stephen M. Carter)  1-Feb-1989 11:08:15
To:        misc-security@rutgers.edu
Also, what are the various different types of Medecos?  Our campus key
shop guru told me that we have an even more advanced Medeco than the
standard, one that is almost foolproof (although they said the same thing
about our 7 pin Yale cylinders a few years ago :-))
-----------[000065][next][prev][last][first]----------------------------------------------------
From:      Jeff Martens <martens@cis.ohio_state.edu>  1-Feb-1989 11:19:53
To:        misc-security@cis.ohio-state.edu
Does anyone know what the present status of SCOMP Plus is?

SCOMP was an operating system rated A1 secure by NSA NCSC several years ago.
It ran on a modified Honeywell DPS-6.  SCOMP Plus is an upgrade to a 32 bit
DPS-6 Plus which should have been released by now.  Has it been?  Has it 
been evaluated by NCSC?

Thanks.

--Jeff	(martens@cis.ohio-state.edu)
-----------[000066][next][prev][last][first]----------------------------------------------------
From:      Philip Peake <philip@axis.fr>  1-Feb-1989 11:39:53
To:        misc-security@pyrite.rutgers.edu
Just a passing thought ...

Software merchants, particularly the home and small buisness types
have a serious problem - piracy.

Current methods obviously don't work.
Copy protection methods just give rise to a sales boost for the latest
copy programs which know how to defeat this sort of thing.

Copyright has never worked - even for books!

So, some kind person comes along and starts to distribute a virus.
This makes everyone SO SCARED of accepting a non shrink-wrapped diskette
that the piracy problem just goes away ...

Think about it ...
-----------[000067][next][prev][last][first]----------------------------------------------------
From:      dcdwest!sarge@ucsd.edu (Sergeant Bob Heddles)  1-Feb-1989 11:42:02
To:        security@pyrite.rutgers.edu
I was wondering if anyone on the net has had the opportunity
to try to open an S&G lock without the proper combination?

Does anyone know if the S&G people can open it and reset the
lock to the factory settings, then return same to me?  I came
across one in a storage box and with the price of these locks
it would be a shame to have to trash it, since it is still
in perfect condition, only won't open since everyone that had
the combination is no longer working here and/or has forgotten
it over the years..

		Any and All help is appreciated..

			Thanks in advance

			Sgt. Bob Heddles (Security)

-- 
Bob Heddles                        | ITT Defense Communications Division
ucbvax!ucsd!dcdwest!sarge          |     10060 Carroll Canyon Road
dcdwest!sarge@UCSD.EDU             |       San Diego, CA 92131
Opinions expressed are mine alone. No one else wants them..

END OF DOCUMENT