|
|
ARCHIVE: Rutgers 'Security List' (incl. misc.security) - Archives (1987)
DOCUMENT: Rutgers 'Security List' for May 1987 (25 messages, 39309 bytes)
SOURCE: http://securitydigest.org/exec/display?f=rutgers/archive/1987/05.txt&t=text/plain
NOTICE: securitydigest.org recognises the rights of all third-party works.
START OF DOCUMENT
-----------[000000]----------------------------------------------------
From: <SYSTEM%CRNLNS.BITNET@wiscvm.wisc.edu>
Date: 11-May-1987 08:29:49
To: security@red.rutgers.edu
I just received the following message.  Does anybody have any more information?

Selden E. Ball, Jr.  (Wilson Lab's network and system manager)
Cornell University                 NYNEX: +1-607-255-0688
Laboratory of Nuclear Studies     BITNET: SYSTEM@CRNLNS
Wilson Synchrotron Lab              ARPA: SYSTEM%CRNLNS.BITNET@WISCVM.WISC.EDU
Judd Falls & Dryden Road   PHYSnet/HEPnet/SPAN:
Ithaca, NY, USA  14853             LNS61::SYSTEM = 44283::SYSTEM (node 43.251)

****************

From: Jerry Bryan <BRYAN@NOCMI>
Subject: Respite from 80-column wars
To: SELDEN BALL <SYSTEM@CRNLNS>

The following is a (partial) quote from an IBM ad I saw in the April 27th issue of "InformationWeek".  I assume it will (or has) run in many other periodicals as well.

"Good news for those who value privacy ......  Thanks to recent legislation, the laws that cover data security now cover more.  There are stiff new penalties and new protections.  Prying into electronic mail is now as criminal as opening the U.S. Mail and even the government cannot intrude without a warrant...."

"... as criminal as opening the U.S. Mail ..." is pretty heavy stuff.  Does this have anything to do with BITNET?  Is this the correct list on which to raise such a question (e.g., what about discussions of mail encryption, etc.)?
-----------[000001]----------------------------------------------------
From: Henry Mensch <henry@ATHENA.MIT.EDU>
Date: 13-May-1987 15:07:20
To: security@RED.RUTGERS.EDU
This is what the MIT community (in general) was told about how this law affects our work.  It sounds to me like the ECPA covers BITNET as well.  (Of course, the Act has no clue about "ownership" of data--they never seem to define it.)  This is a copy of a letter published in Tech Talk.  Anyone who did not read that memo should read it.  Be sure to note that operators of electronic communication systems now have legal responsibilities for the privacy of data.

[Thanks also to Joe Harrington who forwarded a copy.  _H*]

MEMORANDUM

To: The MIT Community
From: James D. Bruce, Vice President for Information Systems
Re: The Electronic Communications Privacy Act

The Electronic Communications Privacy Act of 1986 was enacted by the United States Congress in October of last year to protect the privacy of users of wire and electronic communications.  Legal counsel has advised MIT that its computer network and the files stored on its computers are covered by the law's provisions.  Specifically, individuals who access electronic files without appropriate authorization could find themselves subject to criminal penalties under this new law.

At this time, we can only make broad generalizations about the impact of the Act on MIT's computing environment.  Its actual scope will develop as federal actions are brought against individuals who are charged with inappropriate access to electronic mail and other electronic files.  It is clear, however, that under the Act, an individual who, without authorization, accesses an electronic mail queue is liable and may be subject to a fine of $5,000 and up to six months in prison, if charged and convicted.  Penalties are higher if the objective is malicious destruction or damage of information, or private gain.

The law also bars unauthorized disclosure of information within an electronic mail system by the provider of the service.
This bars MIT (and other providers) from disclosing information from an individual's electronic data files without authorization from the individual. MIT students and staff should be aware that it is against Institute policy and federal law to access the private files of others without authorization. MIT employees should also note that they are personally liable under the Act if they exceed their authorization to access electronic files.
-----------[000002]----------------------------------------------------
From: Dick Peters <SPGRAP%UCBCMSA.BITNET@wiscvm.wisc.edu>
Date: 13-May-1987 16:48:30
To: SECURITY@RED.RUTGERS.EDU
This clearly affects BITNET as it does any network.  On the other hand, mail is as private on BITNET as on any other network which does not employ encryption.  On BITNET, mail is private and cannot be looked at by other general computing users on the system (at least on the IBM portions).  Just as on other systems, the privileged user (super-user), usually a member of the systems programming staff, can examine mail.  I believe this flaw exists on most computing architectures.  I believe that all installations will have to examine this law and determine the risks to their staff and organizations.
-----------[000003]----------------------------------------------------
From: wbaker%ic.Berkeley.EDU@BERKELEY.EDU
Date: 14-May-1987 08:28:42
To: Security@RED.RUTGERS.EDU@ucbvax.Berkeley.EDU
So basically, nobody REALLY knows what's going on with these things; there is just a lot of folklore about them floating around.  Not too useful, if you ask me...  And to quote Doug Humphrey:

"... I doubt seriously that the pattern of a strobe light is used for IFF (sic) in the case of traffic controllers since strobe lights are hard to modulate reliably due to the fact that they are based on high voltage systems that generally use the ionization of gas to determine when the strobe goes off, and are thus not very accurate in a timing sort of way."

I might suggest you recheck your facts, and possibly reconsider.  A visit to your local service station should be convincing enough; if not, a trip to the local camera store might offer more evidence.  A book on high-speed photography might also be in order; most bookstores carry them.  Geezzzuuus.

W
-----------[000004]----------------------------------------------------
From: Rob Aitken <aitken%noah.arc.cdn%ubc.csnet@RELAY.CS.NET>
Date: 14-May-1987 12:39:10
To: security@RED.RUTGERS.EDU
Re: Recent quote from IBM ad in "InformationWeek"

Regardless of the legal penalties for prying into electronic mail, it seems to me that enforcement will be difficult if not impossible.  The nature of messages makes them readily readable by anyone--much more akin to postcards than to sealed letters.  I will still refrain from mailing anything that I would not want in the public domain.

Rob Aitken, Alberta Research Council, Calgary AB
-----------[000005]----------------------------------------------------
From: DAVID%NERVM.BITNET@wiscvm.wisc.edu (David Nessl, Univ. of Fla.)
Date: 14-May-1987 14:19:30
To: SECURITY@RED.RUTGERS.EDU
The IBM ad you read was talking about the Electronic Communications Privacy Act of 1986, passed by Congress on 02-Oct-1986 and signed into law on 21-Oct-1986; it became effective 90 days later.  It's known as Public Law 99-508.  It's basically in two parts:

(1) Amendments to the existing U.S. Code, Title 18, chapter 119, starting with section 2510, which deals with interception of communications.  This chapter formerly dealt with just the telephone (wiretapping), but has been updated to be more general: "common carrier" is now "electronic communications service provider", i.e. _any_ service which lets users send or receive electronic communications; and electronic communications are no longer limited to "oral", and now specifically include the use of computer facilities.  Anyone intercepting, ordering the interception, or using the data from an interception, when not acting as the service provider for maintenance or protection of the system, is still committing a criminal offense, and can still be sued in civil court.

(2) The addition of chapter 121 to U.S. Code, Title 18, starting with section 2701, which protects stored electronic communication, i.e. before the communication is sent and after it is received; chapter 119 (above) handles communications in transit.

The act amends several other sections; I've just mentioned the ones related to running a computer or computer network.  Also please note that I'm not an attorney, just a systems programmer.

However -- we've been unfortunate enough to have a case here in which this law may get tested.  Any comments as to the strengths/weaknesses of this law, particularly as related to interception of an employee's electronic communications, would be greatly appreciated.

David Nessl
BITNET: david@nervm
Internet: david%nervm.bitnet@wiscvm.wisc.edu
(Disclaimer: the above views do not reflect those of my employer.)
-----------[000006]----------------------------------------------------
From: ssr@tumtum.cs.umd.edu
Date: 14-May-1987 14:40:19
To: gymble!harvard!axiom!security!;@mimsy.umd.edu, AWalker@RED.RUTGERS.EDU
The traffic lights here in the DC metro area are activated by strobes, but use a multiple-repetition sequence (i.e. two flashes per second followed by a three-second blank) to ferret out phreaks and other undesirable signals.  The strobe must also be of considerable candlepower (i.e. a photo flash won't even get close).  During rush hour the really busy intersections are radio-synched in order to keep the flow of traffic steady.  The frequency is somewhere in the 490 MHz area.  The actual information is only a simple set of 20-25 tones that are transmitted at pre-set intervals over 2-3 minutes and then repeat.  All the associated traffic lights have directional antennas aimed at the base station (which is at Ft. Reno Dr. and Wisconsin Ave., for anyone interested).  It strikes me that one could use a scanner to find the tone associated with one's favorite traffic light and just use a low-power transmitter to override the traffic light as one approaches.

ssr
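The filtering scheme ssr describes -- accept a strobe only if its flashes arrive at the expected rate and are separated by the mandatory blank interval -- can be sketched as a simple timestamp check.  This is purely illustrative: the function name and tolerance are invented, and only the two-flashes-per-second and three-second figures come from the message above.

```python
def is_valid_preemption(flash_times, flash_period=0.5, blank=3.0, tol=0.05):
    """Check whether a series of flash timestamps (in seconds) matches
    the expected signature: flashes spaced ~flash_period apart, plus
    exactly one long blank gap between repetitions of the burst.
    Anything else (a lone photo flash, randomly spaced light) fails."""
    if len(flash_times) < 2:
        return False
    # Classify every inter-flash gap as either a pulse gap or the blank.
    gaps = [b - a for a, b in zip(flash_times, flash_times[1:])]
    blanks = [g for g in gaps if abs(g - blank) <= tol]
    pulses = [g for g in gaps if abs(g - flash_period) <= tol]
    # Accept only if all gaps are accounted for and exactly one blank seen.
    return len(blanks) == 1 and len(blanks) + len(pulses) == len(gaps)
```

A burst at the right rate followed by the blank and another burst passes; a single flash or off-tempo flashing is rejected, which is the point of the repetition sequence.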
-----------[000007]----------------------------------------------------
From: James M Galvin <galvin@UDEL.EDU>
Date: 14-May-1987 16:29:41
To: security@red.rutgers.edu
> "...Prying into electronic mail is now as criminal as opening the
> U.S. Mail and even the government cannot intrude without a warrant...."

Sorry, but prying into electronic mail can be a necessary evil.  If a host is using a less than optimal mail system (of which there are many), then when things get stuck or broken, someone has to look at the addresses in the message.  This may or may not require reviewing the message.  Note that the situation is not analogous to the "dead letter office" of a postal service, since all mail should contain a return address.  It may not be correct, meaning either completely inaccurate or simply unparsable, but that is a separate issue.

As for electronic mail privacy in general, I would love a good discussion, moderator permitting.  I know plenty about it (and the lack of it).  What would you like to know?

Jim

[Isn't there a more appropriate mailing list where such things are discussed continually and at length?  If not, then go for it...  _H*]
-----------[000008]----------------------------------------------------
From: cheshire@OLDBORAX.LCS.MIT.EDU (Richard Cheshire)
Date: 14-May-1987 19:43:20
To: SYSTEM%CRNLNS.BITNET@wiscvm.wisc.edu, security@red.rutgers.edu
Great!  There's legislation to stop it!  Hurrah!  After all, look at how much drug legislation there is, and how it has decreased drug trafficking.  There are more and more laws regulating automobiles, so there will be fewer accidents.  That legislation also treats the "Human Nature" issues head on.  How?  Just by making things illegal.

Cheshire
A.K.A. The Cheshire Catalyst
-----------[000009]----------------------------------------------------
From: Fred Blonder <fred@brillig.umd.edu>
Date: 14-May-1987 23:18:15
To: awalker@red.rutgers.edu
    Date: Fri, 17 Apr 87 17:00:38 CST
    From: paul@uxc.cso.uiuc.edu (Paul Pomes - The Wonder Llama)

    . . . One possible variation on using a timing light to trip the
    lights would be to filter out the visible portion of the spectrum,
    leaving UV and IR.  Depending on the sensitivity of the detector
    and the transmission properties of intervening materials, the
    sensor could be triggered by an invisible means.

The "obvious solution" (well, I admit there'd be problems) would be to have a directional SOUND sensor on the traffic lights which listens for a siren.  Since non-emergency use of a siren is already illegal in most places, coupled with the fact that it's difficult to use a siren without anyone noticing ( :-) ), traffic-light phreaks won't (shouldn't (mightn't)) be much of a problem.  It'd also be one less thing to hang on emergency vehicles.

----
Fred Blonder  (301) 454-7690
seismo!mimsy!fred  Fred@Mimsy.umd.edu
-----------[000010]----------------------------------------------------
From: McNelly.OsbuSouth@Xerox.COM
Date: 15-May-1987 16:10:16
To: AWalker@RED.RUTGERS.EDU
I heard a rumor that their answer to "traffic light phreaks" is to set the traffic lights to turn red for all four directions upon detection of the strobe. Emergency vehicles can still proceed through the empty intersection, and there is negative incentive for traffic light phreaks to mess with the lights. -- John --
-----------[000011]----------------------------------------------------
From: Jeffrey R Kell <JEFF%UTCVM.BITNET@wiscvm.wisc.edu>
Date: 18-May-1987 15:24:45
To: SECURITY@RED.RUTGERS.EDU
>Re: Recent quote from IBM ad in "InformationWeek"
>
>Regardless of the legal penalties for prying into electronic mail, it
>seems to me that enforcement will be difficult if not impossible.  The
>nature of messages makes them readily readable by anyone, [...]

(1) Does this make you subject to prosecution should you simply "see" a message within the scope of your designated duties (i.e., watching a line monitor, updating a mailer daemon/DVM, acting as postmaster)?

(2) Is this law even applicable to public (and/or Internet) networks in the first place?  It would appear only applicable to common-carrier nets or services such as MCI-Mail, Telenet, Comshare, etc.  If you have two tin cans and a piece of string between offices, such facilities are not subject to FCA telecommunications restrictions :-)

+-----------------------------------+----------------------------------+
| Jeffrey R Kell, Dir Tech Services |  Bell: (615)-755-4551            |
| Admin Computing, 117 Hunter Hall  |Bitnet: JEFF@UTCVM.BITNET         |
| Univ of Tennessee at Chattanooga  |Internet address below:           |
| Chattanooga, TN 37403             |JEFF%UTCVM.BITNET@WISCVM.WISC.EDU |
+-----------------------------------+----------------------------------+
-----------[000012]----------------------------------------------------
From: Brint Cooper <abc@BRL.ARPA>
Date: 19-May-1987 10:11:08
To: security@RED.RUTGERS.EDU
Henry Mensch writes, quoting the Vice President for Information Systems:

> Specifically, individuals who access electronic files without
> appropriate authorization could find themselves subject to criminal
> penalties under this new law.

It seems that "appropriate authorization" is the governing concept.  In the typical Unix environment (if there is such a thing), it is routinely assumed that files made readable by the public carry the implicit presumption of permission to read.  If the law fails to recognize this, then every one of us who has ever read his neighbor's C code to get the solution to a programming problem has broken the law.

_Brint
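The "readable by the public" convention Brint appeals to is literally one permission bit.  A minimal sketch (the function name is mine) of testing whether a Unix file grants read access to "other" users:

```python
import os
import stat

def world_readable(path):
    """True if the file's mode grants read permission to 'other' --
    the Unix convention that is routinely taken as an implicit
    presumption that anyone on the system may read the file."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)
```

Under this reading, a True result is the reader's "appropriate authorization"; whether the law agrees is exactly the question raised above.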
-----------[000013]----------------------------------------------------
From: "David D. Story" <FTD-P%MIT-OZ @ MC.LCS.MIT.EDU>
Date: 20-May-1987 00:56:17
To: security@RED.RUTGERS.EDU
I would think that the service would have to be specifically a mail service or system.  This would fall under the intended use of a system, and messages such as wayward system messages, ARPA list messages, and others do not fit the definition of mail.  It would be the service that is responsible legally, and the rest would fall under ordinary privacy laws.  This would be in line with UPS, Federal Express, Purolator, Courier, and the U.S. Mail.  There was a considerable fight some years back, I believe in the 60's, where U.S. Mail Package Service was upgraded in protection to U.P.S.

Does anyone have the exact definition of mail as used in this law, or what it takes for a system to qualify?  Must the company file for licenses to be covered under such a law?

This is extremely vague, but I applaud MIT's Tech Talk (is there any other?) for their normative editorial position toward electronic messaging and conferencing privacy.

Dave
-----------[000014]----------------------------------------------------
From: Bob Dixon <TS0400%OHSTVMA.BITNET@wiscvm.wisc.edu>
Date: 20-May-1987 12:17:53
To: SECURITY@RED.RUTGERS.EDU
Another aspect of the recent privacy legislation concerns radio receivers.  For the first time in US history, it is now illegal to tune a radio receiver to certain frequencies and listen to whatever may be transmitted there.  This refers specifically to the cellular radio frequencies in the 800 MHz range, which are used for mobile telephones.  The vendors of these systems have been telling their customers that they are just as private as wire connections, but this has never been true.  But since it improves sales to make the claim, they still do, and now there is legislation that tries to make it be true by fiat.  Any UHF TV receiver and many commonly available scanner receivers can tune to these frequencies, so it seems futile to say, in essence, "don't touch that dial" to someone who might happen to tune across those particular frequencies.  The FCC has already said they have no intention of enforcing this legislation.  The vendors could always encrypt their signals, but they do not want to, as that would raise costs and decrease profits.

I heard that some legislative body once decreed that pi = 3 exactly, because it made calculations easier.

Bob Dixon
Ohio State University
-----------[000015]----------------------------------------------------
From: <PETERSC0@VUENGVAX>
Date: 20-May-1987 12:55:49
To: security@uga
> From: Richard Cheshire <CHESHIRE@OLDBORAX.LCS.MIT.EDU>
> Great!  There's legislation to stop it!  Hurrah!  After all, look at how much
> drug legislation there is, and how it has decreased drug trafficking.  There
> are more and more laws regulating automobiles, so there will be fewer
> accidents.  That legislation also treats the "Human Nature" issues head on.
> How?  Just by making things illegal.

Agreed!  Just as with your other examples, I think it is up to individual security people and system administrators how seriously they take mail privacy and enforcement of the rules.  It seems to me that anyone who values mail privacy should work to ensure it and punish those who do the breaking, and that those who choose not to care as much should not mind so much when their mail system gets taken apart.  Yes, this would be a step backward (a giant leap?) in the standardization of legislation, and I realize it would be impractical to leave these decisions to each individual system's policies.  But is this kind of standardization necessary?

While I'm thinking (it is indeed rare): what does this do to plans I have heard about from the phone company to charge more for lines which are used with a modem?  If it is my residential line, they would have to have some kind of line monitoring.  If I am doing e-mail, is their line monitoring illegal?  Is it legal when I'm not?

Chris Petersen
Disclaimer: Who cares what I say anyway?
-----------[000016]----------------------------------------------------
From: Henry Mensch <henry@ATHENA.MIT.EDU>
Date: 20-May-1987 14:28:06
To:
>> [Isn't there a more appropriate mailing list where such things are discussed
>> continually and at length?  If not, then go for it...  _H*]

This has already been beaten to death in the RISKS digest, I think.

-- Henry
-----------[000017]----------------------------------------------------
From: <PRITCHAR%CUA.BITNET@wiscvm.wisc.edu>
Date: 20-May-1987 15:55:47
To: SECURITY@RED.RUTGERS.EDU
> [Isn't there a more appropriate mailing list where such things are discussed
> continually and at length?  If not, then go for it...  _H*]

Yes.  On BITNET, it's known as MAIL-L, and is available from a number of LISTSERVers.  I receive mine from LISTSERV@BITNIC.

Hugh Pritchard                        PRITCHAR@CUA.BITNET
Systems Programming
The Catholic University of America
Computer Center                       (202) 635-5373
Washington, DC 20064

Disclaimer: My views aren't necessarily those of the Pope.
-----------[000018]----------------------------------------------------
From: Jack Ostroff <OSTROFF@RED.RUTGERS.EDU>
Date: 21-May-1987 11:23:54
To: security@RED.RUTGERS.EDU
From my experience driving ambulances and a fire truck (both as a volunteer, not a professional): changing all four directions to red might decrease problems with traffic light phreaks, but green really helps.  Even emergency vehicles with lights and sirens on are supposed to stop before proceeding through a red light.  (I know it doesn't always happen that way, but if an emergency driver doesn't stop at a red light, any accident is considered his fault.)

The second problem is with having the lights respond to the siren.  Most emergency vehicles use electronic sirens, which can produce several kinds of sounds (wail, yelp, hi-lo), and drivers frequently keep switching between them to try to get the attention of oblivious drivers of nearly sound-proof cars.  Such sensors would have to respond to all modes of all makes of sirens used in that area.

Jack (OSTROFF@RED.RUTGERS.EDU)
-----------[000019]----------------------------------------------------
From: davy@intrepid.ecn.purdue.edu (Dave Curry)
Date: 24-May-1987 20:25:01
To: risks@csl.sri.com, security@red.rutgers.edu
When I got the MIT notice from the SECURITY list, I did a little digging in the law books (Purdue's library is a Federal Depository).  I pulled out a copy of the Act (Public Law 99-508, H.R. 4952) and a copy of Title 18 of the United States Code, which it amends.  From this (after a couple of hours of "strike words a through f, insert words g through m" -- I'd hate to be a law clerk), I extracted most of the "interesting" parts of the law.  These parts pertain to administrators and users of electronic communications services (if your machine has electronic mail or bboards, it fits into this category).  The parts I specifically went for were what we can and cannot do, what the punishment is if we do it, and what our means of recourse are if it's done to us.  I left out all the material about government agents being able to requisition things, and everything pertaining to radio and satellite communications.

So anyway, I typed all this in to give to our staff so they'd be aware of the new legislation.  Since there is probably interest in this, I am making the document available for anonymous ftp from the host intrepid.ecn.purdue.edu.  Grab the file "pub/PrivacyAct.troff" if you have troff (it looks better), or "pub/PrivacyAct.output" if you need a pre-formatted copy.

Bear in mind I'm not a lawyer, and I just typed in the parts of the law I deemed to be of interest to our staff.

--Dave Curry
-----------[000020]----------------------------------------------------
From: Steinar Haug <haug%vax.runit.unit.uninett@TOR.NTA.NO>
Date: 27-May-1987 08:54:50
To: <info-vax@KL.SRI.COM>, <security@RUTGERS.EDU>
In connection with the implementation of secure MHS (X.400 based) systems, I'm looking for any available programs to perform DES and/or RSA encryption.  Before you start telling me about it: Yes, I'm aware that there is a version of DES used in Unix systems to encrypt passwords.  Yes, I'm aware of the MP multiple-precision math package running under Unix.  The trouble with both of these is that they are simply too slow!  The Unix DES because (among other things) it was made slow on purpose; the MP package because it is a very general package using a lot of malloc/free calls.  So I'm looking for something faster, preferably written in C or Pascal, running on VMS or Unix systems.  Any help is appreciated!

Steinar Haug
Database Research Group, Computing Center at the Univ. of Trondheim, Norway
E-mail: haug%vax.runit.unit.uninett@nta-vax.arpa
        steinar@nta-vax.arpa
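For perspective on the RSA half of the request: the core operation is modular exponentiation.  Below is a square-and-multiply sketch with toy, completely insecure textbook parameters, purely to illustrate the algorithm -- a fast implementation of the sort asked for would be hand-tuned C or assembler over fixed-size multi-precision words, precisely to avoid the general-purpose allocation overhead complained about above.

```python
def mod_exp(base, exp, mod):
    """Right-to-left square-and-multiply modular exponentiation,
    the core operation of RSA encryption and decryption."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                      # multiply in this bit's factor
            result = (result * base) % mod
        base = (base * base) % mod       # square for the next bit
        exp >>= 1
    return result

# Textbook RSA with toy parameters (far too small to be secure):
p, q = 61, 53
n = p * q             # modulus 3233
e = 17                # public exponent
d = 413               # private exponent: (e * d) % lcm(p-1, q-1) == 1
m = 65                # message
c = mod_exp(m, e, n)  # encrypt
assert mod_exp(c, d, n) == m  # decrypt recovers the message
```

The loop does O(log exp) multiplications, which is why exponent and modulus size -- not the algorithm itself -- dominate RSA's cost.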
-----------[000021]----------------------------------------------------
From: *Hobbit* <AWalker@RED.RUTGERS.EDU>
Date: 29-May-1987 06:42:15
To: security@RED.RUTGERS.EDU
I recently had a chance to disassemble and examine yet another type of hotel security system. These are all-mechanical magnetic door locks made by Cor-Key Systems in California. The user is given a small white plastic card with rounded ends, and inserts same into a slot in the top of a rather large doorknob on his room. Pushing the card all the way into the slot "connects" the knob to the actual latch hardware and allows entry; otherwise the knob just spins around. The neat thing about these is that the latch and the rest of the lock are a standard lockset that could have been made by anybody, and to upgrade to the Cor-Key system one simply has to install this other doorknob. Thus the hotel, which previously had regular old key locksets, avoided a lot of expense and retrofitting. Internally, the lock works entirely by magnetism. The card is laminated plastic over a layer of rather granular magnetic material that can be magnetized in small regions and hold the field virtually forever. When the card is inserted into the slot it covers up a matrix of 35 or so holes, and the tumblers move according to how the north or south regions on the card line up with the matrix. The tumblers themselves are small cylindrical permanent magnets, and are attracted or repelled by the card regions. About nine of these are sprinkled around the matrix, leaving a lot of the holes empty. Each tumbler has a spot of either red or blue ink on one end to indicate its polarity. The parts are arranged as follows, moving toward the door along the axis of the shaft. Front doorknob surface, steel plate, card slot, thin nonmagnetic metal plate, brass plate with holes, plastic slider with wells containing the tumblers. Everything except the plastic slider is fixed in place; the slider is held in place by the tumblers, which normally are attracted partway out of their wells toward the steel plate and are thus protruding through the holes in the brass plate. 
Thus the slider can't slide, because the tumblers are locking it to the brass plate. The correct key imposes itself down between the steel plate and the tumblers, and if the regions on the key repel *all* the tumblers away from itself, all the tumblers retreat into the plastic housing out of the brass plate. Then the slider is free to move, which it does when the key is pushed down the last quarter-inch or so. This engages the latch mechanism and connects it to the knob, so the door will open when the knob is turned. There is a mechanism for rekeying a door quickly: near the bottom of the knob there are two small holes through which a small tool can be inserted. Under these are two rotating alloy carriers, each containing one tumbler. Each carrier can be rotated to one of four positions, giving a total of 16 combinations between them. Rotating one of these moves the respective tumbler to a different point in the matrix, thus disabling one key and allowing a new one to work. Guest keys would have variable encoding in these matrix regions, and the master key[s] would be configured such that they would address these tumblers regardless of where they were. Since this only creates 16 possible combinations between them, it is a "first level" of mastering which can be changed without disassembly. More in-depth mastering is done by leaving parts of the static matrix empty, but the tumblers that are installed will match the corresponding regions of the master keys. In an unmastered system, if the entire matrix were filled with tumblers, all the locks and keys would be configured the same and all keys would work everywhere. Each lock is made unique by removing different parts of the matrix, and each guest key is made unique by differently magnetizing the "don't care" regions that correspond to the empty parts of the matrix in the given door. 
Thus Guest A's key will correctly address the parts of the matrix that Room A's knob contains, but the *other* regions in his key will incorrectly address the filled matrix locations of Room B's lock. The master key essentially repels the entire matrix's worth of tumblers, whether it's there or not. It was mentioned that the master also has a hole in the appropriate place to bypass the double-locking mechanism -- normally when the door is double-locked, a small rod protrudes into the key slot and completely prevents insertion of a normal key. Each location in the matrix is numbered [not in any obvious way, but...] so that the combination can easily be represented by a computer. Although in the past when the company started, records of whose lock contained what were kept in large books, computers are now being used to keep track of this. The keys are magnetized at the desk with a machine containing an equivalent matrix full of electromagnets. These can generate, I'm told by the Cor-Key people, fields of 250 gauss or so. A key region can be made north, south, or neutral; it is possible to "read" a key's encoding by running a !small! magnet over it and feeling if it's attracted, repelled, or ignored. [One of the tumblers glued to a piece of flexible wire worked fine.] However, even examining the part of the matrix you were given only gives you a small section of the master key, so it's virtually impossible to generate a hotel master by examining your own lock. Pick this one? Forget it. The tumblers are inaccessible behind the thin nonmagnetic plate. Perhaps a very large strong electromagnet could fit over the entire knob, remagnetize *all* the tumblers one way [good luck!] and then apply a gentler field in the reverse direction to push them all inward. I really don't see something like this working either. 
An expensive and precise piece of equipment could conceivably be built to stick a small coil down into the slot and "read" the matrix by applying fields in different directions while the user listens for each individual tumbler to bang against one end or the other.  Yuk.  Conceptually, therefore, the Cor-Key is fairly secure.  Unfortunately the workmanship of the lock itself is a bit on the shoddy side, and I was told by the people who build them that the official "backdoor", used in cases where the lock is completely screwed up, is to drill a hole in a magic spot and force the latch mechanism to engage.  Furthermore, to *really* re-key the lock it must be taken completely apart, because any key encoded the same all over the two changeable regions will open the lock regardless of where the carriers are rotated to.

_H*
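The mastering logic described above can be checked with a toy model (every position, polarity, and name below is invented for illustration).  A lock is the set of tumblers it installs, each matching the master key's polarity at that position; a guest key differs from the master only in the "don't care" regions its own lock leaves empty, which is why it misaddresses the tumblers another room's lock does contain.

```python
MATRIX = list(range(35))                   # the ~35-hole matrix
# Master-key polarity at every position (arbitrary pattern for the demo).
MASTER = {pos: ('N' if pos % 3 else 'S') for pos in MATRIX}

def make_lock(positions):
    """A lock installs tumblers at a subset of matrix positions; each
    tumbler matches the master's region there, so the master key
    repels every tumbler in every lock of the system."""
    return {pos: MASTER[pos] for pos in positions}

def make_guest_key(own_lock, flips):
    """A guest key is correct at its own lock's positions; its 'don't
    care' regions are encoded differently per guest.  Here we flip the
    listed positions to model that per-guest variation."""
    key = dict(MASTER)
    for pos in flips:
        if pos not in own_lock:            # never corrupt our own regions
            key[pos] = 'S' if key[pos] == 'N' else 'N'
    return key

def opens(key, lock):
    """A key opens a lock iff it repels (here: matches the polarity of)
    every tumbler the lock actually contains."""
    return all(key[pos] == pol for pos, pol in lock.items())

room_a = make_lock({2, 7, 11})
room_b = make_lock({3, 9, 11})
key_a = make_guest_key(room_a, flips={3, 9})  # wrong in B's regions
```

The master (`dict(MASTER)`) opens both rooms, while `key_a` opens only room A.  The rotating carriers would correspond to moving one lock entry to a different position, invalidating old guest keys without touching the master's full-matrix coverage.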
-----------[000022]----------------------------------------------------
From: Michael Robinson <MIKE@UTCVM>
Date: 29-May-1987 13:55:21
To: SECURITY Digest <SECURITY@UGA>
You'd have an awfully hard time proving a case of electronic espionage against someone if you failed to take any steps to protect your own interests.  The burden of proof always rests with the prosecution.

The easiest way to send sensitive information is to use the telephone or some other network which has made some sort of legal guarantee of privacy, and to take some sort of action to protect your interests in the event of casual contact.  For example, you can encrypt the message and attach a plaintext notice which clearly states that the contents of the message are confidential.  Casual contact with the message will not damage your interests, and no one who tampers with the message can honestly say that they didn't know it was wrong.

/mr/
-----------[000023][next][prev][last][first]---------------------------------------------------- To: security@red.rutgers.edu Subject: security article
I've gotten over 50 requests for this article. I'm not answering them any more. Instead, I'm posting the article to the list... -simson

% (C) 1987, Simson L. Garfinkel. % May not be transmitted or copied without permission

An Introduction to Computer Security For Lawyers

(Most of the examples in this article are based on actual events.)

A small business has its accounting records erased by a malicious high school student using a home computer and a modem. Did the business take reasonable security precautions to prevent this sort of damage? A friend gives you a public domain program which greatly improves your computer's performance. One day, you find that the program has stopped working, along with all of your wordprocessor, spreadsheet and database programs.

It is important for legal practitioners to understand issues of computer security, both for the protection of their own interests and the interests of their clients. Lawyers today must learn to recognize insecure computer systems and lax operating procedures in the same way as lawyers now recognize poorly written contracts. Additionally, as computers become more pervasive, more legal cases will arise which revolve around issues of computer security. Unless familiar with the basic concepts of computer security, a lawyer will not know how to approach the question. Not being a lawyer, the author will not attempt to address the legal aspects surrounding computer security. Instead, the goal of this article is to convey to the reader a basic understanding of the technical issues in the field. Even a simple understanding of computer security will afford the average lawyer protection from the accidental loss or theft of documents and data stored in the firm's computer systems, and allow the lawyer to begin to evaluate cases in which the bypassing of computer security is of primary interest.
This article attempts to broadly cover questions of computer security in the small business or law firm. Because of its objectives, this article is not a step-by-step guide on how to make a law firm computer more secure: Instead, this article hopes to acquaint the reader with the issues involved so that the reader may then be able to analyze systems on a case-by-case basis and recognize when outside assistance is required.

Simply defined, computer security is the process, procedures, or tools which assure that data entered into a computer today will be retrievable at a later time by, and only by, those authorized to do so. The procedures should additionally include systems by which computer system managers (simply ``management'' in future references) will be notified when attempts at penetrating security are made. Security is violated when some person or persons (the ``subverter'') succeeds in retrieving data without authorization. Security is also breached when the subverter manages to destroy or alter data belonging to others, making retrieval of the original data impossible.

Although a substantial effort has been spent in the academic and computer research communities exploring issues of computer security, little of what is understood has been put into practice on a wide scale. Computers are not inherently insecure, but there is a great temptation to build and run computers with lax security procedures, since this often results in simpler and faster operation. If security considerations are built into a product from the beginning they are relatively low cost; security added as an after-thought is often very expensive. Additionally, many computer users are simply not aware of how their facilities are insecure and how to rectify the situation.

Who are the subverters?

It is a mistake to assume that all people bent on stealing or destroying data can be grouped together and that similar defenses are equally effective against all subverters.
In practice, there are two major groups: those who want to steal data and those who wish to destroy it. The first group can be called ``spies,'' the second group can be called ``vandals'' or ``crackers.'' Different security measures are targeted at each group. Spies are sometimes exactly that: spies, either governmental or corporate, who stand to gain from the possession of confidential or secret data. Other times, spies are employees of the organization that owns the computer -- employees who seek information in the computer for personal advancement or blackmail. Crackers are typically adolescent boys who have a computer and a modem. They are usually very intelligent and break into computer systems for the challenge. They communicate with their friends via computer bulletin boards, often using stolen AT&T credit card or MCI numbers to pay for the calls. On these boards, crackers report phone numbers, user names, passwords and other information regarding computer systems they have ``discovered.'' Many crackers are aware that their actions are illegal and cease them on their 18th birthday to avoid criminal liability for their actions. ``Vandals'' describes a larger group which includes both crackers and other people likely to vandalize data, such as disgruntled employees.

Computer security has two complementary goals, each tailored to a particular set of opponents. The first goal is to make the cost of violating the computer security vastly greater than the value of the data which might be stolen. This is designed to deter the spies, who are interested in stealing data for its value. The second goal of security is to make it too difficult for crackers to gain access to a computer system within a workable period of time.
Three terms: operating system, accounts and passwords

The program which controls the basic operations of a computer is referred to as the computer's ``operating system.'' Often the same computer can be used to run several different operating systems (but not simultaneously). For example, the IBM PC/AT can run either the MSDOS operating system or Xenix, a Unix-based operating system. Under these two operating systems, the PC/AT has completely different behavior. If a computer system is intended for use by many people, the operating system must distinguish between users to prevent them from interfering with each other. For example, most multi-user operating systems will not allow one user to delete files belonging to another user unless the second user explicitly gives permission. Typically, each user of the computer is assigned an ``account.'' The operating system then does not allow commands issued by the user of one account to modify data which was created by another account. Accounts are usually named with between one and eight letters or numbers which are also called ``usernames.'' Typical usernames that the author has had include ``simsong'', ``Garfinkel'', ``slg'', ``SIMSON'' and ``ML1744.'' Most operating systems require that a user enter both the account name and a ``password'' in order to use the account. Account names are generally public knowledge while passwords are secret, known only to the user and the operating system. (Some operating systems make passwords available to system management, an insecure practice which will be explored in a later section.) Since the account cannot be used without the password, the name of the account can be made public knowledge. If a cracker does break into an account, only the password needs to be changed. Knowing a person's username is mandatory in order to exchange electronic mail.

How much security?
In most computer systems, security is purchased at a cost in system performance, ease of use, complexity and management time. Many government systems have a full time ``security officer'' whose job is to supervise and monitor the security operations of the computer facility. Many universities are also extremely concerned about security, since they are well-marked targets for crackers in the surrounding community. Most businesses, however, are notoriously lax in their security practices, largely out of ignorance and a lack of direct experience. Security exists in many forms: An operating system may be programmed to prevent users from reading data they are not authorized to access. Security may be procedures followed by computer users, such as disposing of all printouts and unusable magnetic media in shredders or incinerators. Security may be in the form of alarms and logs which tell the management when a break-in is attempted and/or successful. Security may be a function of hiring procedures which require extensive security checks of employees before allowing them to access confidential data. Lastly, security may be in the form of physical security, such as locks on doors and alarm systems intended to protect the equipment and media from theft. In a secure environment, the many types and layers of security are used to reinforce each other, with the hope that if one layer fails another layer will prevent or minimize the damage. Established protocol and judgment are required to determine the amount and cost of security which a particular organization's data warrant.

Security through obscurity

Security through obscurity is the reliance upon little known and often unchangeable artifacts for security. Security through obscurity is not a form of security, although it is often mistaken for such. Usually no mechanism informs site management that the ``security'' has been circumvented.
Often intrusions are not detected until significant damage has been done or the intruder gets careless. Once damage is detected, management has little choice but to choose a new security system which does not depend on obscurity for its strength. The classic example of security through obscurity is the family that hides the key to the front door under the ``Welcome'' mat. The only thing to stop a burglar from entering the house is his ignorance of the hidden key's existence and location -- that is, the key's obscurity. If the house is burglarized and the burglar returns the key to its original place, the family will have no way of knowing how the burglar got in. If the family does change the location of the hidden key, all the burglar needs to do is to find it again. A higher level of security would be achieved by disposing of the hidden key and issuing keys to each member of the family. For an example of security through obscurity on a computer, imagine the owner of a small business who uses her IBM PC for both day-to-day bookkeeping and management of employee records. In an attempt to keep the employee records hidden from her employees, she labels the disk ``DOS 1.0 BACKUP DISK.'' The owner's hope is that none of the employees will be interested in the disk after reading the label. Although the label may indeed deter inquisitive employees, there are far better ways to secure the disk (such as locking it in a file cabinet). In a second example of security through obscurity, a secretary stores personal correspondence on her office wordprocessor. To hide the documents' existence, she chooses filenames for them such as MEMO1, MEMO2, ..., and sets the first three pages of the documents to be the actual text of old, inter-office memos. Her private letters are obscurely hidden after the old memos. Once her system is discovered, none of her correspondence is secure.
Physical Security

Physical security refers to devices and procedures used to protect computer hardware and media. Physical security is the most important aspect of computer security. Because of the similarities between computers and other physical objects, physical security is the aspect of computer security which is best understood. Like typewriters and furniture, office computers are targets for theft. But unlike typewriters and furniture, the cost of a computer theft can be many times the dollar value of the equipment stolen. Often, the dollar value of the data stored inside a computer far exceeds the value of the computer itself. Very strict precautions must be taken to ensure that computer equipment is not stolen by casual thieves.

Hardware

A variety of devices are available to physically secure computers and computer equipment in place. Examples are security plates which mount underneath a computer and attach it to the table that it rests on. Other approaches include the use of heavy-duty cables threaded through holes in the computer's cabinet. It is important, when installing such a restraining device, to assure that it will not damage or interfere with the operation of the computer (more than one installation has had workmen drill holes through circuit boards to bolt them down to tables).

Backups

To ``back up'' information means to make a copy of it from one place to another. The copy, or ``backup,'' is saved in a safe place. In the event that the original is lost, the backup can be used. Backups should be performed regularly to protect the user from loss of data resulting from hardware malfunction. Improved reliability is a kind of security, in that it helps to assure that data stored today will be accessible tomorrow. The subverter in such an event might be a faulty chip or a power spike. Backups stored off site provide insurance against fire. Backups are also vital in defending against human subverters.
If a computer is stolen, the only copy of the data it contained will be on the backup, which can then be restored on another computer. If a cracker breaks into a computer system and erases all of the files, the backups can be restored, assuming that the cracker does not have access to or knowledge of the backups. But backups are a potential security problem. Backups are targets for theft by spies, since they can contain exact copies of confidential information. Indeed, backups warrant greater physical security than the computer system, since the theft of a backup will not be noticed as quickly as the theft of media containing working data. With recognition of the potential security hole of backups, some computer systems allow users to prevent specific files from being backed up at all. Such action is justified when the potential cost of having a backup tape containing the data stolen is greater than the potential cost of losing the data due to equipment malfunction, or when the data stored on the computer is itself a copy of a secure master source, such as a tape in a file cabinet.

Sanitizing

Floppy disks and tapes grow old and are often discarded. Hard disks are removed from service and returned intact to the manufacturer for repair or periodic maintenance. Disk packs costing thousands of dollars are removed from equipment and resold. If these media ever contained confidential data, special precautions must be taken to ensure that no traces of the data remain on the media after disposal. This process is called ``sanitizing.'' To understand sanitizing, first it is necessary to understand how information is recorded on magnetic media: The typical PC floppy disk can store approximately 360 thousand characters. Each of these characters consists of 8 binary digits, called ``bits,'' which can be set to ``0'' or ``1.'' Information on the disk is arranged into files. One part of the disk, called the directory, is used to list the name and location of every file.
Using the operating system's delete-file command (such as the MSDOS ``erase'' command) is not sufficient to ensure that stored data cannot be recovered by skilled operators. Most delete-file commands do not actually erase the target file from a diskette: instead, the command merely erases the name of the file from the diskette's directory. This action frees the storage area occupied by the file for use but does not modify the data in any way. The file itself remains intact and can be recovered at a later time if it has not been overwritten. Many programs exist on the market to do just this. Even if the actual file contents are overwritten or erased -- that is, even if all of the bits used to store the contents of the file are set to ``0'' -- it is still possible to recover the original data, although not with normal operating procedures. Imagine a black and white checkerboard used as a computer memory. Assume that the value of any square on the checkerboard is proportional to the darkness of the square: the black squares are 1s and the white squares are 0s. Now consider what happens when the checkerboard is painted with one coat of white paint: the original checkerboard pattern is still discernible, but less so. The squares which formerly had a value of 1 now evaluate to 0.1 or 0.2. When the computer reads the memory, the 0.1 or 0.2 are rounded to 0. But an expert with special equipment could easily recover the original pattern. Just as the pattern can be recovered from a checkerboard uniformly painted, data can be recovered from a floppy disk which has been uniformly erased or reformatted. Typical sanitization procedures involve writing a 1 to every location on the media, then a 0 to every location, then filling the media with random data. To use the checkerboard analogy, this would be the same as painting the board black, then white, then with a different checkered pattern. The original pattern should then be undetectable.
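The multi-pass overwrite just described can be sketched in a few lines. This is a minimal illustration only: the filename is hypothetical, and overwriting from software gives no guarantee on drives that remap or cache sectors internally.

```python
import os

def sanitize(path):
    """Overwrite a file with all 1-bits, then all 0-bits, then random
    data -- the checkerboard painted black, then white, then with a
    different pattern -- and finally delete it."""
    size = os.path.getsize(path)
    with open(path, 'r+b') as f:
        for fill in (b'\xff' * size, b'\x00' * size, os.urandom(size)):
            f.seek(0)
            f.write(fill)
            f.flush()
            os.fsync(f.fileno())   # force each pass out to the device

    os.remove(path)

# Usage sketch:
with open('payroll.dat', 'wb') as f:
    f.write(b'confidential employee records')
sanitize('payroll.dat')
print(os.path.exists('payroll.dat'))   # False
```

Contrast this with a plain delete-file command, which would leave the twenty-nine bytes of payroll data sitting untouched on the disk.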
Additional effort might be desired when dealing with very sensitive data. Sanitizing is obviously an expensive and time consuming process. Physical destruction of the media represents an attractive alternative -- simply feeding the floppy disk (or the checkerboard) into a paper shredder does very well. Unfortunately, physical destruction is not economically possible with expensive media which must be returned for service or for resale in order to recover costs of purchase.

Authentication

Authentication is the process by which the computer system verifies that a user is who the user claims to be, and vice versa. Systems of authentication are usually classified as being based on: Something the user has (keys). Something the user knows (passwords). Something the user is (fingerprints).

Passwords

A password is a secret word or phrase which should be known only to the user and the computer. When the user attempts to use the computer, he must first enter the password. The computer compares the typed password to the stored password and, if they match, allows the user access. Some computer systems allow management access to the list of stored passwords; doing so is generally regarded as an unsound practice. If a cracker gained access to such a list, every password on the computer system would have to be changed. Other computers store passwords after they have been processed by a non-invertible mathematical function. The user's typed password cannot be derived from the processed password, eliminating the damage resulting from the theft of the master password list. The password that the user types when attempting to log on is then transformed with the same mathematical function and the two processed passwords are compared for equality.

What makes a secure password?

Insecure passwords are passwords which are easy for people to guess.
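The non-invertible scheme described above can be sketched as follows. SHA-256 here is a modern stand-in for the processing function, and the account name and passwords are hypothetical; real systems also add a per-user salt so that identical passwords do not produce identical stored values.

```python
import hashlib

def process(password):
    """One-way transform: the digest cannot be turned back into the
    password, so stealing the stored list reveals nothing directly."""
    return hashlib.sha256(password.encode()).hexdigest()

# Only processed passwords are stored on the system.
stored = {'simsong': process('kx7#qzWp')}

def log_in(username, typed):
    """Transform the typed password and compare the processed forms."""
    return stored.get(username) == process(typed)

print(log_in('simsong', 'kx7#qzWp'))   # True
print(log_in('simsong', 'cinnamon'))   # False

# Why easy-to-guess passwords still fail: a cracker can process a
# whole dictionary and compare against a stolen list of digests.
wordlist = ['password', 'simsong', 'cinnamon', 'qwerty']
weak = process('cinnamon')
print([w for w in wordlist if process(w) == weak])   # ['cinnamon']
```

Note that the one-way function protects only passwords that are not in the cracker's word file; it does nothing for a user whose password is an English word.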
Examples of these include passwords which are the same as usernames, common first or last names, passwords of four characters or less, and English words (all English words, even long ones like ``cinnamon''). A few years ago, the typical cracker would spend many hours at his keyboard trying password after password. Today, crackers have automated this search with personal computers. The cracker can program his computer to try every word in a large file. Typically, these files consist of thirty thousand word dictionaries, lists of first and last names and easy-to-remember keyboard patterns. Examples of secure passwords include random, unpronounceable combinations of letters and numbers and several words strung together. Single words spelled backwards, very popular in some circles, are not secure passwords since crackers started searching for them. The second characteristic of a secure password (and of a secure computer) is that it is easily changed by the user. Users should be encouraged to change their passwords frequently and whenever they believe that someone else has been using their account. This way, if a cracker does manage to learn a user's password, the damage will be minimized. It should go without saying that passwords should never be written down, told to other people or chosen according to an easily predicted system.

Smart Cards

If the communication link between the user and the computer is monitored, even the longest and most obscure password can be recorded, giving the eavesdropper access to the account. The answer, some members of the computer community believe, is for users to be assigned mathematical functions instead of passwords. When the user attempts to log on, the computer presents him with a number. The user applies his secret function (which the computer knows) to the number and replies with the result.
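This challenge-response exchange can be sketched in toy form. The secret function here, ``multiply by 2,'' is deliberately weak for illustration; a real smart card would use a keyed function far too complex to infer from a few observed challenge-response pairs.

```python
import secrets

def secret_function(x):
    """The user's secret formula -- here, multiply by 2. Known only
    to the user (or her smart card) and the host computer."""
    return x * 2

def host_login_attempt(respond):
    """The host issues a fresh random challenge and checks that the
    caller's response matches the expected value."""
    challenge = secrets.randbelow(10_000_000)
    return respond(challenge) == secret_function(challenge)

# The legitimate user can always answer the fresh challenge:
print(host_login_attempt(secret_function))    # True

# An eavesdropper sees only one input/output pair on the wire...
print(secret_function(1234567))               # 2469134
# ...but replaying that old response fails on a new challenge:
print(secret_function(7654321) == 2469134)    # False
```

The password itself never crosses the communication link; only challenges and responses do, and each login uses a different pair.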
Since the listener never sees the function, only the input and the result, tapping the communications link does not theoretically give one access to the account. Assume, for example, that user P's formula is ``multiply by 2.'' When she tries to log in, the computer prints the number ``1234567.'' She types back ``2469134,'' and the computer lets her log in. A problem with this system is that unless very complicated formulas are used, it is relatively easy for an eavesdropper to figure out the formula. Very complicated formulas can be implemented with the ``smart card,'' which is a small credit-card sized device with an embedded computer instead of a magnetic strip. The host computer transmits a large (100 digit) number to the smart card which performs several thousand calculations on the number. The smart card then transmits the result back to the host. Obviously, dedicated hardware consisting of the smart cards themselves and a special reader is required. Smart cards change authentication from something the user knows (a password) to something the user has (a smart card). Naturally, the theft of a smart card is equivalent to the disclosure of a password. Smart cards have been proposed as a general replacement for many password applications, including logon for very secure computers, verification of credit cards, ATM cards and identity cards. Since the cards are authenticated by testing a mathematical function stored inside the card on a silicon computer, rather than a number stored on a magnetic strip, the cards would be very difficult to duplicate or forge. They are also very expensive.

Authentication of the computer: The Trojan Horse problem

While most computer systems require that the user authenticate himself to the computer, very few provide a facility for the computer to authenticate itself to the user! Yet, computer users face the same authentication problems a computer does.
For example, a user sits down at a terminal to log onto a computer and is prompted to type his username and his password. What assurance does the user have that the questions are being asked by the operating system and not by a program that has been left running on the terminal? Such a program -- called a Trojan Horse -- can collect hundreds of passwords in a very short time. Well written Trojan horses can be exceedingly difficult to detect. Another example of a Trojan horse program is a program which claims to perform one function while actually performing another. For example, a program called DSKCACHE was distributed on some computer bulletin board systems in the New York area in December 1985. The program substantially improved disk i/o performance of an IBM Personal Computer, encouraging people to use the program and give it to their friends. The hidden function of DSKCACHE was to erase the contents of the computer's disk when it was run on or after the trigger date, which was March 24, 1986. Trojan horses are possible because reliable ways in which the computer can authenticate itself to the user are not widespread.

Computer Viruses

A computer virus is a malicious program which can reproduce itself. The DSKCACHE program described above is a sort of computer virus that used humans to propagate. Other computer viruses copy themselves automatically when they are executed. Viruses have been written which propagate by telephone lines or by computer networks. The computer virus is another problem of authentication: Since programs have no way of authenticating their actions, the user must proceed on blind trust when running them. When I use a text editor on my computer, I trust that the program will not maliciously erase all of my files. There are times that this trust is misplaced. Computer viruses are some of the most efficient programs at exploiting trust. One computer virus is a program which when run copies itself over a randomly located program on the hard disk.
For example, the first time the virus is run it might copy itself onto the installed wordprocessor program. Then, when either the original virus program or the wordprocessor program is run, another program on the hard disk will be corrupted. Soon there will be no programs remaining on the disk besides the virus. A more clever virus would merely modify the other programs on the disk, inserting a copy of itself, and then remain dormant until a particular target date was reached. The virus might then print a ransom note and prevent use of the infected programs until a ``key'' was purchased from the virus' author. Once a system is infected, the virus is nearly impossible to eradicate. The real danger of computer viruses is that they can remain dormant for months or years, then suddenly strike, erasing data and making computer systems useless (since all of the computer's programs are infected with the virus). Viruses could also be triggered by external events such as phone calls, depending on the particular computer. A number of authors have suggested ways of using computer viruses for international blackmail by infecting the nation's banking computers with them. Viruses can and have been placed by disgruntled employees in software under development. Such viruses might be triggered when the employee's name is removed from the business' payroll. There are several ways to defend against computer viruses. The cautious user should never use public domain software, or only use such software after a competent programmer has read the source-code and recompiled the executable-code from scratch. {Computer programs are usually written in one of several English-like languages and then processed, using a program called a compiler, into a form which the computer can execute directly.
While even a good programmer would have a hard time detecting a virus if presented solely with the executable code, they are readily detectable in source-code.}

Telecommunications

Modems

The word MODEM stands for Modulator/Demodulator. A modem takes a stream of data and modulates it into a series of tones suitable for broadcast over standard telephone lines. At the receiving end, another modem demodulates the tones into the original stream of data. In practice, modems are used in two distinct ways: A) File Transfer and B) Telecomputing. When used strictly for file transfer, modems are used in a fashion similar to the way that many law firms now use telecopier machines. One computer operator calls another operator and they agree to transfer a file. Both operators set up the modems, transmit the file and then shut down the modems, usually disconnecting them from the phone lines. When used in this manner, the two computer operators are essentially authenticating each other over the telephone. (``Hi, Sam? This is Jean.'' ``Hi Jean. I've got Chris' file to send.'' ``Ok, send it. Have a nice day.'') If one operator didn't recognize or had doubts about the other operator, the transfer wouldn't proceed until the questions had been resolved. This system is called attended file transfer. Modems can also be used for unattended file transfer, which is really a special case of telecomputing. In telecomputing, one or more of the modems involved is operated without human intervention. In this configuration, a computer is equipped with a modem capable of automatically answering a ringing telephone line. Such modems are called AA (for ``auto answer'') modems. When the phone rings, the computer answers. After the modem answers, the caller is required to authenticate himself to the computer system (at least, this is the case when a secure computer system is used), after which the caller is allowed to use the computer system or perform file transfer.
In most configurations, the computer system does not authenticate itself to the caller, creating a potential for Trojan horse programs to be used by subverters (see above). AA modems answer the telephone with a distinctive tone. If a cracker dials an AA modem, either by accident or as the result of a deliberate search, the tone is like a neon sign inviting the cracker to try his luck. Fortunately, most multi-user operating systems are robust enough to stand up to even the most persistent crackers. Most personal computers are not so robust, although this depends on the particular software being used. Leaving a PC unattended running a file-transfer program is an invitation for any calling cracker to take every file on the machine he can find, especially if the file-transfer program uses a well known protocol and does not require the user to type a password. The only security evident is the obscurity of the telephone number, which may not be very obscure at all, and of the file transfer program's protocol.

Call back and password modems

Modem manufacturers have attempted two strategies to make AA modems more secure: passwords and call back. When calling a password modem, the user must first type a password before the modem will pass data to the host computer. The issues involved in breaking into a computer system protected by password modems are the same as in breaking into a computer system which requires that users enter passwords before logging in. A good password modem has a password for every user and records the times that each user calls in, but most password modems only have one password. For most operating systems a password modem is either overkill, since the operating system provides its own password and accounting facilities, or useless, since any functionality which a password modem provides can be implemented better by programs running on the computer to which a non-password modem is attached.
But for an unattended microcomputer performing file transfer, a password modem may be the only way to achieve a marginal level of security. A call back modem is like a password modem, in that it requires the caller to type in a preestablished password. The difference is that a call back modem then hangs up on the caller and ``calls back'' -- the modem dials the phone number associated with the password. The idea is that even if a cracker learns the password, he cannot use the modem because it won't call him back. In practice, shortcomings in the telephone system make call back modems no more secure than password modems. Most telephone exchanges are ``caller controlled,'' which means that a connection is not broken until the caller hangs up. If the cracker, after entering the correct password, doesn't hang up, the modem will attempt to ``hang up,'' pick up the phone, dial and connect to the cracker's modem (since the connection was never dropped). A few modems will not begin dialing until they hear a dial tone, but this is easily overcome by playing a dial tone into the telephone. The idea of call back can be made substantially more secure by using two modems, so that the returned call is made on a different telephone line than the original call was received on. Call back of this type must be implemented by the operating system rather than the modem. Two modem call back is also defeatable by use of the ``ring window,'' explained below: How many times have you picked up the telephone to discover someone at the other end? The telephone system will connect the caller before it rings the called party's bell if the telephone is picked up within a brief period of time, called the ``ring window.'' That is -- when a computer (or person) picks up a silent telephone, there is no way to guarantee that there will be no party at the other end of the line.
There is no theoretical way around the ring window problem with the current telephone system, but the problem can be substantially minimized by programming the dialout modem to wait a random amount of time before returning the call. The principal advantage of a call back modem is that it allows the expense of the telephone call to be incurred at the computer's end, rather than at the caller's end. One way to minimize telecommunication costs might be to install a call back modem with a WATS line. In general, both password and call back modems represent expensive equipment with little or no practical value. They are becoming popular because modem companies, playing on people's fears, are making them popular with advertising.

Computer Networks

A network allows several computers to exchange data and share devices, such as laser printers and tape drives. Computer networks can be small, consisting of two computers connected by a serial line, or very large, consisting of hundreds or thousands of systems. One network, the Arpanet, consists of thousands of computers at universities, corporations and government installations all over the United States. Among other functions, the Arpanet allows users of any networked computer to transfer files or exchange electronic mail with users at any other networked computer. The Arpanet also provides a service by which a user of one computer can log onto another computer, even if the other computer is several thousand miles away. It is this utility of the network which presents potential security problems. A file transfer facility can be used to steal files; remote access can be used to steal computer time. A spy looking for a way to remove a classified file from a secure installation might use the network to ``mail'' the document to somebody outside the building. Unrestricted remote access to resources such as disks and printers places these devices at the mercy of the other users of the network.
A substantial amount of the Arpanet's system software is devoted to enforcing security and protecting users of the network from each other. In general, computer networks can be divided into two classes: those that are physically secure and those that are not. A physically secure network is a network in which the management knows the details of every computer connected at all times. An insecure network is one in which private agents, employees, saboteurs and crackers are free to add equipment. Few networks are totally insecure.

Encryption

What is encryption? The goal of encryption is to translate a message (the ``plaintext'') into a second message (the ``cyphertext'') which is unreadable without the possession of additional information. This translation is performed by a mathematical function called the encryption algorithm. The additional information is known as the ``key.'' In most encryption systems, the same key is used for encryption as for decryption. Encryption allows the content of the message to remain secure even if the cyphertext is stored or transmitted via insecure methods (or even made publicly available). The security in such a system resides in the strength of the encryption system employed and the security of the key. In an ideal cryptographic system, the security of the message resides entirely in the secrecy of the key.

When Julius Caesar sent his reports on the Gallic Wars back to Rome, he wanted the content of the reports to remain secret until they reached Rome (where his confidants would presumably be able to decode them). To achieve this end, he invented an encryption system now known as the Caesar Cipher. The Caesar Cipher is a simple substitution cipher in which every letter of the plaintext is substituted with the letter three places further along in the alphabet.
Thus, the word:

AMERICA

encrypts as

DPHULFD

The ``key'' of the Caesar Cipher is the number of letters which the plaintext is shifted (three); the encryption algorithm is the rule ``shift all letters in the plaintext by the same number of characters.'' The Caesar Cipher isn't very secure: if the algorithm is known, the key is deducible by a few rounds of trial and error. Additionally, the algorithm is readily determinable by lexicographical analysis of the cyphertext. Recently, the author sent a postcard to a friend which was encrypted with the Caesar Cipher (without any information on the card that it was encrypted or which system was used): the postcard was decoded in five minutes.

Modern cryptography systems assume that both the encryption algorithm and the complete cyphertext are publicly known. Security of the plaintext is achieved by security of the key. Cryptographic keys are typically very large numbers. Since people find it easier to remember sequences of letters than numbers, most cryptographic systems allow the user to enter an alphabetic key which is translated internally into a very large number. Ideally, it should be impossible for a spy to translate the cyphertext back into plaintext unless he is in possession of the key. In practice, there are a variety of methods by which cyphertext can be decrypted. Breaking cyphers usually involves detecting regularities within the cyphertext and repeated decoding attempts of the cyphertext with different keys. This process requires considerable amounts of computer time and (frequently) a large portion of the cyphertext. As there are many excellent books written on the subject of cryptography, it will not be explored in depth here.

Why encryption?

Encryption makes it more expensive for spies to steal data, since even after the data is stolen it must still be decrypted. Encryption thus provides an additional defense layer against data theft after other security systems have failed.
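The Caesar Cipher described above is small enough to express in a few lines of Python; decryption is just a shift in the opposite direction:

```python
def caesar(text, key=3):
    # Shift every letter `key` places along the alphabet, wrapping at Z.
    out = []
    for ch in text.upper():
        if 'A' <= ch <= 'Z':
            out.append(chr((ord(ch) - ord('A') + key) % 26 + ord('A')))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return ''.join(out)

# Encrypt with a shift of three; decrypt by shifting back.
print(caesar("AMERICA"))      # -> DPHULFD
print(caesar("DPHULFD", -3))  # -> AMERICA
```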
On computer systems without security, such as office IBM PCs shared by several people, encryption is a means of providing privacy of data between users. Instead of copying confidential files to removable media, users can simply encrypt their files and leave them on the PC's hard disk. Of course, the files must be decrypted before they can be used again, and encryption of files does not protect them from deletion. Encryption allows confidential data to be transmitted via insecure systems, such as telephone lines or by courier. Encryption allows one to relax other forms of security with the knowledge that the encryption system is reasonably secure.

Costs of Encryption

Encryption is not without its costs. Among these are the expenses of the actual encryption and decryption, the costs associated with managing keys, and the degree of security required of the encryption program. Beyond the cost of purchasing the encryption system, there are costs associated with the employment of cryptography as a security measure. Encrypting and decrypting data requires time. Most cryptography systems encrypt plaintext to cyphertext containing many control characters: special file-transfer programs must be used to transmit these files over telephone lines. In many cryptography systems, a one-character change in the cyphertext will result in the rest of the cyphertext being indecipherable, requiring that 100 percent reliable data transmission and storage systems be used for encrypted text. If the encryption program is lost or if the key is forgotten, an encrypted message becomes useless. This characteristic of cryptography encourages many users to store both an encrypted and a plaintext version of their message, which dramatically reduces the security achieved from the encryption in the first place. An encryption program should be the most carefully guarded program on the system.
A cracker/spy might modify the program so that it records all keys in a special file on the system, or so that it encrypts all files with the same key (known to the cracker), or with an easy-to-break algorithm rather than the advertised one. Management should regularly verify an encryption program to assure that it is providing its expected function, and only its expected function.

Key Management

Key management is the process by which cryptographic keys are decided upon and changed. For maximum security, keys (like passwords) should be randomly chosen combinations of letters and numbers. Keys should not be reused (that is, every message should be encrypted with a different key) and no written copy of the key should exist. Few computer users are able to adhere to such demanding protocols.

Encryption as a defense against crackers

If a database is stored in encrypted form, it becomes nearly impossible for a saboteur to make fraudulent entries unless the encryption key is known. This provides an excellent defense against crackers and saboteurs who vandalize databases by creating fraudulent entries. On a legal, accounting or medical records system, it is far more damaging to have a database unknowingly modified than destroyed. A destroyed database can be restored from backups; modifications to a database may require weeks or months to detect. Unfortunately, few database programs on the market use encryption for stored files.

Some operating systems store user information, such as passwords, in encrypted form. As noted previously, when passwords are stored with a one-way encryption algorithm it is of little value to a cracker to steal the file which contains user passwords. The UNIX operating system is so confident in its encryption system that the password file is readable by all users of the system; to date, it does not appear that this confidence is misplaced.
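The one-way scheme works because the system never stores the password itself, only its encrypted form; at login it encrypts what the user types and compares the results. A minimal sketch, substituting a modern salted hash for the crypt() routine of the period:

```python
import hashlib, os

def store_password(password):
    # Keep only a salted one-way hash; the plaintext is never written down.
    salt = os.urandom(8)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt.hex() + ":" + digest

def check_password(stored, attempt):
    # Re-encrypt the attempt with the same salt and compare the results.
    salt_hex, digest = stored.split(":")
    salt = bytes.fromhex(salt_hex)
    return hashlib.sha256(salt + attempt.encode()).hexdigest() == digest
```

Stealing the stored entry yields only the hash; recovering the password from it requires the same kind of exhaustive guessing described later under ``Cracking.''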
Encryption in practice

In practice, there are several serviceable cryptography systems on the market; most of them use different cryptographic algorithms, which is both advantageous and disadvantageous to the end user. One advantage of the availability of many different cryptography systems is that secrecy of the encryption system adds to the security of the plaintext. This is a form of security through obscurity and should not be relied on, but its presence will slightly strengthen security. A disadvantage of the multitude of encryption systems is that the transmitter of an encrypted message must ensure that the proposed recipient knows which decryption algorithm to use and has a suitable program, in addition to knowing the decryption key.

Public-key encryption

In some cryptography systems a different key is used to encrypt a message than to decrypt it. Such systems are called ``public-key'' systems, because the encrypting key can be made public without (in theory) sacrificing the security of encrypted messages. There are several public key systems in existence; all of them have been broken with the exception of a system devised by Rivest, Shamir and Adleman called RSA. In RSA, the private key consists of two large prime numbers while the public key consists of the product of the two numbers. The system is considered to be secure because it is not possible, with today's computers and algorithms, to factor numbers several hundred digits in length. The problem with RSA is determining the size of the prime numbers to use: they must be large enough so that their product cannot be factored within a reasonable amount of time, yet small enough to be manipulated and transmitted by existing computers in a reasonable time frame. The problem is compounded by the fact that new factoring algorithms are constantly being developed, so a number which is long enough today may not be long enough next week.
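The arithmetic behind RSA can be illustrated with deliberately tiny primes (the numbers below are textbook values chosen for illustration; real keys are hundreds of digits long):

```python
# Toy RSA with deliberately tiny primes -- insecure, for illustration only.
p, q = 61, 53
n = p * q                # public modulus (3233); its prime factors are the secret
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, chosen coprime to phi
d = pow(e, -1, phi)      # private exponent: (e * d) % phi == 1  (Python 3.8+)

def encrypt(m):
    # Anyone who knows the public pair (e, n) can encrypt.
    return pow(m, e, n)

def decrypt(c):
    # Only the holder of d, derived from the prime factors, can decrypt.
    return pow(c, d, n)

print(decrypt(encrypt(65)))  # -> 65
```

Factoring n back into p and q is exactly the hard problem the text describes; with primes this small it is trivial, which is why real keys must be enormously larger.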
While the length of the public key can always be increased, messages encrypted with today's ``short'' keys may be decryptable with tomorrow's new algorithms and computers.

Confidence in the encryption program

A computer's cryptography program is one of the most rewarding targets for a Trojan horse. The very nature of a cryptography program is that it requires absolute faith on the part of the user that the program is performing exactly the function which it claims to, but there are a number of very damaging ways in which a cryptography program can be modified without notice: The program could make a plaintext copy of everything it encrypts or decrypts without the user's knowledge. This copy could be hidden for later retrieval by the cracker. The copy could even be encrypted with a different key. The program could keep a log of every time it encrypted or decrypted a file. Included in this log could be the time, user, filename, key and length of the encrypted or decrypted file. The program might use an encryption algorithm which has a hidden ``back door'' -- that is, a secret method to decrypt any cyphertext message with a second key. The program might have a ``time bomb'' in it so that, after a particular date, instead of decrypting cyphertext it prints a ransom note. The user would only be able to decrypt his file after obtaining a password from the author of the program, perhaps at a very high cost. (This is a form of computer extortion which will be further explored under ``subversion.'')

Microcomputer Security Issues

Beware of public domain software! Although there are many excellent programs in the public domain, there is an increasing number of malicious Trojan Horses and computer viruses. Unless the source code of the program is carefully examined by a competent programmer, it is nearly impossible to test a public domain program for hidden and malicious functions.
Even ``trying out'' a program once may cause significant data loss -- especially if the microcomputer is equipped with a hard disk. Although the vast majority of public domain software is very useful and relatively reliable, the risks faced by the user are considerable and the trust required in the software absolute. Hobbyists can afford to risk their data for the gains of using some public domain software; businesses and law practices cannot be so careless.

The user of a microcomputer must back up his own files, not only to protect against accidental deletion or loss of data but also to protect against theft of equipment. Although no issue in microcomputer security is stressed more than backups, many users do not perform this routine chore. More than with any other computer system, physical security is vitally important for a microcomputer because of the ease of stealing one and the ease with which it can be resold. (It is rather difficult for a burglar to sell a stolen mainframe computer.) Anti-theft devices must be installed on equipment containing hard disks, not only for the value of the equipment but also for the value of the data stored therein. Do not trust the microcomputer or its operating system to guard confidential documents stored on a hard disk. If a spy has physical access to the computer, he can physically remove the hard disk and read its contents on another machine. File encryption is another defense against this sort of data theft, but the installed encryption program should be regularly checked for signs of tampering (for example, the modification date or the size of the file having changed).

Managing a secure computer

Auditing

Most security-conscious operating systems provide some sort of auditing system to record events such as invalid logon attempts or attempted file transfer of classified files. Typically, each log entry consists of a timestamp and a description of the event.
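Such tamper checks are easy to automate. A minimal sketch that records a fingerprint of a program file and later reports whether it has changed:

```python
import hashlib, os

def fingerprint(path):
    # Record the file's size, modification time and a checksum of its contents.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    info = os.stat(path)
    return (info.st_size, int(info.st_mtime), digest)

def tampered(path, recorded):
    # Any change to the program's contents or attributes is suspicious.
    return fingerprint(path) != recorded
```

The recorded fingerprints themselves must be kept somewhere the spy cannot reach (or recompute), otherwise he simply updates them after modifying the program.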
One of the responsibilities of site management is to read these ``security logs.'' Most operating systems keep records of the times that each user was logged on within the past year. A selective list of logons between 5pm and 8am can help detect unauthorized ``after-hours'' use of accounts by crackers, especially on computers equipped with modems. Some operating systems will notify a user when he logs in of the last time he logged in. Other systems will notify a user of every unsuccessful login attempt made on his account. Presented with this information, it is very easy to discover when crackers are attempting (or have succeeded) to break into the system. Good auditing systems include the option to set software alarms which will notify management of suspicious activity. For example, an alarm might be sent to notify management whenever someone logs into the user administration account, or the first time that an account is accessed over a dialup. The security administrator could then verify that the account was used by those authorized to use it and not by crackers.

Alarms

Software alarms scan for suspicious activity and alert management when such activity is detected. These programs can be implemented as daily tasks which scan the security logs and isolate questionable occurrences. Software alarms can be useful on insecure computers, such as desktop PCs, for alerting management to security violations which the operating system cannot prevent. For example, it is possible to write a very simple program on a PC that would notify management whenever a system program, such as a text editor, spread sheet or utility program, is modified or replaced. Such a program could detect a virus infection and could be used to isolate and destroy the virus before it became widespread.
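A daily log-scanning task of this kind need not be elaborate. A minimal sketch, assuming an invented ``HH:MM user event'' log format, that isolates after-hours logons:

```python
def after_hours_logons(log_lines):
    # Each log line is assumed to look like: "HH:MM user event".
    suspicious = []
    for line in log_lines:
        time, user, event = line.split(maxsplit=2)
        hour = int(time.split(":")[0])
        # Flag logons between 5pm and 8am.
        if event == "LOGON" and (hour >= 17 or hour < 8):
            suspicious.append((time, user))
    return suspicious
```

A real alarm would run this against the previous day's log and mail the result to the security administrator.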
On larger computers, alarms can notify management of repeated failed logon attempts (indicating that a cracker is attempting to break into the computer) or repeated attempts by one user to read another user's files. It is important for management to test alarms regularly and not to become dependent on alarms to detect attempted violations of security; the first action of an experienced cracker after breaking into a system will be to disable or reset the software alarms so that the break-in is hidden.

Policy and Protocol

The most secure protocol is useless if people do not follow it. A good protocol is one that is easy, if not automatic, to follow. For example, many university computer centers have adopted a policy that computer passwords are not given out over the telephone under any circumstances. Such a policy, if enforced, eliminates the possibility of a cracker telephoning management and, posing as a staff member, obtaining a user's password. Other policies include requiring users to change their passwords on a regular basis. Some computer systems allow policies such as this to be implemented automatically: after the same password has been used for a given period of time, the computer requires that the user change the password the next time he logs in.

Subversion

Most incidents of data loss are due to employees rather than external agents. Many employees, by virtue of their position, are presented with ample opportunity to steal or corrupt data, use computer resources for personal gain or the benefit of a third party, and generally wreak havoc. While computers make these actions easier, they are merely reflections of concerns already present in the workplace. Traditional methods of employee screening, coupled with sophisticated software alarms and backup systems, can both minimize the impact of subversion and aid in its early detection.

Cracking

This section is intended to give some idea of how a cracker breaks into a computer.
The intent is that, by giving a demonstration of how a cracker breaks into a computer system, the reader will gain insight into ways of preventing similar actions. The target system is actually irrelevant; the concepts presented apply to many on the market. Perhaps as the result of a random telephone search, the cracker has found the telephone number of a modem connected to a timesharing computer. Upon calling the computer's modem, the cracker is prompted to Logon. Different operating systems have different ways of logging in, and perhaps the cracker is not familiar with this one. (The cracker's typing is lowercase for clarity.) He starts:

hello
RESTART

The computer prints ``RESTART,'' telling the cracker that ``hello'' is not the proper way to logon to the computer system. Some computer systems provide extensive help facilities in order to assist novice users in logging in, which are just as helpful to crackers as they are to novices. From trial and error, the cracker determines the proper way to logon to the system:

help
RESTART
user
RESTART
login
DMKLOG020E USERID MISSING OR INVALID

The next task for the cracker is to determine a valid username and password combination. One way to do this is to try a lot of them.
It is not very difficult to find a valid username from a list of common first and last names:

login david
DMKLOG053E DAVID NOT IN CP DIRECTORY
login sally
DMKLOG053E SALLY NOT IN CP DIRECTORY
login cohen
LOGIN FORMAT: LOGIN USERNAME,PASSWORD
RESTART

Once a valid username is found, the cracker tries passwords until he finds one that works:

login cohen,david
DMKLOG050E PASSWORD INCORRECT - REINITIATE LOGON PROCEDURE
login cohen,charles
DMKLOG050E PASSWORD INCORRECT - REINITIATE LOGON PROCEDURE
login cohen,sally
LOGMSG - 15:40:23 +03 TUESDAY 06/24/86
WICC CMS 314 05/29
PRESS ENTER=>

The basic flaw in this operating system is that it tells the cracker the difference between a (valid username, invalid password) pair and an (invalid username, invalid password) pair. For the invalid usernames, the system responded with the ``NOT IN CP DIRECTORY'' message, while for valid usernames the system asked for the user's password. Some systems ask for a password regardless of whether or not the username provided by the cracker is valid. This feature enhances security dramatically, since the cracker never knows if a username he tries is valid or not. Suppose a cracker has to try an average of 20,000 names or words to find a correct username or password. Mathematically, on a system which does not inform the cracker when a username is correct, the cracker may have to try upwards of 20,000 x 20,000 = 400,000,000 username/password combinations. On a system which tells the cracker when he has found a valid username, the search is reduced to a total of 20,000 + 20,000 = 40,000 tries. The difference is basically whether the username and password can be guessed sequentially or must be guessed together. All it takes is patience to crack a system. One way to speed the process is to automate the username and password search: essentially, the cracker programs his computer to try repeatedly to log onto the target system.
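The safer behavior, a single response for every failure, is straightforward to implement. A sketch with a hypothetical user table (a real system would store only encrypted passwords):

```python
import hmac

# Hypothetical user table for illustration.
USERS = {"cohen": "sally"}

def login(username, password):
    # Compare against *something* even for unknown usernames, and return
    # the identical message either way, so the response never reveals
    # whether the username exists.
    expected = USERS.get(username, "")
    ok = username in USERS and hmac.compare_digest(expected, password)
    return "LOGON OK" if ok else "PASSWORD INCORRECT"
```

Because a bad username and a bad password produce the same reply, the cracker is forced back into the 20,000 x 20,000 search.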
To find a username, the cracker can instruct his computer to cycle through a list of a few thousand first and last names. Once a username is found, the cracker programs his computer to search for passwords in a similar fashion. The cracker may also have a dictionary of the 30,000 most common English words, and try each of these as a password. Since people tend to pick first names, single characters and common words as passwords, most passwords can be broken within a few thousand tries. If the cracker's computer can test one password every 5 seconds, ten thousand passwords can be tested in under 15 hours. (Hopefully by this time a software alarm would have disabled logins from the computer's modem, but few operating systems contain such provisions.)

Finding one valid username/password combination on a system does not place the entire computer at the mercy of the cracker (unless the account he discovers is privileged), but it does give him a very strong basis from which to explore and then crack the rest of the accounts on the system. Some computers are more resistant to this sort of exploration than others. If the cracker gives up trying to penetrate the login server of the host, there are still many other ways to crack the system. He might telephone the computer operator and, pretending to be a member of the computer center's staff, ask for the operator's password. (Crackers have successfully used this method to break into numerous computer systems around the country.)

Some crackers use their computers to search for other computers. A cracker will program his computer to randomly dial telephone numbers searching for AA modems. When the cracker's computer finds an answering modem, the phone number is recorded for later cracking. Automatically dialing modems can also be used to crack into long distance services such as MCI and Sprint by trying successive account numbers.
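The timing estimate above checks out:

```python
# 10,000 password guesses at one guess every 5 seconds:
seconds = 10_000 * 5
hours = seconds / 3600
print(round(hours, 1))  # -> 13.9, i.e. under 15 hours
```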
Although it is theoretically possible to track a cracker back through his call, such action requires the assistance of the telephone utility. Utilities will not trace telephone calls unless ordered to do so by police, who have, to date, been very hesitant about ordering such action. During a recent massive computer break-in at Stanford University, one research staffer communicated with a cracker over the computer for two hours while another staffer in the lab contacted police to arrange a trace; the police refused.

Conclusion

Computer security is a topic too large to cover fully in any publication, least of all in as short an introduction as this. In order to evaluate a security system it is necessary to think like a cracker or a subverter. After that, most other details follow.

Glossary

Backup (n.): A copy of information stored in a computer, to be used in the event that the original is destroyed.
Back up (v.): To make a backup.
break (v.): To gain access to computers or information thought to be secure. To break a cypher is to be able to decrypt any message encrypted with it. To break a computer is to log on to it without authorization.
bit: One unit of memory storage. Either a ``0'' or a ``1.''
client: With reference to a computer network, the computer or program which requests data or a service.
Confidence: The level of trust which can be placed in a computer system or program to perform the function which it is designed to do. Alternatively, the amount of protection offered by such a system.
Cracker: A person who breaks into computers for fun.
Encryption: The process of taking information and making it unreadable to those who are not in possession of the decrypting key.
MODEM: Modulator/Demodulator. A device used for sending computer information over a telephone line.
Public key: A cryptography system which uses one key to encrypt a message and a second key to decrypt it.
In a perfect public-key system it is not possible to decrypt a message without the second key.
RSA: Rivest, Shamir and Adleman. A popular public-key cryptography system.
Trojan Horse: A program which claims to be performing one function while actually performing another.
Sanitizing: Ensuring that confidential data has been removed from computer media before the media is disposed of.
security logs: A record of all events of a computer system pertinent to security.
Security through obscurity: Security that arises from ignorance of operating procedures rather than first principles.
server: With respect to a network, the computer or program which responds to requests from clients.
smart card: A credit-card sized computer, used for user authentication.
subversion: Attacks on a computer system's security from trusted individuals within the organization.

References and Credits

For more information on computer security, see:

The Codebreakers, by David Kahn, 1973. Available in abridged (by author) paperback. A Signet Book from The New American Library, Inc., Bergenfield, NJ 07621. ISBN 0-451-08967-7.
The Hut Six Story, by Gordon Welchman.
Personal Computer Security Considerations, by the National Computer Security Center, NCSC-WA-002-85, December 1985, from the Government Printing Office.
Special Publication 500-120 - Security of Personal Computer Systems: A Management Guide, January 1985, from the National Bureau of Standards.

Some of the information presented in this article is the result of discussions on the ARPANET network ``Security'' mailing list and the Usenet network ``net.crypt'' newsgroup. Multics is a trademark of Honeywell. UNIX is a trademark of Bell Laboratories. VM/CMS is a trademark of International Business Machines (IBM).
-----------[000024][next][prev][last][first]---------------------------------------------------- From: "Paul R. Grupp" <GRUPP@AI.AI.MIT.EDU> Subject: computer viruses
Security experts are afraid that saboteurs could infect computers with a "virus" that would remain latent for months or even years, and then cause chaos.

Attack of the Computer Virus
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By Lee Dembart

Germ warfare, the deliberate release of deadly bacteria or viruses, is a practice so abhorrent that it has long been outlawed by international treaty. Yet computer scientists are confronting the possibility that something akin to germ warfare could be used to disable their largest machines. In a civilization ever more dependent on computers, the results could be disastrous - the sudden shutdown of air traffic control systems, financial networks, or factories, for example, or the wholesale destruction of government or business records. The warning has been raised by a University of Southern California researcher who first described the problem in September, before two conferences on computer security. Research by graduate student Fred Cohen, 28, shows that it is possible to write a type of computer program, whimsically called a virus, that can infiltrate and attack a computer system in much the same way a real virus infects a human being. Slipped into a computer by some clever saboteur, the virus would spread throughout the system while remaining hidden from its operators. Then, at some time months or years later, the virus would emerge without warning to cripple or shut down any infected machine. The possibility has computer security experts alarmed because, as Cohen warns, the programming necessary to create the simplest forms of computer virus is not particularly difficult. "Viral attacks appear to be easy to develop in a short time," he told a conference co-sponsored by the National Bureau of Standards and the Department of Defense. "[They] can be designed to leave few if any traces in most current systems, are effective against modern security policies, and require only minimal expertise to implement."
Computer viruses are aptly named; they share several insidious features with biological viruses. Real viruses burrow into living cells and take over their hosts' machinery to make multiple copies of themselves. These copies escape to infect other cells. Usually infected cells die. A computer virus is a tiny computer program that "infects" other programs in much the same way. The virus occupies only a few hundred bytes of memory; a typical mainframe program, by contrast, takes up hundreds of thousands. Thus, when the virus is inserted into an ordinary program, its presence goes unnoticed by computer operators or technicians. Then, each time the "host" program runs, the computer automatically executes the instructions of the virus - just as if they were part of the main program. A typical virus might contain the following instructions: "First, suspend execution of the host program temporarily. Next, search the computer's memory for other likely host programs that have not already been infected. If one is found, insert a copy of these instructions into it. Finally, return control of the computer to the host program." The entire sequence of steps takes half a second or less to complete, fast enough so that no one will be aware that it has run. And each newly infected host program helps spread the contagion each time it runs, so that eventually every program in the machine is contaminated. The virus continues to spread indefinitely, even infecting other computers whenever a contaminated program is transmitted to them. Then, on a particular date or when certain pre-set conditions are met, the virus and all its clones go on the attack. After that, each time an infected program is run, the virus disrupts the computer's operations by deleting files, scrambling the memory, turning off the power, or making other mischief. The saboteur need not be around to give the signal to attack.
A disgruntled employee who was afraid of getting fired, for example, might plot his revenge in advance by adding an instruction to his virus that caused it to remain dormant only so long as his personal password was listed in the system. Then, says Cohen, "as soon as he was fired and the password was removed, nothing would work any more." The fact that the virus remains hidden at first is what makes it so dangerous. "Suppose your virus attacked by deleting files in the system," Cohen says. "If it started doing that right away, then as soon as your files got infected they would start to disappear and you'd say 'Hey, something's wrong here.' You'd probably be able to identify whoever did it." To avoid early detection of the virus, a clever saboteur might add instructions to the virus program that would cause it to check the date each time it ran, and attack only if the date was identical to - or later than - some date months or years in the future. "Then," says Cohen, "one day, everything would stop. Even if they tried to replace the infected programs with programs that had been stored on back-up tapes, the back-up copies wouldn't work either - provided the copies were made after the system was infected."

The idea of virus-like programs has been around since at least 1975, when the science fiction writer John Brunner included one in his novel `The Shockwave Rider'. Brunner's "tapeworm" program ran loose through the computer network, gobbling up computer memory in order to duplicate itself. "It can't be killed," one character in the book exclaims in desperation. "It's indefinitely self-perpetuating as long as the network exists."

In 1980, John Shoch at the Xerox Palo Alto research center devised a real-life program that did somewhat the same thing. Shoch's creation, called a worm, wriggled through a large computer system looking for machines that were not being used and harnessing them to help solve a large problem. It could take over an entire system.
More recently, computer scientists have amused themselves with a gladiatorial game, called Core War, that resembles a controlled viral attack. The players put two programs in the same computer, each designed to chase the other around the memory, trying to infect and kill its rival.

Inspired by earlier efforts like these, Cohen took a security course last year and then set out to test whether viruses could actually do harm to a computer system. He got permission to try his virus at USC on a VAX computer with a Unix operating system, a combination used by many universities and companies. (An operating system is the most basic level of programming in a computer; all other programs use the operating system to accomplish basic tasks like retrieving information from memory or sending it to a screen.) In five trial runs, the virus never took more than an hour to penetrate the entire system. The shortest time to full infection was five minutes; the average was half an hour. In fact, the trial was so successful that university officials refused to allow Cohen to perform further experiments. Cohen understands their caution but considers it shortsighted. "They'd rather be paranoid than progressive," he says. "They believe in security through obscurity."

Cohen next got a chance to try out his viruses on a privately owned Univac 1108. (The operators have asked that the company not be identified.) This computer system had an operating system designed for military security; it was supposed to allow people with low-level security clearance to share a computer with people with high-level clearance without leakage of data. But the restrictions on data flow did not prevent Cohen's virus from spreading throughout the system - even though he infected only a single low-security-level user. He proved that military computers, too, may be vulnerable, despite their safeguards.
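The Core War game mentioned above can be caricatured in a few lines. This is only a toy - the real game uses a special assembly language (Redcode) and a simulated core, whereas here each "warrior" simply bombs random memory cells until it hits its rival (all sizes and positions are invented):

```python
import random

random.seed(7)                      # make the toy battle reproducible

CORE_SIZE = 32                      # cells of shared "core" memory
positions = {"A": 3, "B": 20}       # each warrior occupies one cell
alive = {"A", "B"}

# Each turn, every living warrior "bombs" one random cell; overwriting
# the rival's cell kills it - a crude stand-in for Core War's rule that
# a program dies when its instructions are overwritten.
turns = 0
while len(alive) > 1:
    turns += 1
    for w in list(alive):
        if w not in alive:          # killed earlier this same turn
            continue
        target = random.randrange(CORE_SIZE)
        for rival in alive - {w}:
            if positions[rival] == target:
                alive.discard(rival)

winner = alive.pop()
print("winner after", turns, "turns:", winner)
```

Even this crude version shows why the article calls it a "controlled" viral attack: the combat is confined to a fixed arena and ends when one program dies, unlike a virus loose in a real system.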
The problem of viral spread is compounded by the fact that computer users often swap programs with each other, either by shipping them on tape or disk or by sending them over a telephone line or through a computer network. Thus, an infection that originates in one computer could easily spread to others over time - a hazard that may be particularly severe for the banking industry, where information is constantly being exchanged by wire. Says Cohen, "The danger is that somebody will write viruses that are bad enough to get around the financial institutions and stop their computers from working."

Many security professionals also find this prospect frightening. Says Jerry Lobel, manager of computer security at Honeywell Information Systems in Phoenix, "Fred came up with one of the more devious kinds of problems against which we have very few defenses at present." Lobel, who organized a recent security conference sponsored by the International Federation for Information Processing - at which Cohen also delivered a paper - cites other potential targets for attack: "If it were an air traffic control system or a patient monitoring system in a hospital, it would be a disaster."

Marvin Schaefer, chief scientist at the Pentagon's computer security center, says the military has been concerned about penetration by virus-like programs for years. Defense planners have protected some top-secret computers by isolating them, just as a doctor might isolate a patient to keep him from catching cold. The military's most secret computers are often kept in electronically shielded rooms and connected to each other, when necessary, by wires that run through pipes containing gas under pressure. Should anyone try to penetrate the pipes in order to tap into the wires, the drop in gas pressure would immediately give him away. But, Schaefer admits, "in systems that don't have good access controls, there really is no way to contain a virus. It's quite possible for an attack to take over a machine."
Honeywell's Lobel strongly believes that neither Cohen nor any other responsible expert should even open a public discussion of computer viruses. "It only takes a halfway decent programmer about half a day of thinking to figure out how to do it," Lobel says. "If you tell enough people about it, there's going to be one crazy enough out there who's going to try."

Cohen disagrees, insisting that it is more dangerous `not' to discuss and study computer viruses. "The point of these experiments," he says, "is that if I can figure out how to do it, somebody else can too. It's better to have somebody friendly do the experiment, tell you how bad it is, show you how it works and help you counteract it, than to have somebody vicious come along and do it." If you wait for the bad guys to create a virus first, Cohen says, then by the time you find out about it, it will be too late.
END OF DOCUMENT
ISSN 1742-948X 01 (Online) | 2005/03/01 | Copyright 2002-2008 securitydigest.org. All rights reserved. |