
Multics

After the implementation of the Compatible Time-Sharing System (CTSS) in 1964, Project MAC, an ARPA-sponsored MIT research program, began to plan the development of a second-generation operating system.(1) General Electric and Bell Labs both joined MIT in the project in 1964, with GE providing the Project MAC research computer and Bell Labs lending its computing expertise. (Bell was barred from pursuing a commercial computer project of its own by a 1956 antitrust consent decree.) The proposed operating system would be more than a successor to CTSS, which could support only up to 30 remote users.(2) Multics (MULTiplexed Information and Computing Service) was a "comprehensive, general-purpose programming system."(3) It was intended as a research project to investigate the theoretical and practical requirements of a large-scale, multi-user computer system that would scale as demands upon the system grew and changed.(4) The system was designed to run continuously and reliably while performing a wide variety of tasks, from real-time user interaction with any number of as-yet undeveloped applications to autonomous batch-processing operations. At the time of its development, Multics was conceived of as a computer "utility," akin to a public electricity utility; the wide potential range of users shaped its requirements for reliability, versatility, and security. These requirements influenced several design choices, including the choice of PL/I as the programming language in which Multics was developed.(5)

Only a small portion of Multics was written in assembly language; most of the operating system was written in the high-level PL/I programming language. This unusual decision had a lasting impact on Multics security. PL/I character strings are either of fixed length or of variable length with a declared maximum, so the maximum length is always known to the compiler. The C language, by contrast, places no declared bound on string length: determining the length of a C string requires scanning for a terminating null byte. An operating system written in PL/I can therefore allocate exactly the amount of memory its compiled code requires. This greatly reduces the likelihood of buffer overflows, which occur when a program writes more data into a buffer than the memory allocated for it can hold. Overflowing code can crash the system as it runs past its allocated memory space, and malicious code can exploit the weakness to reach resources beyond what the operating system has permitted, a common point of attack in operating systems written in C. The use of PL/I made buffer overflows extremely unlikely in Multics, improving system security as a consequence.(6)
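
To make the contrast concrete, here is a minimal C sketch, purely illustrative and not Multics or PL/I code: an unbounded C-style copy can overrun its buffer, while a string that carries a declared maximum length, in the spirit of a PL/I CHARACTER(16) VARYING declaration, can always be copied within bounds.

```c
#include <stdio.h>
#include <string.h>

/* Rough analogue of PL/I's CHARACTER(16) VARYING: the maximum length
 * is part of the type, so every operation on the string can be bounded. */
struct varying16 {
    size_t len;        /* current length, always <= 16 */
    char   data[16];   /* fixed maximum, known at compile time */
};

static void varying16_set(struct varying16 *dst, const char *src)
{
    size_t n = strlen(src);
    if (n > sizeof dst->data)      /* truncate rather than overflow */
        n = sizeof dst->data;
    memcpy(dst->data, src, n);
    dst->len = n;
}

int main(void)
{
    const char *input = "a string comfortably longer than sixteen bytes";

    /* char unsafe[16];
     * strcpy(unsafe, input);   <- classic C buffer overflow: the copy
     * runs until input's terminating null byte, far past 16 bytes. */

    struct varying16 safe;
    varying16_set(&safe, input);   /* bounded copy: cannot overflow */
    printf("stored %zu bytes: %.*s\n", safe.len, (int)safe.len, safe.data);
    return 0;
}
```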

Protection rings were another important software security control. Multics featured eight concentric rings of protection, with software in ring "7" having the fewest access privileges on the system and ring "0," containing the supervisor, having the most. The Multics supervisor was the component of the operating system that allocated system resources to user processes, performed searches of the system's secondary storage, and managed other activities of the computer. Multics implemented protection rings in software because the GE 645 CPU lacked a hardware implementation of them; later commercial versions of Multics would feature hardware-based protection rings.(7)
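
The following is a minimal sketch of ring-style mediation, with invented names and a deliberately simplified policy rather than the actual Multics access brackets: lower-numbered rings are more privileged, and a process may touch a segment only if its ring number falls within the segment's limit for that kind of access.

```c
#include <stdbool.h>
#include <stdio.h>

struct segment {
    const char *name;
    int write_limit;   /* highest ring allowed to write this segment */
    int read_limit;    /* highest ring allowed to read this segment */
};

/* A process in ring r may access a segment only if r does not exceed
 * the segment's limit for that access mode (ring 0 = most privileged). */
static bool can_read(int ring, const struct segment *s)
{
    return ring <= s->read_limit;
}

static bool can_write(int ring, const struct segment *s)
{
    return ring <= s->write_limit;
}

int main(void)
{
    struct segment supervisor_tables = { "supervisor tables", 0, 0 };
    struct segment user_program      = { "user program",      4, 7 };

    int user_ring = 4;   /* an ordinary user ring in this sketch */
    printf("ring %d read %s: %s\n", user_ring, supervisor_tables.name,
           can_read(user_ring, &supervisor_tables) ? "allowed" : "denied");
    printf("ring %d write %s: %s\n", user_ring, user_program.name,
           can_write(user_ring, &user_program) ? "allowed" : "denied");
    return 0;
}
```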

The Multics team developed hardware features to work in conjunction with software, further enhancing Multics security. While PL/I made buffer overflows unlikely, the Multics hardware further reduced the consequences of such an attack. Most buffer overflow attacks were a threat because, once outside its allocated memory space, malicious code could access data beyond its allocated permissions. This posed a particular threat to multi-user systems, which might, as in government installations, contain data at multiple levels of security clearance. Three important hardware features in Multics -- hardware execute-permission bits, segmented virtual addresses, and stacks that grew in a positive direction on Multics processors -- meant that a successful buffer overflow exploit would be unable to access hardware resources beyond what the operating system allocated. Malicious code would also be unable to determine where other system resources were stored, and could overflow only into unused stack frames, so it could not overwrite its own return pointer.(8) These features made it difficult for malicious code to alter its own permissions to grant itself further access. As part of its contract to provide the project computer, General Electric modified the GE 635 to meet Multics hardware specifications, resulting in the GE 645.(9)
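
The stack-direction point can be illustrated with a toy simulation, an assumption-laden sketch rather than real machine behavior: the stack is modeled as a plain array, and the only question is whether an overflowing buffer runs toward the caller's saved return pointer or away from it.

```c
#include <stdio.h>
#include <string.h>

#define STACK_SIZE 64

int main(void)
{
    unsigned char stack[STACK_SIZE];

    /* Downward-growing stack (most modern machines): the caller's frame,
     * including its saved return pointer, sits at HIGHER addresses than
     * the current frame's buffer, so writing past the buffer clobbers it. */
    memset(stack, 0, sizeof stack);
    int ret_slot_down = 40;             /* caller's saved return pointer */
    int buf_down      = 32;             /* 8-byte buffer just below it */
    memset(stack + buf_down, 'A', 16);  /* overflow by 8 bytes */
    printf("downward stack: return slot %s\n",
           stack[ret_slot_down] == 'A' ? "CLOBBERED" : "intact");

    /* Upward-growing stack (Multics processors): the caller's frame sits
     * at LOWER addresses, so the same overflow spills into not-yet-used
     * stack space above the current frame instead. */
    memset(stack, 0, sizeof stack);
    int ret_slot_up = 24;               /* caller state below current frame */
    int buf_up      = 32;               /* buffer grows toward high addresses */
    memset(stack + buf_up, 'A', 16);    /* same 8-byte overflow */
    printf("upward stack:   return slot %s\n",
           stack[ret_slot_up] == 'A' ? "CLOBBERED" : "intact");
    return 0;
}
```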

Multics security features also included enciphered passwords, a login audit trail, and software maintenance procedures. Passwords in Multics were never stored in clear text. Instead, when a user entered a login password, the system enciphered it and compared the result to the enciphered password stored on file. This prevented un-enciphered user passwords from being revealed in the event of a system dump. In addition, a login audit trail logged the time, date, and terminal of each login attempt, and notified the user of the number of incorrect password attempts on their account since the last successful login. Finally, software maintenance procedures, such as verifying new software before adding it to the system source and object libraries, worked to detect and prevent unauthorized modifications to the "ring 0" software, which contained the supervisor and primary system files.(10)
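
The compare-in-cipher scheme can be sketched in a few lines of C. The one-way function here (FNV-1a) is purely illustrative, chosen for brevity; it is not the cipher Multics used and would be far too weak for a real system. What matters is the shape: only the enciphered form is ever stored or compared.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative one-way function (FNV-1a); NOT the Multics cipher. */
static uint64_t encipher(const char *s)
{
    uint64_t h = 1469598103934665603ULL;   /* FNV-1a offset basis */
    for (; *s; s++) {
        h ^= (unsigned char)*s;
        h *= 1099511628211ULL;             /* FNV-1a prime */
    }
    return h;
}

/* The password file holds only enciphered values, so a system dump
 * never exposes a clear-text password. */
static bool check_password(uint64_t stored, const char *attempt)
{
    return encipher(attempt) == stored;
}

int main(void)
{
    uint64_t stored = encipher("correct horse");   /* set at enrollment */

    printf("'correct horse' -> %s\n",
           check_password(stored, "correct horse") ? "accepted" : "rejected");
    printf("'wrong guess'   -> %s\n",
           check_password(stored, "wrong guess") ? "accepted" : "rejected");
    return 0;
}
```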

Projected to require two and a half years to complete, the Multics project progressed slowly, largely due to the scale of the operating system and the long wait for Digitek, an outside contractor to Bell Labs, to complete the PL/I compiler; Multics was still confined to the lab as of 1968. While waiting for the compiler, which remained under development for 18 months, the team documented the system design in the MSPM (Multics System Programmers' Manual), which grew to approximately 3,000 pages and outlined numerous features that were never implemented in the final system. To provide a relatively concise summary for programmers wishing to understand the overall structure of Multics, Elliott I. Organick, of the University of Utah, wrote The Multics System: An Examination of Its Structure. When David Bell and Len LaPadula approached the Multics developers about Multics design features, they were given a copy of Organick's book; it served as their first exposure to operating system theory. In subsequent years, David Bell's first request when starting a new project was, jokingly, for the "Organick" of the new system.(11)

Six months before Multics entered service at MIT in 1969, Bell Labs left the project after William Baker, the vice president for research, decided that Multics development was moving too slowly and that continued Bell involvement was a drain on resources. This decision reflected a growing impression of Multics at Bell, with Sam Morgan, the director of computing science research, saying: "It was becoming clear to people that Multics was an attempt to climb too many trees at once."(12) Two Bell Labs Multics project members, Dennis Ritchie and Ken Thompson, would go on to develop their own Multics-inspired operating system: UNIX was of a modest scale compared to Multics and was meant to operate on lower-powered hardware.(13) In 1970, Honeywell purchased General Electric's computer division, including its stake in the Multics project.(14) After entering service at MIT in 1969, Multics was installed at the Rome Air Development Center at Griffiss Air Force Base in Rome, New York, in 1970. Honeywell began to offer Multics to academic, industrial (such as General Motors and Ford), and military customers shortly thereafter, growing to about 25 installations (an exact number has not been found) by the time Multics entered commercial distribution on the Honeywell 6180 in 1973. Each Multics installation cost approximately $7 million and consisted of a complete hardware and software bundle; MIT was the first commercial customer.(15)

United States military agencies first explored security in multi-user computers in 1967, when ARPA (Advanced Research Projects Agency) commissioned a task force "to study and recommend appropriate computer security safeguards that would protect classified information in multi-access, resource-sharing computer systems."(16) The resulting 1970 RAND report, Security Controls for Computer Systems, did not have the intended impact, largely because the report's Confidential classification limited its circulation to defense agencies and defense contractors.(17) The later two-volume Anderson report, Computer Security Technology Planning Study, would also criticize the RAND report, stating that it "may have had a negative effect due to its specification of necessary, but not sufficient criteria" for computer security.(18) In 1972, Roger Schell, of the U.S. Air Force Electronic Systems Division and an MIT alumnus who had worked intimately with Multics security, commissioned a study aimed at finding solutions to the computer security issues that the 1970 RAND report had outlined but not resolved. Steve Lipner, of the MITRE Corporation and a future analyst of Multics security, would later state that Roger Schell was instrumental in recommending Multics as part of the Air Force search for a secure computer system.(19)

The Anderson report, named after consultant James P. Anderson, author of the final presentation, was published in October 1972. Anderson's previous work for the Air Force included successfully breaking into the Air Force's existing Honeywell 635 computer and GECOS III operating system as part of a test of remote access security; this was pioneering work in penetration attacks and part of the Air Force's impetus to search for a secure system.(20) Focusing on the inadequacy of modifying existing systems to add security, the Anderson report insisted that a secure computer system must be built from the ground up with security in mind, citing systems like the IBM S/360 as unsuitable starting points for a secure computing platform because security had not been integrated into their initial development. Among the report's recommended avenues of further research was a concept Roger Schell and Harvard mathematician John Goodenough called a "security kernel": a small operating system that negotiated all security-related requests between software and the system hardware. A security kernel simplified the process of implementing, monitoring, and updating security policies, because it was independent of the primary operating system and far smaller in scale. The small scale of the kernel, compared to the larger operating system, theoretically made it possible to verify that the kernel worked exactly as intended.(21)
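
The kernel idea is easiest to see in miniature. The sketch below is a loose illustration under invented names and a toy policy, not the Anderson report's design: one small routine mediates every security-relevant request, records an audit trail as it does so, and is the only code that must be verified.

```c
#include <stdbool.h>
#include <stdio.h>

struct subject { const char *name; int clearance; };
struct object  { const char *name; int classification; };
enum access_mode { MODE_READ, MODE_WRITE };

/* Every security-relevant request funnels through this one routine.
 * Keeping it small and separate from the rest of the operating system
 * is what makes verification tractable; it is also a natural place
 * to write an audit trail. */
static bool kernel_mediate(const struct subject *s, const struct object *o,
                           enum access_mode m)
{
    bool ok = s->clearance >= o->classification;   /* toy policy only */
    fprintf(stderr, "audit: %s %s %s -> %s\n",
            s->name, m == MODE_READ ? "read" : "write", o->name,
            ok ? "granted" : "denied");
    return ok;
}

int main(void)
{
    struct subject user = { "analyst", 1 };
    struct object  file = { "mission-plan", 2 };

    if (!kernel_mediate(&user, &file, MODE_READ))
        printf("request refused by kernel\n");
    return 0;
}
```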

Prior to the Anderson report, the Air Force had employed an expensive strategy of purchasing multiple or redundant computer systems to avoid the security risk of placing classified and unclassified data on the same machine; the Anderson report concluded that this strategy cost the Air Force approximately $100,000,000 per year. The report recommended the deployment of a multi-user computer system capable of securely storing and distributing data at multiple levels of security clearance on the same machine. To save the time and cost of producing a new computer system, the report recommended utilizing "off-the-shelf" hardware and software. However, because multi-user computer systems handling multiple levels of data classification were not in wide use in the early 1970s, the panel behind the Anderson report searched for an existing commercial system suitable for development into a secure platform for the Air Force. After assessing multiple systems, including the IBM S/360 and 370 and the Univac 1100 series, the Anderson report recommended the Honeywell 6180 Multics system for further development, due to the security features included as part of the original Multics design.(22)

In 1972, the Air Force, in conjunction with the MITRE Corporation, began a vulnerability analysis of Multics to provide evidence to Honeywell, which was reluctant to undertake further development work, that Multics required security enhancements to meet the needs of the armed forces. Calling itself "Project ZARF," a "tiger team" of computer security experts, including Steven Lipner of MITRE and Roger Schell of the Air Force, looked for exploitable security holes in Multics. The tiger team found multiple weaknesses, including the ability to install trap doors: access points known to the intruder but unrecognizable to system users. The tiger team successfully introduced Trojan horses, packaging unauthorized code in a seemingly legitimate program. The team also cracked the Multics password file, using a flaw in the Honeywell PL/I compiler that made it possible to remove the password encryption.(23)

Although the tiger team report concluded that Multics was not, as it then existed, a secure system, the security features built into Multics made it "significantly more secure than other commercial systems," forming "a base from which a secure system can be developed."(24) The team recommended that Multics undergo a full security analysis to identify vulnerabilities and to form a strategy for developing a "certifiably secure version of Multics." As part of this development, the team report also recommended restructuring Multics around a security kernel, stating that "such restructuring is essential since malicious users cannot be ruled out in an open system."(25) The exploits garnered attention in the popular media: the New Yorker published a two-part article on computer crime that included the tiger team's success in hacking a multi-user operating system originally designed with security in mind.(26) The team's success raised alarm in the popular media over the state of computer security in general, if a security-oriented operating system like Multics was so clearly vulnerable.(27) As for Multics itself, Steve Lipner describes the tiger team's results as having "made Honeywell more cooperative in the Multics security enhancement" project that followed.(28)

The security enhancement project, a joint venture of MITRE, the Air Force, and Honeywell, resulted in "Design for Multics Security Enhancements," known colloquially as the Whitmore report. The 1973 Whitmore report focused on the areas that the tiger team had found most vulnerable, and outlined an access control policy, called the Access Isolation Mechanism, that would allow "two levels of classified data to be used simultaneously on a single Multics system."(29) Honeywell implemented the Whitmore report's recommended security enhancements to the Multics access control system.(30) However, even after Honeywell corrected all the flaws the tiger team discovered, the team was able to break into the system again by exploiting flaws in the new fixes. These results carried an important lesson for computer security: simple hacking and patching would not produce a secure system. The complexity of an operating system meant that not every flaw could ever be found, and the patches for known flaws could introduce flaws of their own. In addition, tiger-team testing depended on the skill and cunning of each team; that one team failed to find an exploit did not mean a more skilled, or more determined, attacker would not have better results. Secure computing, as Roger Schell would soon argue, required mathematical provability.(31)

Several significant security projects grew out of Project Guardian, the Air Force program to enhance the security of Multics. MITRE conducted research and development on at least two Multics projects that shaped the future of secure computing. One project produced the "Unified Exposition and Multics Interpretation" report of 1975 (published in 1976), which integrated the Bell-LaPadula mathematical model of computer security into Multics. Multics was the first system restructured to incorporate the Bell-LaPadula model, one of the earliest and most formative mathematical models of computer security, and the integration process solidified David Bell and Len LaPadula's evolving model around a real-world system. This version of Multics would later become the standard for the NSA's Trusted Computer System Evaluation Criteria (TCSEC) "B2" classification, which Multics earned in 1985.(32)
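
For readers unfamiliar with the model, the two central Bell-LaPadula rules reduce to a pair of comparisons, shown below in a deliberately stripped-down sketch: the simple security property forbids reading up, and the *-property forbids writing down. The full model also covers categories and a discretionary access matrix, omitted here for brevity.

```c
#include <stdbool.h>
#include <stdio.h>

/* Simple security property ("no read up"): a subject may read an
 * object only at or below its own clearance level. */
static bool may_read(int subject_level, int object_level)
{
    return subject_level >= object_level;
}

/* *-property ("no write down"): a subject may write an object only
 * at or above its own level, so high data cannot leak downward. */
static bool may_write(int subject_level, int object_level)
{
    return subject_level <= object_level;
}

int main(void)
{
    enum { UNCLASSIFIED = 0, CONFIDENTIAL = 1, SECRET = 2 };
    int subject = CONFIDENTIAL;

    printf("read SECRET:        %s\n", may_read(subject, SECRET)        ? "allowed" : "denied");
    printf("read UNCLASSIFIED:  %s\n", may_read(subject, UNCLASSIFIED)  ? "allowed" : "denied");
    printf("write SECRET:       %s\n", may_write(subject, SECRET)       ? "allowed" : "denied");
    printf("write UNCLASSIFIED: %s\n", may_write(subject, UNCLASSIFIED) ? "allowed" : "denied");
    return 0;
}
```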

A second project, a joint effort of MITRE, the Air Force, MIT, and Honeywell, worked on a security kernel design for Multics. This effort involved determining the minimum amount of code necessary for a security kernel, and restructuring the operating system by removing from it the elements that became part of the kernel, all while ensuring that Multics still functioned as before. This was the first practical attempt at developing a self-protecting, high-assurance system. Despite its success, the Air Force Systems Command terminated Project Guardian in 1976, including the incomplete security kernel effort, on the direction of the General Accounting Office (GAO). While acknowledging Project Guardian's success, the GAO cited insufficient funding to continue the project, and stated that the World Wide Military Command and Control System (WWMCCS) had not endorsed or supported the Air Force multilevel computer security effort. The cancelation report noted that work already completed in the project was being put to use in other security efforts, such as the ARPA effort, in conjunction with the University of California, Berkeley, to develop a secure version of the Bell Labs UNIX operating system, which resulted in BSD Unix.(33)

Although canceled before its completion, the Multics security kernel project, like the Bell-LaPadula project, proved its worth well beyond Multics, and even beyond the Air Force. The Multics security kernel not only demonstrated the practical feasibility of the security kernel concept, but served as the basis for the TCSEC A1 classification requirements.(34) Despite dire warnings about the insecurity of the Air Force's computer systems at the end of the 1970s, including Roger Schell's imagined scenario of KGB agents joyfully exploiting American computer security weaknesses, the Multics security kernel project, and Project Guardian as a whole, remained canceled. The project cancelation report, not published until 1978, concluded that, although such a program was necessary for improving computer security in the armed forces, too much time had passed since Guardian's cancelation to reconvene the program; the project team had long since disbanded.(35) With Roger Schell serving as its deputy director, the Department of Defense Computer Security Center, founded in 1981, used Multics as its template when developing the TCSEC "Orange Book," first released in 1983, which established the basic assessment requirements for security controls built into an operating system.(36) A 2002 retrospective on the "Multics Security Evaluation" concluded that Multics, after its "hardening" by the Air Force, was more secure than most of the operating systems available twenty-five years later.(37)

Despite approximately 100 installation sites, with particular success in France and elsewhere in Europe during the 1980s, Honeywell canceled commercial Multics development in 1985. Multics customers unsuccessfully pressured Honeywell for further development, and several similarly unsuccessful efforts attempted to resurrect Multics on a new platform, such as the Intel 80386 architecture. The number of operational sites dwindled over the following fifteen years, with MIT's Multics service ceasing operations in 1988. In 1992, Bull, which had taken over the Honeywell computing division, released the final version of the Multics source code to MIT. The last remaining Multics site, operated by the Canadian Department of National Defence in Halifax, shut down in October 2000.(38)

CTSS

First demonstrated in 1961, the Compatible Time-Sharing System (CTSS) was one of the first time-sharing operating systems. Developed at MIT's Computation Center, CTSS operated at MIT from 1961 to 1973. Project MAC developed Multics as a successor to CTSS.(39)

Project MAC

Project MAC (Mathematics and Computation, among other expansions of the acronym) began in 1963 at the MIT Research Laboratory of Electronics under an ARPA grant, and would conduct research in operating systems, artificial intelligence, and the theory of computation. One such research project was the Multics operating system, which began in 1964.(40)

Additional Resources:

haigh.pdf (attached; login required): Thomas Haigh, "Multicians.org and the History of Operating Systems," Iterations (The Charles Babbage Institute, 2002).

-- Main.lewi0740 - 19 Sep 2012

Notes

1 : F. J. Corbato, et al., "Multics: The First Seven Years," Spring Joint Computer Conference (1972), 571. (Login required)

2, 4 : Gordon M. Brown, "Unix: An Oral History," 1. (Login required)

3 : F.J. Corbato and V.A. Vyssotsky, "Introduction and Overview of the Multics System," Fall Joint Computer Conference (1965). (Login required)

5 : F.J. Corbato and V.A. Vyssotsky, "Introduction and Overview of the Multics System" (Login required); Jeffrey R. Yost, "An Interview with Roger R. Schell, Ph.D., OH 405," Charles Babbage Institute, 2012, 37; Paul A. Karger and Roger R. Schell, "Thirty Years Later: Lessons from the Multics Security Evaluation," Computer Security Applications Conference (ACSAC, 2002), 1 (Login required); Richard L. Wexelblat, ed., History of Programming Languages (New York: Academic Press, 1981), 561.

6 : Paul A. Karger and Roger R. Schell, "Thirty Years Later: Lessons from the Multics Security Evaluation," Computer Security Applications Conference (ACSAC, 2002), 2 (Login required); George Radin, "The Early History and Characteristics of PL/I," 568-569, in Richard L. Wexelblat, ed., History of Programming Languages (New York: Academic Press, 1981), 551-598.

7 : Paul A. Karger and Roger R. Schell, "Multics Security Evaluation: Vulnerability Analysis" (Hanscom AFB, MA, 1974), 9-10 (Login required); Elliott I. Organick, The Multics System: An Examination of Its Structure (Cambridge, MA: MIT Press, 1972), 9, 132; J. Whitmore, et al., "Design for Multics Security Enhancements" (Cambridge, MA: Honeywell Information Systems, Inc., 1973), 21. (Login required)

8 : Paul A. Karger and Roger R. Schell, "Thirty Years Later: Lessons from the Multics Security Evaluation," Computer Security Applications Conference (ACSAC, 2002), 2. (Login required)

9 : Thomas Haigh, "Multicians.org and the History of Operating Systems," Iterations (The Charles Babbage Institute, 2002), 3; F.J. Corbato and V.A. Vyssotsky, "Introduction and Overview of the Multics System." (Login required)

10 : Paul A. Karger and Roger R. Schell, "Multics Security Evaluation: Vulnerability Analysis" (Hanscom AFB, MA, 1974), 15-16. (Login required)

11 : Thomas Haigh, "Multicians.org and the History of Operating Systems," Iterations (The Charles Babbage Institute, 2002), 3; R. A. Freiburghouse, "The Multics PL/I Compiler," Proceedings of the November 18-20, 1969, Fall Joint Computer Conference (ACM, 1969), 187 (Login required); <www.multicians.org/mspmtoc.html>; Elliott I. Organick, The Multics System: An Examination of Its Structure (Cambridge, MA: MIT Press, 1972), xiii-xv; Jeffrey R. Yost, "An Interview with David Bell, OH 411," Charles Babbage Institute, 2012, 33-34, 38.

12 : Gordon M. Brown, "Unix: An Oral History," 2. (Login required)

13 : Beginning as a two-person project, UNIX grew to become a major operating system in education, business, and government in the 1980s, as the UNIX design allowed users to add software features onto the base UNIX platform without impacting system reliability. Multiple versions of UNIX, including those from Berkeley and Sun Microsystems, grew out of the original software platform. The current Linux and Mac OS X operating systems developed out of UNIX-based systems.

14 : Thomas Haigh, "Multicians.org and the History of Operating Systems," Iterations (The Charles Babbage Institute, 2002), 3; Gordon M. Brown, "Unix: An Oral History," 3 (Login required); Jeffrey R. Yost, "An Interview with Thomas Van Vleck, OH 408," Charles Babbage Institute, 2012, 30; <www.multicians.org/history.html>.

15 : <www.multicians.org/history.html>; Thomas Haigh, "Multicians.org and the History of Operating Systems," Iterations (The Charles Babbage Institute, 2002), 3; Jeffrey R. Yost, "An Interview with Thomas Van Vleck, OH 408," Charles Babbage Institute, 2012, 52.

16 : Willis H. Ware, "Security Controls for Computer Systems: Report of Defense Science Board Task Force on Computer Security - RAND Report R-609-1" (Santa Monica, CA: RAND, 1979), at <www.rand.org/pubs/reports/R609-1/index2.html>; Donald MacKenzie, Mechanizing Proof: Computing, Risk, and Trust (Cambridge, MA: MIT Press, 2001), 159; James P. Anderson, "Computer Security Technology Planning Study, Vol. I" (L. G. Hanscom Field, MA, 1972), 4. (Login required)

17 : James P. Anderson, "Computer Security Technology Planning Study, Vol. I" (L. G. Hanscom Field, MA, 1972), 4 (Login required); <www.rand.org/pubs/reports/R609-1/index2.html>.

18 : James P. Anderson, "Computer Security Technology Planning Study, Vol. I" (L. G. Hanscom Field, MA, 1972), 4. (Login required)

19 : Donald MacKenzie, Mechanizing Proof: Computing, Risk, and Trust (Cambridge, MA: MIT Press, 2001), 160-161; Jeffrey R. Yost, "An Interview with Roger R. Schell, OH 405," Charles Babbage Institute, 2012, 57-58, 59, 61-62.

20 : Edward Hunt, "US Government Computer Penetration Programs and the Implications for Cyberwar," IEEE Annals 34, no. 3 (July-Sept. 2012), 10. (Login required)

21 : James P. Anderson, "Computer Security Technology Planning Study, Vol. II" (L. G. Hanscom Field, MA, 1972), 14-15, 24-25 (Login required); Jeffrey Yost, "An Interview with Steven B. Lipner, OH 406," Charles Babbage Institute, 2012, 30-31.

22 : Paul A. Karger and Roger R. Schell, "Multics Security Evaluation: Vulnerability Analysis" (Hanscom AFB, MA, 1974), 5 (Login required); James P. Anderson, "Computer Security Technology Planning Study, Vol. II" (L. G. Hanscom Field, MA, 1972), 10-12, 19-20. (Login required)

23 : <www.multicians.org/security.html>; Jeffrey Yost, "An Interview with Steven B. Lipner, OH 406," Charles Babbage Institute, 2012, 31-32; Thomas Whiteside, "Dead Souls in the Computer," New Yorker (August 29, 1977), 58-62; Thomas Whiteside, Computer Capers: Tales of Electronic Thievery, Embezzlement, and Fraud (New York: Thomas Y. Crowell Company, 1978), 117-121.

24 : Paul A. Karger and Roger R. Schell, "Multics Security Evaluation: Vulnerability Analysis" (Hanscom AFB, MA, 1974), 59. (Login required)

25 : Paul A. Karger and Roger R. Schell, "Multics Security Evaluation: Vulnerability Analysis" (Hanscom AFB, MA, 1974), 60. (Login required)

26 : J. Whitmore, et al., "Design for Multics Security Enhancements" (Cambridge, MA: Honeywell Information Systems, Inc., 1973), 59-60. (Login required)

27 : Edward Hunt, "US Government Computer Penetration Programs and the Implications for Cyberwar," IEEE Annals 34, no. 3 (July-Sept. 2012), 12. (Login required)

28 : Jeffrey Yost, "An Interview with Steven B. Lipner, OH 406," Charles Babbage Institute, 2012, 32-33. (Login required)

29 : J. Whitmore, et al., "Design for Multics Security Enhancements" (Cambridge, MA: Honeywell Information Systems, Inc., 1973), Report Documentation Page. (Login required)

30 : Jeffrey Yost, "An Interview with Steven B. Lipner, OH 406," Charles Babbage Institute, 2012, 32. (Login required)

31 : Roger Schell, "Computer Security: The Achilles' Heel of the Electronic Air Force?," Air University Review 30, no. 2 (January-February 1979), 171, 182-183 (Login required); Edward Hunt, "US Government Computer Penetration Programs and the Implications for Cyberwar," IEEE Annals 34, no. 3 (July-Sept. 2012), 12. (Login required)

32 : David Elliot Bell, "Looking Back at the Bell-LaPadula Model," 342, in Proceedings of the 21st Annual Computer Security Applications Conference (Washington, DC: IEEE Computer Society, 2005), 337-351 (Login required); Jeffrey R. Yost, "An Interview with David Bell, OH 411," Charles Babbage Institute, 2012, 32-35; Paul A. Karger and Roger R. Schell, "Thirty Years Later: Lessons from the Multics Security Evaluation," Computer Security Applications Conference (ACSAC, 2002), 1, 4. (Login required)

33 : Jeffrey Yost, "An Interview with Steven B. Lipner, OH 406," Charles Babbage Institute, 2012, 36; Fred J. Shafer, "Multilevel Computer Security Requirements of the World Wide Military Command and Control System (WWMCCS), LCD-78-106," Logistics and Communications Division (Department of Defense, April 5, 1978), 4-5; <docstore.mik.ua/orelly/other/puis3rd/0596003234_puis3-chp-2-sect-1.html>; M. D. Schroeder, et al., "Final Report of the Multics Kernel Design Project" (Cambridge, MA: MIT Laboratory for Computer Science, June 30, 1977), 10-12. (Login required)

34, 37 : Paul A. Karger and Roger R. Schell, "Thirty Years Later: Lessons from the Multics Security Evaluation," Computer Security Applications Conference (ACSAC, 2002), 5. (Login required)

35 : Roger Schell, "Computer Security: The Achilles' Heel of the Electronic Air Force?," Air University Review 30, no. 2 (January-February 1979), 190-191 (Login required); Fred J. Shafer, "Multilevel Computer Security Requirements of the World Wide Military Command and Control System (WWMCCS), LCD-78-106," Logistics and Communications Division (Department of Defense, April 5, 1978), 8-9. (Login required)

36 : <www.multicians.org/security.html>; Department of Defense, "Department of Defense Trusted Computer System Evaluation Criteria, DoD 5200.28-STD" (December 1985), 6.

38 : <www.multicians.org/history.html>; <web.mit.edu/multics-history>.

39, 40 : Peter H. Salus, A Quarter Century of UNIX (New York: Addison-Wesley, 1994), 25-26; <www.multicians.org/thvv/7094.html>.

