Operating System Concepts, Eighth Edition
Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne

To my children, Lemar, Sivan, and Aaron, and my Nicolette.
   Avi Silberschatz

To my wife, Carla, and my children, Gwen, Owen, and Maddie.
   Peter Baer Galvin

To my wife, Pat, and our sons, Tom and Jay.
   Greg Gagne

Abraham Silberschatz is the Sidney J. Weinberg Professor & Chair of Computer Science at Yale University. Prior to joining Yale, he was the Vice President of the Information Sciences Research Center at Bell Laboratories. Prior to that, he held a chaired professorship in the Department of Computer Sciences at the University of Texas at Austin.

Professor Silberschatz is an ACM Fellow and an IEEE Fellow. He received the 2002 IEEE Taylor L. Booth Education Award, the 1998 ACM Karl V. Karlstrom Outstanding Educator Award, and the 1997 ACM SIGMOD Contribution Award. In recognition of his outstanding level of innovation and technical excellence, he was awarded the Bell Laboratories President's Award for three different projects: the QTM Project (1998), the DataBlitz Project (1999), and the NetInventory Project (2004).

Professor Silberschatz's writings have appeared in numerous ACM and IEEE publications and other professional conferences and journals. He is a coauthor of the textbook Database System Concepts. He has also written Op-Ed articles for the New York Times, the Boston Globe, and the Hartford Courant, among others.

Peter Baer Galvin is the chief technologist for Corporate Technologies (www.cptech.com), a computer facility reseller and integrator. Before that, Mr. Galvin was the systems manager for Brown University's Computer Science Department. He is also Sun columnist for ;login: magazine. Mr. Galvin has written articles for Byte and other magazines, and has written columns for SunWorld and SysAdmin magazines. As a consultant and trainer, he has given talks and taught tutorials on security and system administration worldwide.

Greg Gagne is chair of the Computer Science department at Westminster College in Salt Lake City, where he has been teaching since 1990. In addition to teaching operating systems, he also teaches computer networks, distributed systems, and software engineering. He also provides workshops to computer science educators and industry professionals.

Preface

Operating systems are an essential part of any computer system. Similarly, a course on operating systems is an essential part of any computer-science education. This field is undergoing rapid change, as computers are now prevalent in virtually every application, from games for children through the most sophisticated planning tools for governments and multinational firms. Yet the fundamental concepts remain fairly clear, and it is on these that we base this book.

We wrote this book as a text for an introductory course in operating systems at the junior or senior undergraduate level or at the first-year graduate level. We hope that practitioners will also find it useful. It provides a clear description of the concepts that underlie operating systems. As prerequisites, we assume that the reader is familiar with basic data structures, computer organization, and a high-level language, such as C or Java. The hardware topics required for an understanding of operating systems are included in Chapter 1. For code examples, we use predominantly C, with some Java, but the reader can still understand the algorithms without a thorough knowledge of these languages.

Concepts are presented using intuitive descriptions. Important theoretical results are covered, but formal proofs are omitted. The bibliographical notes at the end of each chapter contain pointers to research papers in which results were first presented and proved, as well as references to material for further reading.
In place of proofs, figures and examples are used to suggest why we should expect the result in question to be true.

The fundamental concepts and algorithms covered in the book are often based on those used in existing commercial operating systems. Our aim is to present these concepts and algorithms in a general setting that is not tied to one particular operating system. We present a large number of examples that pertain to the most popular and the most innovative operating systems, including Sun Microsystems' Solaris; Linux; Microsoft Windows Vista, Windows 2000, and Windows XP; and Apple Mac OS X. When we refer to Windows XP as an example operating system, we are implying Windows Vista, Windows XP, and Windows 2000. If a feature exists in a specific release, we state this explicitly.

The organization of this text reflects our many years of teaching courses on operating systems. Consideration was also given to the feedback provided by the reviewers of the text, as well as comments submitted by readers of earlier editions. In addition, the content of the text corresponds to the suggestions from Computing Curricula 2005 for teaching operating systems, published by the Joint Task Force of the IEEE Computing Society and the Association for Computing Machinery (ACM).

On the supporting Web site for this text, we provide several sample syllabi that suggest various approaches for using the text in both introductory and advanced courses. As a general rule, we encourage readers to progress sequentially through the chapters, as this strategy provides the most thorough study of operating systems. However, by using the sample syllabi, a reader can select a different ordering of chapters (or subsections of chapters).

On-line support for the text is provided by WileyPLUS. On this site, students can find sample exercises and programming problems, and instructors can assign and grade problems. In addition, in WileyPLUS, students can access new operating-system simulators, which are used to work through exercises and hands-on lab activities. References to the simulators and associated activities appear at the ends of several chapters in the text.

The text is organized in nine major parts:

Overview. Chapters 1 and 2 explain what operating systems are, what they do, and how they are designed and constructed. These chapters discuss what the common features of an operating system are, what an operating system does for the user, and what it does for the computer-system operator. The presentation is motivational and explanatory in nature. We have avoided a discussion of how things are done internally in these chapters. Therefore, they are suitable for individual readers or for students in lower-level classes who want to learn what an operating system is without getting into the details of the internal algorithms.

Process management and process coordination. Chapters 3 through 7 describe the process concept and concurrency as the heart of modern operating systems. A process is the unit of work in a system. Such a system consists of a collection of concurrently executing processes, some of which are operating-system processes (those that execute system code) and the rest of which are user processes (those that execute user code). These chapters cover methods for process scheduling, interprocess communication, process synchronization, and deadlock handling. Also included is a discussion of threads, as well as an examination of issues related to multicore systems.

Memory management.
Chapters 8 and 9 deal with the management of main memory during the execution of a process. To improve both the utilization of the CPU and the speed of its response to its users, the computer must keep several processes in memory. There are many different memory-management schemes. These schemes reflect various approaches to memory management, and the effectiveness of a particular algorithm depends on the situation.

Storage management. Chapters 10 through 13 describe how the file system, mass storage, and I/O are handled in a modern computer system. The file system provides the mechanism for on-line storage of and access to both data and programs. We describe the classic internal algorithms and structures of storage management and provide a firm practical understanding of the algorithms used, including their properties, advantages, and disadvantages. Our discussion of storage also includes matters related to secondary and tertiary storage. Since the I/O devices that attach to a computer vary widely, the operating system needs to provide a wide range of functionality to applications to allow them to control all aspects of these devices. We discuss system I/O in depth, including I/O system design, interfaces, and internal system structures and functions. In many ways, I/O devices are the slowest major components of the computer. Because they represent a performance bottleneck, we also examine performance issues associated with I/O devices.

Protection and security. Chapters 14 and 15 discuss the mechanisms necessary for the protection and security of computer systems. The processes in an operating system must be protected from one another's activities, and to provide such protection, we must ensure that only processes that have gained proper authorization from the operating system can operate on the files, memory, CPU, and other resources of the system. Protection is a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system. This mechanism must provide a means of specifying the controls to be imposed, as well as a means of enforcement. Security protects the integrity of the information stored in the system (both data and code), as well as the physical resources of the system, from unauthorized access, malicious destruction or alteration, and accidental introduction of inconsistency.

Distributed systems. Chapters 16 through 18 deal with a collection of processors that do not share memory or a clock: a distributed system. By providing the user with access to the various resources that it maintains, a distributed system can improve computation speed and data availability and reliability. Such a system also provides the user with a distributed file system, which is a file-service system whose users, servers, and storage devices are dispersed among the sites of a distributed system. A distributed system must provide various mechanisms for process synchronization and communication, as well as for dealing with deadlock problems and a variety of failures that are not encountered in a centralized system.

Special-purpose systems. Chapters 19 and 20 deal with systems used for specific purposes, including real-time systems and multimedia systems. These systems have specific requirements that differ from those of the general-purpose systems that are the focus of the remainder of the text. Real-time systems may require not only that computed results be "correct" but also that the results be produced within a specified deadline period. Multimedia systems require quality-of-service guarantees ensuring that the multimedia data are delivered to clients within a specific time frame.
Case studies. Chapters 21 through 23 in the book, and Appendices A through C (which are available on www.wiley.com/go/global/silberschatz and in WileyPLUS), integrate the concepts described in the earlier chapters by describing real operating systems. These systems include Linux, Windows XP, FreeBSD, Mach, and Windows 2000. We chose Linux and FreeBSD because UNIX, at one time, was almost small enough to understand yet was not a "toy" operating system. Most of its internal algorithms were selected for simplicity, rather than for speed or sophistication. Both Linux and FreeBSD are readily available to computer-science departments, so many students have access to these systems. We chose Windows XP and Windows 2000 because they provide an opportunity for us to study a modern operating system with a design and implementation drastically different from those of UNIX. Chapter 23 briefly describes a few other influential operating systems.

This book uses examples of many real-world operating systems to illustrate fundamental operating-system concepts. However, particular attention is paid to the Microsoft family of operating systems (including Windows Vista, Windows 2000, and Windows XP) and various versions of UNIX (including Solaris, BSD, and Mac OS X). We also provide a significant amount of coverage of the Linux operating system, reflecting the most recent version of the kernel, Version 2.6, at the time this book was written.

The text also provides several example programs written in C and Java. These programs are intended to run in the following programming environments:

Windows systems. The primary programming environment for Windows systems is the Win32 API (application programming interface), which provides a comprehensive set of functions for managing processes, threads, memory, and peripheral devices. We provide several C programs illustrating the use of the Win32 API. Example programs were tested on systems running Windows Vista, Windows 2000, and Windows XP.

POSIX. POSIX (which stands for Portable Operating System Interface) represents a set of standards implemented primarily for UNIX-based operating systems. Although Windows Vista, Windows XP, and Windows 2000 systems can also run certain POSIX programs, our coverage of POSIX focuses primarily on UNIX and Linux systems. POSIX-compliant systems must implement the POSIX core standard (POSIX.1): Linux, Solaris, and Mac OS X are examples of POSIX-compliant systems. POSIX also defines several extensions to the standards, including real-time extensions (POSIX.1b) and an extension for a threads library (POSIX.1c, better known as Pthreads). We provide several programming examples written in C illustrating the POSIX base API, as well as Pthreads and the extensions for real-time programming. These example programs were tested on Debian Linux 2.4 and 2.6 systems, Mac OS X 10.5, and Solaris 10 using the gcc 3.3 and 4.0 compilers.

Java. Java is a widely used programming language with a rich API and built-in language support for thread creation and management. Java programs run on any operating system supporting a Java virtual machine (or JVM).
We illustrate various operating system and networking concepts with several Java programs tested using the Java 1.5 JVM.

We have chosen these three programming environments because it is our opinion that they best represent the two most popular models of operating systems, Windows and UNIX/Linux, along with the widely used Java environment. Most programming examples are written in C, and we expect readers to be comfortable with this language; readers familiar with both the C and Java languages should easily understand most programs provided in this text.

In some instances, such as thread creation, we illustrate a specific concept using all three programming environments, allowing the reader to contrast the three different libraries as they address the same task. In other situations, we may use just one of the APIs to demonstrate a concept. For example, we illustrate shared memory using just the POSIX API; socket programming in TCP/IP is highlighted using the Java API.

As we wrote the Eighth Edition of Operating System Concepts, we were guided by the many comments and suggestions we received from readers of our previous editions, as well as by our own observations about the rapidly changing fields of operating systems and networking. We have rewritten material in most of the chapters by bringing older material up to date and removing material that was no longer of interest or relevance.

We have made substantive revisions and organizational changes in many of the chapters. Most importantly, we have added coverage of open-source operating systems in Chapter 1. We have also added more practice exercises for students and included solutions in WileyPLUS, which also includes new simulators to provide demonstrations of operating-system operation. Below, we provide a brief outline of the major changes to the various chapters:

Chapter 1, Introduction, has been expanded to include multicore CPUs, clustered computers, and open-source operating systems.
Chapter 2, System Structures, provides significantly updated coverage of virtual machines, as well as multicore CPUs, the GRUB boot loader, and operating-system debugging.
Chapter 3, Process Concept, provides new coverage of pipes as a form of interprocess communication.
Chapter 4, Multithreaded Programming, adds new coverage of programming for multicore systems.
Chapter 5, Process Scheduling, adds coverage of virtual machine scheduling and multithreaded, multicore architectures.
Chapter 6, Synchronization, adds a discussion of mutual exclusion locks, priority inversion, and transactional memory.
Chapter 8, Memory-Management Strategies, includes a discussion of NUMA.
Chapter 9, Virtual-Memory Management, updates the Solaris example to include Solaris 10 memory management.
Chapter 10, File System, is updated with current technologies and capacities.
Chapter 11, Implementing File Systems, includes a full description of Sun's ZFS file system and expands the coverage of volumes and directories.
Chapter 12, Secondary-Storage Structure, adds coverage of iSCSI, volumes, and ZFS pools.
Chapter 13, I/O Systems, adds coverage of PCI-X, PCI Express, and HyperTransport.
Chapter 16, Distributed Operating Systems, adds coverage of 802.11 wireless networks.
Chapter 21, The Linux System, has been updated to cover the latest version of the Linux kernel.
Chapter 23, Influential Operating Systems, increases coverage of very early computers as well as TOPS-20, CP/M, MS-DOS, Windows, and the original Mac OS.

To emphasize the concepts presented in the text, we have added several programming problems and projects that use the POSIX and Win32 APIs, as well as Java. We have added more than 15 new programming problems, which emphasize processes, threads, shared memory, process synchronization, and networking. In addition, we have added or modified several programming projects that are more involved than standard programming exercises. These projects include adding a system call to the Linux kernel, using pipes on both UNIX and Windows systems, using UNIX message queues, creating multithreaded applications, and solving the producer-consumer problem using shared memory.

The Eighth Edition also incorporates a set of operating-system simulators designed by Steven Robbins of the University of Texas at San Antonio. The simulators are intended to model the behavior of an operating system as it performs various tasks, such as CPU and disk-head scheduling, process creation and interprocess communication, starvation, and address translation. These simulators are written in Java and will run on any computer system with Java 1.4. Students can download the simulators from WileyPLUS and observe the behavior of several operating-system concepts in various scenarios. In addition, each simulator includes several exercises that ask students to set certain parameters of the simulator, observe how the system behaves, and then explain this behavior. These exercises can be assigned through WileyPLUS. The WileyPLUS course also includes algorithmic problems and tutorials developed by Scott M. Pike of Texas A&M University.

The following teaching supplements are available in WileyPLUS and on www.wiley.com/go/global/silberschatz: a set of slides to accompany the book, model course syllabi, all C and Java source code, up-to-date errata, three case study appendices, and the Distributed Communication appendix. The WileyPLUS course also contains the simulators and associated exercises, additional practice exercises (with solutions) not found in the text, and a test bank of additional problems. Students are encouraged to solve the practice exercises on their own and then use the provided solutions to check their own answers.

To obtain restricted supplements, such as the solution guide to the exercises in the text, contact your local John Wiley & Sons sales representative. Note that these supplements are available only to faculty who use this text.

We use the mailman system for communication among the users of Operating System Concepts.
If you wish to use this facility, please visit the following URL and follow the instructions there to subscribe:

http://mailman.cs.yale.edu/mailman/listinfo/os-book

The mailman mailing-list system provides many benefits, such as an archive of postings, as well as several subscription options, including digest and Web only. To send messages to the list, send e-mail to:

[email protected]

Depending on the message, we will either reply to you personally or forward the message to everyone on the mailing list. The list is moderated, so you will receive no inappropriate mail.

Students who are using this book as a text for class should not use the list to ask for answers to the exercises. They will not be provided.

We have attempted to clean up every error in this new edition, but, as happens with operating systems, a few obscure bugs may remain. We would appreciate hearing from you about any textual errors or omissions that you identify. If you would like to suggest improvements or to contribute exercises, we would also be glad to hear from you. Please send correspondence to [email protected].

This book is derived from the previous editions, the first three of which were coauthored by James Peterson. Others who helped us with previous editions include Hamid Arabnia, Rida Bazzi, Randy Bentson, David Black, Joseph Boykin, Jeff Brumfield, Gael Buckley, Roy Campbell, P. C. Capon, John Carpenter, Gil Carrick, Thomas Casavant, Bart Childs, Ajoy Kumar Datta, Joe Deck, Sudarshan K. Dhall, Thomas Doeppner, Caleb Drake, M. Racsit Eskicioglu, Hans Flack, Robert Fowler, G. Scott Graham, Richard Guy, Max Hailperin, Rebecca Hartman, Wayne Hathaway, Christopher Haynes, Don Heller, Bruce Hillyer, Mark Holliday, Dean Hougen, Michael Huangs, Ahmed Kamel, Marty Kewstel, Richard Kieburtz, Carol Kroll, Marty Kwestel, Thomas LeBlanc, John Leggett, Jerrold Leichter, Ted Leung, Gary Lippman, Carolyn Miller, Michael Molloy, Euripides Montagne, Yoichi Muraoka, Jim M. Ng, Banu Ozden, Ed Posnak, Boris Putanec, Charles Qualline, John Quarterman, Mike Reiter, Gustavo Rodriguez-Rivera, Carolyn J. C. Schauble, Thomas P. Skinner, Yannis Smaragdakis, Jesse St. Laurent, John Stankovic, Adam Stauffer, Steven Stepanek, John Sterling, Hal Stern, Louis Stevens, Pete Thomas, David Umbaugh, Steve Vinoski, Tommy Wagner, Larry L. Wear, John Werth, James M. Westall, J. S. Weston, and Yang Xiang.

Parts of Chapter 12 were derived from a paper by Hillyer and Silberschatz [1996]. Parts of Chapter 17 were derived from a paper by Levy and Silberschatz [1990]. Chapter 21 was derived from an unpublished manuscript by Stephen Tweedie. Chapter 22 was derived from an unpublished manuscript by Dave Probert, Cliff Martin, and Avi Silberschatz. Appendix C was derived from an unpublished manuscript by Cliff Martin. Cliff Martin also helped with updating the UNIX appendix to cover FreeBSD. Some of the exercises and accompanying solutions were supplied by Arvind Krishnamurthy.

Mike Shapiro, Bryan Cantrill, and Jim Mauro answered several Solaris-related questions. Bryan Cantrill from Sun Microsystems helped with the ZFS coverage. Steve Robbins of the University of Texas at San Antonio designed the set of simulators that we incorporate in WileyPLUS. Reece Newman of Westminster College initially explored this set of simulators and their appropriateness for this text. Josh Dees and Rob Reynolds contributed coverage of Microsoft's .NET.
The project for POSIX message queues was contributed by John Trona of Saint Michael's College in Colchester, Vermont.

Marilyn Turnamian helped generate figures and presentation slides. Mark Wogahn has made sure that the software to produce the book (e.g., LaTeX macros, fonts) works properly. Our Associate Publisher, Dan Sayre, provided expert guidance as we prepared this edition. He was assisted by Carolyn Weisman, who managed many details of this project smoothly. The Senior Production Editor, Ken Santor, was instrumental in handling all the production details. Lauren Sapira and Cindy Johnson have been very helpful with getting material ready and available for WileyPLUS. Beverly Peavler copy-edited the manuscript. The freelance proofreader was Katrina Avery; the freelance indexer was WordCo, Inc.

Abraham Silberschatz, New Haven, CT, 2008
Peter Baer Galvin, Burlington, MA, 2008
Greg Gagne, Salt Lake City, UT, 2008

Contents

PART ONE: OVERVIEW
Chapter 1 Introduction: 1.1 What Operating Systems Do; 1.2 Computer-System Organization; 1.3 Computer-System Architecture; 1.4 Operating-System Structure; 1.5 Operating-System Operations; 1.6 Process Management; 1.7 Memory Management; 1.8 Storage Management; 1.9 Protection and Security; 1.10 Distributed Systems; 1.11 Special-Purpose Systems; 1.12 Computing Environments; 1.13 Open-Source Operating Systems; 1.14 Summary; Exercises; Bibliographical Notes
Chapter 2 System Structures: 2.1 Operating-System Services; 2.2 User Operating-System Interface; 2.3 System Calls; 2.4 Types of System Calls; 2.5 System Programs; 2.6 Operating-System Design and Implementation; 2.7 Operating-System Structure; 2.8 Virtual Machines; 2.9 Operating-System Debugging; 2.10 Operating-System Generation; 2.11 System Boot; 2.12 Summary; Exercises; Bibliographical Notes

PART TWO: PROCESS MANAGEMENT
Chapter 3 Process Concept: 3.1 Process Concept; 3.2 Process Scheduling; 3.3 Operations on Processes; 3.4 Interprocess Communication; 3.5 Examples of IPC Systems; 3.6 Communication in Client-Server Systems; 3.7 Summary; Exercises; Bibliographical Notes
Chapter 4 Multithreaded Programming: 4.1 Overview; 4.2 Multithreading Models; 4.3 Thread Libraries; 4.4 Threading Issues; 4.5 Operating-System Examples; 4.6 Summary; Exercises; Bibliographical Notes
Chapter 5 Process Scheduling: 5.1 Basic Concepts; 5.2 Scheduling Criteria; 5.3 Scheduling Algorithms; 5.4 Thread Scheduling; 5.5 Multiple-Processor Scheduling; 5.6 Operating System Examples; 5.7 Algorithm Evaluation; 5.8 Summary; Exercises; Bibliographical Notes

PART THREE: PROCESS COORDINATION
Chapter 6 Synchronization: 6.1 Background; 6.2 The Critical-Section Problem; 6.3 Peterson's Solution; 6.4 Synchronization Hardware; 6.5 Semaphores; 6.6 Classic Problems of Synchronization; 6.7 Monitors; 6.8 Synchronization Examples; 6.9 Atomic Transactions; 6.10 Summary; Exercises; Bibliographical Notes
Chapter 7 Deadlocks: 7.1 System Model; 7.2 Deadlock Characterization; 7.3 Methods for Handling Deadlocks; 7.4 Deadlock Prevention; 7.5 Deadlock Avoidance; 7.6 Deadlock Detection; 7.7 Recovery from Deadlock; 7.8 Summary; Exercises; Bibliographical Notes

PART FOUR: MEMORY MANAGEMENT
Chapter 8 Memory-Management Strategies: 8.1 Background; 8.2 Swapping; 8.3 Contiguous Memory Allocation; 8.4 Paging; 8.5 Structure of the Page Table; 8.6 Segmentation; 8.7 Example: The Intel Pentium; 8.8 Summary; Exercises; Bibliographical Notes
Chapter 9 Virtual-Memory Management: 9.1 Background; 9.2 Demand Paging; 9.3 Copy-on-Write; 9.4 Page Replacement; 9.5 Allocation of Frames; 9.6 Thrashing; 9.7 Memory-Mapped Files; 9.8 Allocating Kernel Memory; 9.9 Other Considerations; 9.10 Operating-System Examples; 9.11 Summary; Exercises; Bibliographical Notes

PART FIVE: STORAGE MANAGEMENT
Chapter 10 File System: 10.1 File Concept; 10.2 Access Methods; 10.3 Directory and Disk Structure; 10.4 File-System Mounting; 10.5 File Sharing; 10.6 Protection; 10.7 Summary; Exercises; Bibliographical Notes
Chapter 11 Implementing File Systems: 11.1 File-System Structure; 11.2 File-System Implementation; 11.3 Directory Implementation; 11.4 Allocation Methods; 11.5 Free-Space Management; 11.6 Efficiency and Performance; 11.7 Recovery; 11.8 NFS; 11.9 Example: The WAFL File System; 11.10 Summary; Exercises; Bibliographical Notes
Chapter 12 Secondary-Storage Structure: 12.1 Overview of Mass-Storage Structure; 12.2 Disk Structure; 12.3 Disk Attachment; 12.4 Disk Scheduling; 12.5 Disk Management; 12.6 Swap-Space Management; 12.7 RAID Structure; 12.8 Stable-Storage Implementation; 12.9 Tertiary-Storage Structure; 12.10 Summary; Exercises; Bibliographical Notes
Chapter 13 I/O Systems: 13.1 Overview; 13.2 I/O Hardware; 13.3 Application I/O Interface; 13.4 Kernel I/O Subsystem; 13.5 Transforming I/O Requests to Hardware Operations; 13.6 STREAMS; 13.7 Performance; 13.8 Summary; Exercises; Bibliographical Notes

PART SIX: PROTECTION AND SECURITY
Chapter 14 System Protection: 14.1 Goals of Protection; 14.2 Principles of Protection; 14.3 Domain of Protection; 14.4 Access Matrix; 14.5 Implementation of Access Matrix; 14.6 Access Control; 14.7 Revocation of Access Rights; 14.8 Capability-Based Systems; 14.9 Language-Based Protection; 14.10 Summary; Exercises; Bibliographical Notes
Chapter 15 System Security: 15.1 The Security Problem; 15.2 Program Threats; 15.3 System and Network Threats; 15.4 Cryptography as a Security Tool; 15.5 User Authentication; 15.6 Implementing Security Defenses; 15.7 Firewalling to Protect Systems and Networks; 15.8 Computer-Security Classifications; 15.9 An Example: Windows XP; 15.10 Summary; Exercises; Bibliographical Notes

PART SEVEN: DISTRIBUTED SYSTEMS
Chapter 16 Distributed Operating Systems: 16.1 Motivation; 16.2 Types of Network-based Operating Systems; 16.3 Network Structure; 16.4 Network Topology; 16.5 Communication Structure; 16.6 Communication Protocols; 16.7 Robustness; 16.8 Design Issues; 16.9 An Example: Networking; 16.10 Summary; Exercises; Bibliographical Notes
Chapter 17 Distributed File Systems: 17.1 Background; 17.2 Naming and Transparency; 17.3 Remote File Access; 17.4 Stateful versus Stateless Service; 17.5 File Replication; 17.6 An Example: AFS; 17.7 Summary; Exercises; Bibliographical Notes
Chapter 18 Distributed Synchronization: 18.1 Event Ordering; 18.2 Mutual Exclusion; 18.3 Atomicity; 18.4 Concurrency Control; 18.5 Deadlock Handling; 18.6 Election Algorithms; 18.7 Reaching Agreement; 18.8 Summary; Exercises; Bibliographical Notes
PART EIGHT: SPECIAL-PURPOSE SYSTEMS
Chapter 19 Real-Time Systems: 19.1 Overview; 19.2 System Characteristics; 19.3 Features of Real-Time Kernels; 19.4 Implementing Real-Time Operating Systems; 19.5 Real-Time CPU Scheduling; 19.6 An Example: VxWorks 5.x; 19.7 Summary; Exercises; Bibliographical Notes
Chapter 20 Multimedia Systems: 20.1 What Is Multimedia?; 20.2 Compression; 20.3 Requirements of Multimedia Kernels; 20.4 CPU Scheduling; 20.5 Disk Scheduling; 20.6 Network Management; 20.7 An Example: CineBlitz; 20.8 Summary; Exercises; Bibliographical Notes

PART NINE: CASE STUDIES
Chapter 21 The Linux System: 21.1 Linux History; 21.2 Design Principles; 21.3 Kernel Modules; 21.4 Process Management; 21.5 Scheduling; 21.6 Memory Management; 21.7 File Systems; 21.8 Input and Output; 21.9 Interprocess Communication; 21.10 Network Structure; 21.11 Security; 21.12 Summary; Exercises; Bibliographical Notes
Chapter 22 Windows XP: 22.1 History; 22.2 Design Principles; 22.3 System Components; 22.4 Environmental Subsystems; 22.5 File System; 22.6 Networking; 22.7 Programmer Interface; 22.8 Summary; Exercises; Bibliographical Notes
Chapter 23 Influential Operating Systems: 23.1 Feature Migration; 23.2 Early Systems; 23.3 Atlas; 23.4 XDS-940; 23.5 THE; 23.6 RC 4000; 23.7 CTSS; 23.8 MULTICS; 23.9 IBM OS/360; 23.10 TOPS-20; 23.11 CP/M and MS/DOS; 23.12 Macintosh Operating System and Windows; 23.13 Mach; 23.14 Other Systems; Exercises

Appendix A BSD UNIX: A.1 UNIX History; A.2 Design Principles; A.3 Programmer Interface; A.4 User Interface; A.5 Process Management; A.6 Memory Management; A.7 File System; A.8 I/O System; A.9 Interprocess Communication; A.10 Summary; Exercises; Bibliographical Notes
Appendix B The Mach System: B.1 History of the Mach System; B.2 Design Principles; B.3 System Components; B.4 Process Management; B.5 Interprocess Communication; B.6 Memory Management; B.7 Programmer Interface; B.8 Summary; Exercises; Bibliographical Notes; Credits
Appendix C Windows 2000: C.1 History; C.2 Design Principles; C.3 System Components; C.4 Environmental Subsystems; C.5 File System; C.6 Networking; C.7 Programmer Interface; C.8 Summary; Exercises; Bibliographical Notes

Bibliography; Credits; Index

Part One: Overview

An operating system acts as an intermediary between the user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner.

An operating system is software that manages the computer hardware. The hardware must provide appropriate mechanisms to ensure the correct operation of the computer system and to prevent user programs from interfering with the proper operation of the system.

Internally, operating systems vary greatly in their makeup, since they are organized along many different lines. The design of a new operating system is a major task. It is important that the goals of the system be well defined before the design begins. These goals form the basis for choices among various algorithms and strategies.

Because an operating system is large and complex, it must be created piece by piece. Each of these pieces should be a well-delineated portion of the system, with carefully defined inputs, outputs, and functions.

Chapter 1 Introduction

An operating system is a program that manages the computer hardware.
It also provides a basis for application programs and acts as an intermediary between the computer user and the computer hardware. An amazing aspect of operating systems is how varied they are in accomplishing these tasks. Mainframe operating systems are designed primarily to optimize utilization of hardware. Personal computer (PC) operating systems support complex games, business applications, and everything in between. Operating systems for handheld computers are designed to provide an environment in which a user can easily interface with the computer to execute programs. Thus, some operating systems are designed to be convenient, others to be efficient, and others some combination of the two.

Before we can explore the details of computer system operation, we need to know something about system structure. We begin by discussing the basic functions of system startup, I/O, and storage. We also describe the basic computer architecture that makes it possible to write a functional operating system.

Because an operating system is large and complex, it must be created piece by piece. Each of these pieces should be a well-delineated portion of the system, with carefully defined inputs, outputs, and functions. In this chapter, we provide a general overview of the major components of an operating system.

CHAPTER OBJECTIVES
To provide a grand tour of the major components of operating systems.
To describe the basic organization of computer systems.

1.1 What Operating Systems Do

We begin our discussion by looking at the operating system's role in the overall computer system. A computer system can be divided roughly into four components: the hardware, the operating system, the application programs, and the users (Figure 1.1).

(Figure 1.1, Abstract view of the components of a computer system, shows users above application programs such as the compiler, assembler, text editor, and database system, which in turn run above the operating system and the hardware.)

The hardware, that is, the central processing unit (CPU), the memory, and the input/output (I/O) devices, provides the basic computing resources for the system.

1.5.2 Timer

A timer can be set to interrupt the computer after a specified period. The period may be fixed (for example, 1/60 second) or variable (for example, from 1 millisecond to 1 second). A variable timer is generally implemented by a fixed-rate clock and a counter. The operating system sets the counter. Every time the clock ticks, the counter is decremented. When the counter reaches 0, an interrupt occurs. For instance, a 10-bit counter with a 1-millisecond clock allows interrupts at intervals from 1 millisecond to 1,024 milliseconds, in steps of 1 millisecond.

Before turning over control to the user, the operating system ensures that the timer is set to interrupt. If the timer interrupts, control transfers automatically to the operating system, which may treat the interrupt as a fatal error or may give the program more time. Clearly, instructions that modify the content of the timer are privileged.

Thus, we can use the timer to prevent a user program from running too long. A simple technique is to initialize a counter with the amount of time that a program is allowed to run. A program with a 7-minute time limit, for example, would have its counter initialized to 420. Every second, the timer interrupts and the counter is decremented by 1. As long as the counter is positive, control is returned to the user program. When the counter becomes negative, the operating system terminates the program for exceeding the assigned time limit.
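The same counter-and-interrupt idea is visible to user programs through POSIX interval timers. The C sketch below is our illustration, not an example from the text: it arms a one-shot timer with the standard setitimer() call and stops a compute loop when the SIGALRM interrupt arrives. The 3-second limit is an arbitrary choice.

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t expired = 0;

/* Runs when the timer fires: the analogue of the counter reaching zero. */
static void on_timer(int sig)
{
    (void)sig;
    expired = 1;      /* just set a flag; the main loop checks it */
}

int main(void)
{
    struct itimerval limit = {0};
    limit.it_value.tv_sec = 3;             /* one-shot: fire after 3 seconds */

    signal(SIGALRM, on_timer);
    setitimer(ITIMER_REAL, &limit, NULL);  /* arm the interval timer */

    unsigned long iterations = 0;
    while (!expired)                       /* the "user program" runs until */
        iterations++;                      /* the timer interrupt arrives   */

    printf("stopped after %lu iterations\n", iterations);
    return 0;
}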
A system task, such as sending outputto a printer, can also be a process (or at least part of one). For now, you canconsider a process to be a job or a time-shared program, but later you will learn 35. 24 Chapter 11.7that the concept is more general. As we shall see in Chapter 3, it is possibleto provide system calls that allow processes to create subprocesses to executeconcurrent! y.A process needs certain resources---including CPU time, me111ory, files,and-I;o devices:::_:_ to accomplish its:task These iesources are e!tl1er given tothe process when it is created or- allocated to it while it is running. In additionto the various physical and logical resources that a process obtains when it iscreated, various initialization data (input) may be passed along. For example,consider a process whose function is to display the status of a file on the screenof a terminal. The process will be given as an input the name of the file and willexecute the appropriate instructions and system calls to obtain and displayon the terminal the desired information. When the process terminates, theoperating system will reclaim any reusable resources.l"Ve ~_111pl:t21size that a program by itselfis nota process; a program is a y_assive er~!~ty, likt:tl1e C()I1terltsof a fil(?storecl_m1 c!iskL~A.ThereasC_pr(Jce~~s_1s 21~1aCtive entity. A si-Dgl~::1hr:eaded proc~ss has on~_pr_ogra111 cou11!er s:eecifying thenexf1il~r:Uc_tiogt()_eX~ClJte. (Threads are covered in Chapter 4.) The -execi.rtioil.of such a process must be sequential. The CPU executes one instruction of theprocess after another, until the process completes. Further, at any time, oneinstruction at most is executed on behalf of the process. Thus, although twoprocesses may be associated with the same program, they are neverthelessconsidered two separate execution sequences. A multithreaded process hasmultiple program counters, each pointing to the next instruction to execute fora given thread.A process is the unit of work in a system. Such a system consists of acollection of processes, some of which are operating-system processes (thosethat execute system code) and the rest of which are user processes (those thatexecute user code). Al]Jheseprocesses canp()t~!ltially execute concurrently-_llY.IJ:lli}!p_l~)(_i!lg ()I'a sir1gle _C:Pl],for_~)(ample. - - - --- ----The operating system is responsible for the following activities in connectionwith process management:Scheduling processes and threads on the CPUsCreating and deleting both user and system processesSuspending and resuming processesProviding mechanisms for process synchronizationProviding mechanisms for process communicationWe discuss process-management techniques in Chapters 3 through 6.As we discussed in Section 1.2.2, the main memory is central to the operationof a modern computer system. Main memory is a large array of words or bytes,ranging in size from hundreds of thousands to billions. Each word or byte hasits own address. Main memory is a repository of quickly accessible data sharedby the CPU and I/0 devices. The central processor reads instructions from main 36. 1.81.8 25memory during the instruction-fetch cycle and both reads and writes data frommain memory during the data-fetch cycle (on a von Neumann architecture).As noted earlier, the main memory is generallythe only large storage devicethat the CPU is able to address and access directly. For example, for the CPU toprocess data from disk, those data mu.st first be transferred to main n"lemoryby CPU-generated I/0 calls. 
In the same way, instructions must be in memoryfor the CPU to execute them.For a program to be executed, it must be mapped to absolute addresses andloaded into memory. As the program executes, it accesses program instructionsand data from memory by generating these absolute addresses. Eventually,the program terminates, its memory space is declared available, and the nextprogram can be loaded and executed.To improve both the utilization of the CPU and the speed of the computer'sresponse to its users, general-purpose computers must keep several programsin memory, creating a need for memory management. Many different memorymanagementschemes are used. These schemes reflect various approaches, andthe effectiveness of any given algorithm depends on the situation. In selecting amemory-management scheme for a specific system, we must take into accountmany factors-especially the hardware design of the system. Each algorithmrequires its own hardware support.The operating system is responsible for the following activities in connectionwith memory management:Keeping track of which parts of memory are currently being used and bywhomDeciding which processes (or parts thereof) and data to move into and outof memoryAllocating and deallocating memory space as neededMemory-management techniques are discussed il1 Chapters 8 and 9.To make the computer system convenient for users, the operating systemprovides a uniform, logical view of information storage. The operating systemabstracts from the physical properties of its storage devices to define a logicalstorage unit, the file. The operating system maps files onto physical media andaccesses these files via the storage devices.1.8.1 File-System ManagementPile management is one of the most visible components of an operating system.Computers can store information on several different types of physical media.Magnetic disk, optical disk, and magnetic tape are the most common. Eachof these media has its own characteristics and physical organization. Eachmedium is controlled by a device, such as a disk drive or tape drive, thatalso has its own unique characteristics. These properties include access speed,capacity, data-transfer rate, and access method (sequential or randmn). 37. 26 Chapter 1A file is a collection of related information defined by its creator. Commonly,files represent programs (both source and object forms) and data. Data files maybe numeric, alphabetic, alphanumeric, or binary. 
Files may be free-form (for example, text files), or they may be formatted rigidly (for example, fixed fields). Clearly, the concept of a file is an extremely general one.

The operating system implements the abstract concept of a file by managing mass-storage media, such as tapes and disks, and the devices that control them. Also, files are normally organized into directories to make them easier to use. Finally, when multiple users have access to files, it may be desirable to control by whom and in what ways (for example, read, write, append) files may be accessed.

The operating system is responsible for the following activities in connection with file management:

Creating and deleting files
Creating and deleting directories to organize files
Supporting primitives for manipulating files and directories
Mapping files onto secondary storage
Backing up files on stable (nonvolatile) storage media

File-management techniques are discussed in Chapters 10 and 11.

1.8.2 Mass-Storage Management

As we have already seen, because main memory is too small to accommodate all data and programs, and because the data that it holds are lost when power is lost, the computer system must provide secondary storage to back up main memory. Most modern computer systems use disks as the principal on-line storage medium for both programs and data. Most programs, including compilers, assemblers, word processors, editors, and formatters, are stored on a disk until loaded into memory and then use the disk as both the source and destination of their processing. Hence, the proper management of disk storage is of central importance to a computer system. The operating system is responsible for the following activities in connection with disk management:

Free-space management
Storage allocation
Disk scheduling

Because secondary storage is used frequently, it must be used efficiently. The entire speed of operation of a computer may hinge on the speeds of the disk subsystem and the algorithms that manipulate that subsystem.

There are, however, many uses for storage that is slower and lower in cost (and sometimes of higher capacity) than secondary storage. Backups of disk data, seldom-used data, and long-term archival storage are some examples. Magnetic tape drives and their tapes and CD and DVD drives and platters are typical tertiary-storage devices. The media (tapes and optical platters) vary between WORM (write-once, read-many-times) and RW (read-write) formats.

Tertiary storage is not crucial to system performance, but it still must be managed. Some operating systems take on this task, while others leave tertiary-storage management to application programs. Some of the functions that operating systems can provide include mounting and unmounting media in devices, allocating and freeing the devices for exclusive use by processes, and migrating data from secondary to tertiary storage.

Techniques for secondary and tertiary storage management are discussed in Chapter 12.

1.8.3 Caching

Caching is an important principle of computer systems. Information is normally kept in some storage system (such as main memory). As it is used, it is copied into a faster storage system (the cache) on a temporary basis. When we need a particular piece of information, we first check whether it is in the cache. If it is, we use the information directly from the cache; if it is not, we use the information from the source, putting a copy in the cache under the assumption that we will need it again soon.
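To make the check-the-cache-first policy concrete, the following C sketch (our illustration) puts a small direct-mapped table in front of a slower lookup. The table size, the key-to-slot mapping, and the slow_fetch() stand-in are all invented for the example.

#include <stdio.h>

#define CACHE_SLOTS 64

struct slot {
    int valid;      /* has this slot been filled yet? */
    int key;
    int value;
};

static struct slot cache[CACHE_SLOTS];

/* Stand-in for the slower storage level (main memory, disk, ...). */
static int slow_fetch(int key)
{
    return key * key;   /* pretend this computation is expensive */
}

/* Check the cache first; on a miss, fetch from the source and keep a copy
   under the assumption that the item will be needed again soon. */
static int lookup(int key)
{
    struct slot *s = &cache[key % CACHE_SLOTS];   /* direct mapping */
    if (s->valid && s->key == key)
        return s->value;                          /* cache hit */
    s->valid = 1;                                 /* miss: replace occupant */
    s->key = key;
    s->value = slow_fetch(key);
    return s->value;
}

int main(void)
{
    printf("%d %d\n", lookup(7), lookup(7));      /* miss, then hit */
    return 0;
}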
In addition, internal programmable registers, such as index registers, provide a high-speed cache for main memory. The programmer (or compiler) implements the register-allocation and register-replacement algorithms to decide which information to keep in registers and which to keep in main memory. There are also caches that are implemented totally in hardware. For instance, most systems have an instruction cache to hold the instructions expected to be executed next. Without this cache, the CPU would have to wait several cycles while an instruction was fetched from main memory. For similar reasons, most systems have one or more high-speed data caches in the memory hierarchy. We are not concerned with these hardware-only caches in this text, since they are outside the control of the operating system.

Because caches have limited size, cache management is an important design problem. Careful selection of the cache size and of a replacement policy can result in greatly increased performance. Figure 1.11 compares storage performance in large workstations and small servers, tabulating characteristics such as the typical size of each level of the storage hierarchy. Various replacement algorithms for software-controlled caches are discussed in Chapter 9.

5.3.4 Round-Robin Scheduling

(Figure 5.5, How turnaround time varies with the time quantum, plots average turnaround time against time quanta of 1 through 7.)

Turnaround time also depends on the size of the time quantum. As we can see from Figure 5.5, the average turnaround time of a set of processes does not necessarily improve as the time-quantum size increases. In general, the average turnaround time can be improved if most processes finish their next CPU burst in a single time quantum. For example, given three processes of 10 time units each and a quantum of 1 time unit, the processes complete at times 28, 29, and 30, so the average turnaround time is 29. If the time quantum is 10, however, the processes complete at times 10, 20, and 30, and the average turnaround time drops to 20. If context-switch time is added in, the average turnaround time increases even more for a smaller time quantum, since more context switches are required.

Although the time quantum should be large compared with the context-switch time, it should not be too large. If the time quantum is too large, RR scheduling degenerates to an FCFS policy. A rule of thumb is that 80 percent of the CPU bursts should be shorter than the time quantum.

5.3.5 Multilevel Queue Scheduling

Another class of scheduling algorithms has been created for situations in which processes are easily classified into different groups. For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs. In addition, foreground processes may have priority (externally defined) over background processes.

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues (Figure 5.6). The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm. For example, separate queues might be used for foreground and background processes. The foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm.

(Figure 5.6, Multilevel queue scheduling, shows queues ordered from highest priority, system processes, through interactive, interactive editing, and batch processes, down to lowest priority, student processes.)

In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling.
For example, the foreground queue may have absolute priority over the background queue.

Let's look at an example of a multilevel queue scheduling algorithm with five queues, listed below in order of priority:

System processes
Interactive processes
Interactive editing processes
Batch processes
Student processes

Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.

Another possibility is to time-slice among the queues. Here, each queue gets a certain portion of the CPU time, which it can then schedule among its various processes. For instance, in the foreground-background queue example, the foreground queue can be given 80 percent of the CPU time for RR scheduling among its processes, whereas the background queue receives 20 percent of the CPU to give to its processes on an FCFS basis.

5.3.6 Multilevel Feedback Queue Scheduling

Normally, when the multilevel queue scheduling algorithm is used, processes are permanently assigned to a queue when they enter the system. If there are separate queues for foreground and background processes, for example, processes do not move from one queue to the other, since processes do not change their foreground or background nature. This setup has the advantage of low scheduling overhead, but it is inflexible.

The multilevel feedback queue scheduling algorithm, in contrast, allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.

For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2 (Figure 5.7). The scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in queue 2 will only be executed if queues 0 and 1 are empty. A process that arrives for queue 1 will preempt a process in queue 2. A process in queue 1 will in turn be preempted by a process arriving for queue 0.

A process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and is put into queue 2. Processes in queue 2 are run on an FCFS basis but are run only when queues 0 and 1 are empty.

This scheduling algorithm gives highest priority to any process with a CPU burst of 8 milliseconds or less. Such a process will quickly get the CPU, finish its CPU burst, and go off to its next I/O burst. Processes that need more than 8 but less than 24 milliseconds are also served quickly, although with lower priority than shorter processes. Long processes automatically sink to queue 2 and are served in FCFS order with any CPU cycles left over from queues 0 and 1; the demotion rule is sketched in code below.

(Figure 5.7, Multilevel feedback queues, shows the three queues: quantum 8, quantum 16, and FCFS.)
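The following C fragment is our sketch of the demotion rule just described, not code from the book: a process that consumes its entire quantum in queue 0 or queue 1 drops one level, while queue 2 is FCFS. The process structure, the used_full_quantum flag, and the omitted ready-queue operations are assumptions made for the illustration.

#include <stdbool.h>
#include <stdio.h>

#define LEVELS 3

/* Hypothetical process descriptor; only what the demotion rule needs. */
struct process {
    int level;                        /* current queue: 0, 1, or 2 */
};

/* Quantum per level: 8 ms, 16 ms, and 0 meaning FCFS (run to completion). */
static const int quantum_ms[LEVELS] = { 8, 16, 0 };

/* Called by the dispatcher when a process's time slice ends. */
static void on_slice_end(struct process *p, bool used_full_quantum, bool finished)
{
    if (finished)
        return;                       /* CPU burst done; nothing to demote */

    /* Exhausted its quantum in queue 0 or 1: move down one level. */
    if (used_full_quantum && p->level < LEVELS - 1)
        p->level++;

    /* The process would now be re-enqueued at the tail of queue p->level;
       the real ready-queue operation is omitted from this sketch. */
}

int main(void)
{
    struct process p = { 0 };         /* new arrivals enter queue 0 */

    on_slice_end(&p, true, false);    /* used all 8 ms: demoted to queue 1 */
    on_slice_end(&p, true, false);    /* used all 16 ms: demoted to queue 2 */

    printf("level %d, quantum %d ms (0 = FCFS)\n", p.level, quantum_ms[p.level]);
    return 0;
}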
In general, a multilevel feedback queue scheduler is defined by the following parameters:

The number of queues
The scheduling algorithm for each queue
The method used to determine when to upgrade a process to a higher-priority queue
The method used to determine when to demote a process to a lower-priority queue
The method used to determine which queue a process will enter when that process needs service

The definition of a multilevel feedback queue scheduler makes it the most general CPU-scheduling algorithm. It can be configured to match a specific system under design. Unfortunately, it is also the most complex algorithm, since defining the best scheduler requires some means by which to select values for all the parameters.

5.4 Thread Scheduling

In Chapter 4, we introduced threads to the process model, distinguishing between user-level and kernel-level threads. On operating systems that support them, it is kernel-level threads, not processes, that are being scheduled by the operating system. User-level threads are managed by a thread library, and the kernel is unaware of them. To run on a CPU, user-level threads must ultimately be mapped to an associated kernel-level thread, although this mapping may be indirect and may use a lightweight process (LWP). In this section, we explore scheduling issues involving user-level and kernel-level threads and offer specific examples of scheduling for Pthreads.

5.4.1 Contention Scope

One distinction between user-level and kernel-level threads lies in how they are scheduled. On systems implementing the many-to-one (Section 4.2.1) and many-to-many (Section 4.2.3) models, the thread library schedules user-level threads to run on an available LWP, a scheme known as process-contention scope (PCS), since competition for the CPU takes place among threads belonging to the same process. When we say the thread library schedules user threads onto available LWPs, we do not mean that the thread is actually running on a CPU; this would require the operating system to schedule the kernel thread onto a physical CPU. To decide which kernel thread to schedule onto a CPU, the kernel uses system-contention scope (SCS). Competition for the CPU with SCS scheduling takes place among all threads in the system. Systems using the one-to-one model (Section 4.2.2), such as Windows XP, Solaris, and Linux, schedule threads using only SCS.

Typically, PCS is done according to priority: the scheduler selects the runnable thread with the highest priority to run. User-level thread priorities are set by the programmer and are not adjusted by the thread library, although some thread libraries may allow the programmer to change the priority of a thread. It is important to note that PCS will typically preempt the thread currently running in favor of a higher-priority thread; however, there is no guarantee of time slicing (Section 5.3.4) among threads of equal priority.

5.4.2 Pthread Scheduling

We provided a sample POSIX Pthread program in Section 4.3.1, along with an introduction to thread creation with Pthreads. Now, we highlight the POSIX Pthread API that allows specifying either PCS or SCS during thread creation. Pthreads identifies the following contention scope values:

PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling.
PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.

On systems implementing the many-to-many model, the PTHREAD_SCOPE_PROCESS policy schedules user-level threads onto available LWPs. The number of LWPs is maintained by the thread library, perhaps using scheduler activations (Section 4.4.6).
The PTHREAD_SCOPE_SYSTEM scheduling policy will create and bind an LWP for each user-level thread on many-to-many systems, effectively mapping threads using the one-to-one policy.

The Pthread API provides two functions for getting and setting the contention scope policy:

    pthread_attr_setscope(pthread_attr_t *attr, int scope)
    pthread_attr_getscope(pthread_attr_t *attr, int *scope)

The first parameter for both functions contains a pointer to the attribute set for the thread. The second parameter for the pthread_attr_setscope() function is passed either the PTHREAD_SCOPE_SYSTEM or the PTHREAD_SCOPE_PROCESS value, indicating how the contention scope is to be set. In the case of pthread_attr_getscope(), this second parameter contains a pointer to an int value that is set to the current value of the contention scope. If an error occurs, each of these functions returns a non-zero value.

In Figure 5.8, we illustrate a Pthread scheduling API. The program first determines the existing contention scope and then sets it to PTHREAD_SCOPE_SYSTEM. It then creates five separate threads that will run using the SCS scheduling policy. Note that on some systems, only certain contention scope values are allowed. For example, Linux and Mac OS X systems allow only PTHREAD_SCOPE_SYSTEM.

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 5

    void *runner(void *param);   /* each thread begins control in this function */

    int main(int argc, char *argv[])
    {
        int i, scope;
        pthread_t tid[NUM_THREADS];
        pthread_attr_t attr;

        /* get the default attributes */
        pthread_attr_init(&attr);

        /* first inquire on the current scope */
        if (pthread_attr_getscope(&attr, &scope) != 0)
            fprintf(stderr, "Unable to get scheduling scope\n");
        else {
            if (scope == PTHREAD_SCOPE_PROCESS)
                printf("PTHREAD_SCOPE_PROCESS");
            else if (scope == PTHREAD_SCOPE_SYSTEM)
                printf("PTHREAD_SCOPE_SYSTEM");
            else
                fprintf(stderr, "Illegal scope value.\n");
        }

        /* set the scheduling algorithm to PCS or SCS */
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

        /* create the threads */
        for (i = 0; i < NUM_THREADS; i++)
            pthread_create(&tid[i], &attr, runner, NULL);

        /* now join on each thread */
        for (i = 0; i < NUM_THREADS; i++)
            pthread_join(tid[i], NULL);

        return 0;
    }

    /* Each thread will begin control in this function */
    void *runner(void *param)
    {
        /* do some work ... */

        pthread_exit(0);
    }

Figure 5.8 Pthread scheduling API.
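Assuming the listing in Figure 5.8 is saved in a file named fig5_8.c (a name chosen here purely for illustration), it can typically be compiled and run on a Pthreads system such as Linux with:

    gcc -pthread fig5_8.c -o fig5_8
    ./fig5_8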
5.5 Multiple-Processor Scheduling

Our discussion thus far has focused on the problems of scheduling the CPU in a system with a single processor. If multiple CPUs are available, load sharing becomes possible; however, the scheduling problem becomes correspondingly more complex. Many possibilities have been tried; and as we saw with single-processor CPU scheduling, there is no one best solution. Here, we discuss several concerns in multiprocessor scheduling. We concentrate on systems in which the processors are identical (homogeneous) in terms of their functionality; we can then use any available processor to run any process in the queue. (Note, however, that even with homogeneous multiprocessors, there are sometimes limitations on scheduling. Consider a system with an I/O device attached to a private bus of one processor. Processes that wish to use that device must be scheduled to run on that processor.)

5.5.1 Approaches to Multiple-Processor Scheduling

One approach to CPU scheduling in a multiprocessor system has all scheduling decisions, I/O processing, and other system activities handled by a single processor, the master server. The other processors execute only user code. This asymmetric multiprocessing is simple because only one processor accesses the system data structures, reducing the need for data sharing.

A second approach uses symmetric multiprocessing (SMP), where each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Regardless, scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute. As we shall see in Chapter 6, if we have multiple processors trying to access and update a common data structure, the scheduler must be programmed carefully. We must ensure that two processors do not choose the same process and that processes are not lost from the queue. Virtually all modern operating systems support SMP, including Windows XP, Windows 2000, Solaris, Linux, and Mac OS X. In the remainder of this section, we discuss issues concerning SMP systems.

5.5.2 Processor Affinity

Consider what happens to cache memory when a process has been running on a specific processor. The data most recently accessed by the process populate the cache for the processor; and as a result, successive memory accesses by the process are often satisfied in cache memory. Now consider what happens if the process migrates to another processor. The contents of cache memory must be invalidated for the first processor, and the cache for the second processor must be repopulated. Because of the high cost of invalidating and repopulating caches, most SMP systems try to avoid migration of processes from one processor to another and instead attempt to keep a process running on the same processor. This is known as processor affinity; that is, a process has an affinity for the processor on which it is currently running.

Processor affinity takes several forms. When an operating system has a policy of attempting to keep a process running on the same processor, but not guaranteeing that it will do so, we have a situation known as soft affinity. Here, it is possible for a process to migrate between processors. Some systems, such as Linux, also provide system calls that support hard affinity, thereby allowing a process to specify that it is not to migrate to other processors. Solaris allows processes to be assigned to processor sets, limiting which processes can run on which CPUs. It also implements soft affinity.
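On Linux, the hard-affinity support just mentioned is exposed through the sched_setaffinity() system call. The fragment below is a minimal sketch of its use; the choice of CPU 0 is arbitrary, and error handling is reduced to a single perror() call.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;

        CPU_ZERO(&set);        /* start with an empty CPU mask */
        CPU_SET(0, &set);      /* allow only CPU 0 */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0)
            perror("sched_setaffinity");

        /* from here on, the kernel will not migrate this process */
        return 0;
    }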
Rather, the "solidlines" between sections of an operating system are frequently only "dottedlines," with algorithms creating connections in ways aimed at optimizingperformance and reliability.5.5.3 Load BalancingOn SMP systems, it is important to keep the workload balanced among allprocessors to fully utilize the benefits of having more than one processor.Otherwise, one or more processors may sit idle while other processors havehigh workloads, along with lists of processes awaiting the CPU. Load balancingattempts to keep the workload evenly distributed across all processors inan SMP system. It is important to note that load balancing is typically onlynecessary on systems where each processor has its own private queue of eligibleprocesses to execute. On systems with a common run queue, load balancingis often unnecessary, because once a processor becomes idle, it immediatelyextracts a rmmable process from the common run queue. It is also important tonote, howeve1~ that in most contemporary operating systems supporting SMP,each processor does have a private queue of eligible processes.There are two general approaches to load balancing: push migration andpull migration. With push migration, a specific task periodically checks theload on each processor and -if it finds an imbalance-evenly distributes theload by moving (or pushing) processes from overloaded to idle or less-busyprocessors. Pull migration occurs when an idle processor pulls a waiting taskfrom a busy processor. Push and pull migration need not be mutually exclusiveand are in fact often implemented in parallel on load-balancing systems. Forexample, the Linux scheduler (described in Section 5.6.3) and the ULE scheduler 213. 204 Chapter 5available for FreeBSD systems implement both techniqL1es. Linux runs its loadbalancingalgorithm every 200 milliseconds (push migration) or whenever therun queue for a processor is empty (pull migration).Interestingly, load balancing often counteracts the benefits of processoraffinity, discussed in Section 5.5.2. That is, the benefit of keeping a processrunning on the same processor is that the process can take advantage of its databeing in that processor's cache memory. Either pulling or pushing a processfrom one processor to another invalidates this benefit. As is often the casein systems engineering, there is no absolute rule concerning what policy isbest. Thus, in some systems, an idle processor always pulls a process froma non-idle processor; and in other systems, processes are moved only if theimbalance exceeds a certain threshold.5.5.4 Multicore ProcessorsTraditionally, SMP systems have allowed several threads to run concurrently byproviding multiple physical processors. However, a recent trend in computerhardware has been to place multiple processor cores on the same physical chip,resulting in a . Each core has a register set to maintain itsarchitectural state and appears to the operating system to be a separatephysical processor. SMP systems that use multicore processors are faster andconsume less power than systems in which each processor has its own physicalchip.Multicore processors may complicate scheduling issues. Let's consider howthis can happen. Researchers have discovered that when a processor accessesmemory, it spends a significant amount of time waiting for the data to becomeavailable. This situation, known as a may occur for variousreasons, such as a cache miss (accessing data that is not in cache memory).Figure 5.10 illustrates a memory stall. 
5.5.4 Multicore Processors

Traditionally, SMP systems have allowed several threads to run concurrently by providing multiple physical processors. However, a recent trend in computer hardware has been to place multiple processor cores on the same physical chip, resulting in a multicore processor. Each core has a register set to maintain its architectural state and appears to the operating system to be a separate physical processor. SMP systems that use multicore processors are faster and consume less power than systems in which each processor has its own physical chip.

Multicore processors may complicate scheduling issues. Let's consider how this can happen. Researchers have discovered that when a processor accesses memory, it spends a significant amount of time waiting for the data to become available. This situation, known as a memory stall, may occur for various reasons, such as a cache miss (accessing data that is not in cache memory). Figure 5.10 illustrates a memory stall. In this scenario, the processor can spend up to 50 percent of its time waiting for data to become available from memory.

Figure 5.10 Memory stall.

To remedy this situation, many recent hardware designs have implemented multithreaded processor cores in which two (or more) hardware threads are assigned to each core. That way, if one thread stalls while waiting for memory, the core can switch to another thread. Figure 5.11 illustrates a dual-threaded processor core on which the execution of thread 0 and the execution of thread 1 are interleaved. From an operating-system perspective, each hardware thread appears as a logical processor that is available to run a software thread. Thus, on a dual-threaded, dual-core system, four logical processors are presented to the operating system. The UltraSPARC T1 CPU has eight cores per chip and four hardware threads per core; from the perspective of the operating system, there appear to be 32 logical processors.

Figure 5.11 Multithreaded multicore system.

In general, there are two ways to multithread a processor: coarse-grained and fine-grained multithreading. With coarse-grained multithreading, a thread executes on a processor until a long-latency event such as a memory stall occurs. Because of the delay caused by the long-latency event, the processor must switch to another thread to begin execution. However, the cost of switching between threads is high, as the instruction pipeline must be flushed before the other thread can begin execution on the processor core. Once this new thread begins execution, it begins filling the pipeline with its instructions. Fine-grained (or interleaved) multithreading switches between threads at a much finer level of granularity, typically at the boundary of an instruction cycle. However, the architectural design of fine-grained systems includes logic for thread switching. As a result, the cost of switching between threads is small.

Notice that a multithreaded multicore processor actually requires two different levels of scheduling. On one level are the scheduling decisions that must be made by the operating system as it chooses which software thread to run on each hardware thread (logical processor). For this level of scheduling, the operating system may choose any scheduling algorithm, such as those described in Section 5.3. A second level of scheduling specifies how each core decides which hardware thread to run. There are several strategies to adopt in this situation. The UltraSPARC T1, mentioned earlier, uses a simple round-robin algorithm to schedule the four hardware threads to each core. Another example, the Intel Itanium, is a dual-core processor with two hardware-managed threads per core. Assigned to each hardware thread is a dynamic urgency value ranging from 0 to 7, with 0 representing the lowest urgency and 7 the highest. The Itanium identifies five different events that may trigger a thread switch. When one of these events occurs, the thread-switching logic compares the urgency of the two threads and selects the thread with the highest urgency value to execute on the processor core.
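The first level of scheduling is just ordinary CPU scheduling, so only the second level needs illustration, and even a simple policy suffices. The toy function below is our illustration, loosely in the spirit of the UltraSPARC T1's round-robin policy described above (not vendor code): it picks the next ready hardware thread on a core by scanning forward from the current one.

    #include <stdbool.h>
    #include <stdio.h>

    #define HW_THREADS 4   /* hardware threads per core, as on the UltraSPARC T1 */

    /* Round-robin choice of the next ready hardware thread on one core:
       scan forward from the current thread, wrapping around.  Returns -1
       if no hardware thread is ready. */
    int next_hw_thread(int current, const bool ready[HW_THREADS]) {
        for (int i = 1; i <= HW_THREADS; i++) {
            int candidate = (current + i) % HW_THREADS;
            if (ready[candidate])
                return candidate;
        }
        return -1;
    }

    int main(void) {
        bool ready[HW_THREADS] = { true, false, true, true };
        printf("next after 2: %d\n", next_hw_thread(2, ready)); /* prints 3 */
        return 0;
    }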
5.5.5 Virtualization and Scheduling

A system with virtualization, even a single-CPU system, frequently acts like a multiprocessor system. The virtualization software presents one or more virtual CPUs to each of the virtual machines running on the system and then schedules the use of the physical CPUs among the virtual machines. The significant variations between virtualization technologies make it difficult to summarize the effect of virtualization on scheduling (see Section 2.8). In general, though, most virtualized environments have one host operating system and many guest operating systems. The host operating system creates and manages the virtual machines, and each virtual machine has a guest operating system installed and applications running within that guest.

[...]

a. What is the algorithm that results from β > α > 0?

b. What is the algorithm that results from α < β < 0?

5.17 Suppose that the following processes arrive for execution at the times indicated. Each process will run for the amount of time listed. In answering the questions, use nonpreemptive scheduling, and base all decisions on the information you have at the time the decision must be made.

    Process    Arrival Time    Burst Time
    P1         0.0             8
    P2         0.4             4
    P3         1.0             1

a. What is the average turnaround time for these processes with the FCFS scheduling algorithm?

b. What is the average turnaround time for these processes with the SJF scheduling algorithm?

c. The SJF algorithm is supposed to improve performance, but notice that we chose to run process P1 at time 0 because we did not know that two shorter processes would arrive soon. Compute what the average turnaround time will be if the CPU is left idle for the first 1 unit and then SJF scheduling is used. Remember that processes P1 and P2 are waiting during this idle time, so their waiting time may increase. This algorithm could be known as future-knowledge scheduling.

Bibliographical Notes

Feedback queues were originally implemented on the CTSS system described in Corbato et al. [1962]. This feedback queue scheduling system was analyzed by Schrage [1967]. The preemptive priority scheduling algorithm of Exercise 5.16 was suggested by Kleinrock [1975].

Anderson et al. [1989], Lewis and Berg [1998], and Philbin et al. [1996] discuss thread scheduling. Multicore scheduling is examined in McNairy and Bhatia [2005] and Kongetira et al. [2005].

Scheduling techniques that take into account information regarding process execution times from previous runs are described in Fisher [1981], Hall et al. [1996], and Lowney et al. [1993].

Fair-share schedulers are covered by Henry [1984], Woodside [1986], and Kay and Lauder [1988].

Scheduling policies used in the UNIX System V operating system are described by Bach [1987]; those for UNIX FreeBSD 5.2 are presented by McKusick and Neville-Neil [2005]; and those for the Mach operating system are discussed by Black [1990]. Love [2005] covers scheduling in Linux. Details of the ULE scheduler can be found in Roberson [2003]. Solaris scheduling is described by Mauro and McDougall [2007]. Solomon [1998], Solomon and Russinovich [2000], and Russinovich and Solomon [2005] discuss scheduling in Windows internals. Butenhof [1997] and Lewis and Berg [1998] describe scheduling in Pthreads systems. Siddha et al. [2007] discuss scheduling challenges on multicore systems.

Part Three

Chapter 6

A cooperating process is one that can affect or be affected by other processes executing in the system. Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through files or messages. The former case is achieved through the use of threads, discussed in Chapter 4. Concurrent access to shared data may result in data inconsistency, however.
In this chapter, we discuss various mechanisms to ensure the orderly execution of cooperating processes that share a logical address space, so that data consistency is maintained.

CHAPTER OBJECTIVES

- To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data.
- To present both software and hardware solutions of the critical-section problem.
- To introduce the concept of an atomic transaction and describe mechanisms to ensure atomicity.

6.1 Background

In Chapter 3, we developed a model of a system consisting of cooperating sequential processes or threads, all running asynchronously and possibly sharing data. We illustrated this model with the producer-consumer problem, which is representative of operating systems. Specifically, in Section 3.4.1, we described how a bounded buffer could be used to enable processes to share memory.

Let's return to our consideration of the bounded buffer. As we pointed out, our original solution allowed at most BUFFER_SIZE - 1 items in the buffer at the same time. Suppose we want to modify the algorithm to remedy this deficiency. One possibility is to add an integer variable counter, initialized to 0. counter is incremented every time we add a new item to the buffer and is decremented every time we remove one item from the buffer. The code for the producer process can be modified as follows:

    while (true) {
        /* produce an item in nextProduced */

        while (counter == BUFFER_SIZE)
            ; /* do nothing */

        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

The code for the consumer process can be modified as follows:

    while (true) {
        while (counter == 0)
            ; /* do nothing */

        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;

        /* consume the item in nextConsumed */
    }

Although both the producer and consumer routines shown above are correct separately, they may not function correctly when executed concurrently. As an illustration, suppose that the value of the variable counter is currently 5 and that the producer and consumer processes execute the statements "counter++" and "counter--" concurrently. Following the execution of these two statements, the value of the variable counter may be 4, 5, or 6! The only correct result, though, is counter == 5, which is generated correctly if the producer and consumer execute separately.

We can show that the value of counter may be incorrect as follows. Note that the statement "counter++" may be implemented in machine language (on a typical machine) as

    register1 = counter
    register1 = register1 + 1
    counter = register1

where register1 is one of the local CPU registers. Similarly, the statement "counter--" is implemented as follows:

    register2 = counter
    register2 = register2 - 1
    counter = register2

where again register2 is one of the local CPU registers. Even though register1 and register2 may be the same physical register (an accumulator, say), remember that the contents of this register will be saved and restored by the interrupt handler (Section 1.2.3).
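The race is easy to reproduce. The following pthreads program is our demonstration, not the text's: it runs the increment and decrement loops in two concurrent threads with no synchronization, so the final value of counter should be 0 but rarely is, and it changes from run to run. The counter is declared volatile only so the compiler does not optimize the loops away.

    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    volatile int counter = 0;

    void *producer(void *arg) {
        for (int i = 0; i < ITERATIONS; i++)
            counter++;               /* load, add, store: not atomic */
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < ITERATIONS; i++)
            counter--;
        return NULL;
    }

    int main(void) {
        pthread_t p, c;

        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);

        /* "should" be 0, but on most machines it rarely is */
        printf("counter = %d\n", counter);
        return 0;
    }

Compiling with gcc -pthread and running the program a few times makes the nondeterminism visible.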
The concurrent execution of "counter++" and "counter--" is equivalent to a sequential execution in which the lower-level statements presented previously are interleaved in some arbitrary order (but the order within each high-level statement is preserved). One such interleaving is

    T0: producer executes register1 = counter        {register1 = 5}
    T1: producer executes register1 = register1 + 1  {register1 = 6}
    T2: consumer executes register2 = counter        {register2 = 5}
    T3: consumer executes register2 = register2 - 1  {register2 = 4}
    T4: producer executes counter = register1        {counter = 6}
    T5: consumer executes counter = register2        {counter = 4}

Notice that we have arrived at the incorrect state "counter == 4", indicating that four buffers are full, when, in fact, five buffers are full. If we reversed the order of the statements at T4 and T5, we would arrive at the incorrect state "counter == 6".

We would arrive at this incorrect state because we allowed both processes to manipulate the variable counter concurrently. A situation like this, where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place, is called a race condition. To guard against the race condition above, we need to ensure that only one process at a time can be manipulating the variable counter. To make such a guarantee, we require that the processes be synchronized in some way.

Situations such as the one just described occur frequently in operating systems as different parts of the system manipulate resources. Furthermore, with the growth of multicore systems, there is an increased emphasis on developing multithreaded applications wherein several threads, which are quite possibly sharing data, are running in parallel on different processing cores. Clearly, we want any changes that result from such activities not to interfere with one another. Because of the importance of this issue, a major portion of this chapter is concerned with process synchronization and coordination amongst cooperating processes.

6.2 The Critical-Section Problem

Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is to be allowed to execute in its critical section. That is, no two processes are executing in their critical sections at the same time. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section. The general structure of a typical process Pi is shown in Figure 6.1. The entry section and exit section are enclosed in boxes to highlight these important segments of code.

    do {
        [ entry section ]

            critical section

        [ exit section ]

            remainder section

    } while (TRUE);

Figure 6.1 General structure of a typical process Pi.

A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

3. Bounded waiting.
There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

We assume that each process is executing at a nonzero speed. However, we can make no assumption concerning the relative speed of the n processes.

At a given point in time, many kernel-mode processes may be active in the operating system. As a result, the code implementing an operating system (kernel code) is subject to several possible race conditions. Consider as an example a kernel data structure that maintains a list of all open files in the system. This list must be modified when a new file is opened or closed (adding the file to the list or removing it from the list). If two processes were to open files simultaneously, the separate updates to this list could result in a race condition. Other kernel data structures that are prone to possible race conditions include structures for maintaining memory allocation, for maintaining process lists, and for interrupt handling. It is up to kernel developers to ensure that the operating system is free from such race conditions.

Two general approaches are used to handle critical sections in operating systems: (1) preemptive kernels and (2) nonpreemptive kernels. A preemptive kernel allows a process to be preempted while it is running in kernel mode. A nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU. Obviously, a nonpreemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time. We cannot say the same about preemptive kernels, so they must be carefully designed to ensure that shared kernel data are free from race conditions. Preemptive kernels are especially difficult to design for SMP architectures, since in these environments it is possible for two kernel-mode processes to run simultaneously on different processors.

Why, then, would anyone favor a preemptive kernel over a nonpreemptive one? A preemptive kernel is more suitable for real-time programming, as it will allow a real-time process to preempt a process currently running in the kernel. Furthermore, a preemptive kernel may be more responsive, since there is less risk that a kernel-mode process will run for an arbitrarily long period before relinquishing the processor to waiting processes. Of course, this effect can be minimized by designing kernel code that does not behave in this way. Later in this chapter, we explore how various operating systems manage preemption within the kernel.

6.3 Peterson's Solution

Next, we illustrate a classic software-based solution to the critical-section problem known as Peterson's solution. Because of the way modern computer architectures perform basic machine-language instructions, such as load and store, there are no guarantees that Peterson's solution will work correctly on such architectures. However, we present the solution because it provides a good algorithmic description of solving the critical-section problem and illustrates some of the complexities involved in designing software that addresses the requirements of mutual exclusion, progress, and bounded waiting.

Peterson's solution is restricted to two processes that alternate execution between their critical sections and remainder sections. The processes are numbered P0 and P1.
For convenience, when presenting Pi, we use Pj to denote the other process; that is, j equals 1 - i.

Peterson's solution requires the two processes to share two data items:

    int turn;
    boolean flag[2];

The variable turn indicates whose turn it is to enter its critical section. That is, if turn == i, then process Pi is allowed to execute in its critical section. The flag array is used to indicate if a process is ready to enter its critical section. For example, if flag[i] is true, this value indicates that Pi is ready to enter its critical section. With an explanation of these data structures complete, we are now ready to describe the algorithm shown in Figure 6.2.

    do {
        flag[i] = TRUE;
        turn = j;
        while (flag[j] && turn == j)
            ;

            critical section

        flag[i] = FALSE;

            remainder section

    } while (TRUE);

Figure 6.2 The structure of process Pi in Peterson's solution.

To enter the critical section, process Pi first sets flag[i] to be true and then sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section, it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur but will be overwritten immediately. The eventual value of turn determines which of the two processes is allowed to enter its critical section first.

We now prove that this solution is correct. We need to show that:

1. Mutual exclusion is preserved.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.

To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also note that, if both processes can be executing in their critical sections at the same time, then flag[0] == flag[1] == true. These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1 but cannot be both. Hence, one of the processes, say Pj, must have successfully executed the while statement, whereas Pi had to execute at least one additional statement ("turn == j"). However, at that time, flag[j] == true and turn == j, and this condition will persist as long as Pj is in its critical section; as a result, mutual exclusion is preserved.

To prove properties 2 and 3, we note that a process Pi can be prevented from entering the critical section only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this loop is the only one possible. If Pj is not ready to enter the critical section, then flag[j] == false, and Pi can enter its critical section. If Pj has set flag[j] to true and is also executing in its while statement, then either turn == i or turn == j. If turn == i, then Pi will enter the critical section. If turn == j, then Pj will enter the critical section. However, once Pj exits its critical section, it will reset flag[j] to false, allowing Pi to enter its critical section. If Pj resets flag[j] to true, it must also set turn to i. Thus, since Pi does not change the value of the variable turn while executing the while statement, Pi will enter the critical section (progress) after at most one entry by Pj (bounded waiting).
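To experiment with the algorithm, Figure 6.2 can be fleshed out into the runnable program below. This is our illustration, not the book's code: C11 sequentially consistent atomics stand in for the figure's plain variables, supplying exactly the ordering guarantees that, as noted above, ordinary loads and stores on modern architectures do not, and the names worker and shared are invented for the example.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Peterson's solution for two threads, using C11 sequentially
       consistent atomics to provide the ordering the algorithm assumes. */
    atomic_bool flag[2];
    atomic_int  turn;
    int shared = 0;                 /* protected by the critical section */

    void *worker(void *arg) {
        int i = (int)(long)arg;     /* this thread's index: 0 or 1 */
        int j = 1 - i;              /* the other thread */

        for (int k = 0; k < 100000; k++) {
            atomic_store(&flag[i], true);     /* I want to enter */
            atomic_store(&turn, j);           /* but you go first */
            while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
                ;                             /* busy wait */

            shared++;                         /* critical section */

            atomic_store(&flag[i], false);    /* exit section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;

        pthread_create(&t0, NULL, worker, (void *)0L);
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);

        printf("shared = %d (expected 200000)\n", shared);
        return 0;
    }

With the atomic operations in place, the two workers' increments of shared never interleave, and the program prints 200000.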
6.4 Synchronization Hardware

We have just described one software-based solution to the critical-section problem. However, as mentioned, software-based solutions such as Peterson's are not guaranteed to work on modern computer architectures. Instead, we can generally state that any solution to the critical-section problem requires a simple tool: a lock. Race conditions are prevented by requiring that critical regions be protected by locks. That is, a process must acquire a lock before entering a critical section; it releases the lock when it exits the critical section. This is illustrated in Figure 6.3.

    do {
        acquire lock

            critical section

        release lock

            remainder section

    } while (TRUE);

Figure 6.3 Solution to the critical-section problem using locks.

In the following discussions, we explore several more solutions to the critical-section problem using techniques ranging from hardware to software-based APIs available to application programmers. All these solutions are based on the premise of locking; however, as we shall see, the designs of such locks can be quite sophisticated.

We start by presenting some simple hardware instructions that are available on many systems and showing how they can be used effectively in solving the critical-section problem. Hardware features can make any programming task easier and improve system efficiency.

The critical-section problem could be solved simply in a uniprocessor environment if we could prevent interrupts from occurring while a shared variable was being modified. In this manner, we could be sure that the current sequence of instructions would be allowed to execute in order without preemption. No other instructions would be run, so no unexpected modifications could be made to the shared variable. This is often the approach taken by nonpreemptive kernels.

Unfortunately, this solution is not as feasible in a multiprocessor environment. Disabling interrupts on a multiprocessor can be time consuming, as the message is passed to all the processors. This message passing delays entry into each critical section, and system efficiency decreases. Also consider the effect on a

    boolean TestAndSet(boolean *target) {
        boolean rv = *target;
        *target = TRUE;
        return rv;
    }

Figure 6.4 The definition of the TestAndSet() instruction.

    do {
        while (TestAndSet(&lock))
            ; /* do nothing */

            /* critical section */

        lock = FALSE;

            /* remainder section */

    } while (TRUE);

Figure 6.5 Mutual-exclusion implementation with TestAndSet().