
UNIT-5: UNIX/LINUX FILE SYSTEM (MCA-II)

Chapter 18: The Linux System

• Linux History
• Design Principles
• Kernel Modules
• Process Management
• Scheduling
• Memory Management
• File Systems
• Input and Output
• Interprocess Communication
• Network Structure
• Security

Objectives

• To explore the history of the UNIX operating system from which Linux is derived and the principles upon which Linux's design is based
• To examine the Linux process model and illustrate how Linux schedules processes and provides interprocess communication
• To look at memory management in Linux
• To explore how Linux implements file systems and manages I/O devices

History

• Linux is a modern, free operating system based on UNIX standards
• First developed as a small but self-contained kernel in 1991 by Linus Torvalds, with the major design goal of UNIX compatibility, and released as open source
• Its history has been one of collaboration by many users from all around the world, corresponding almost exclusively over the Internet
• It has been designed to run efficiently and reliably on common PC hardware, but also runs on a variety of other platforms
• The core Linux operating system kernel is entirely original, but it can run much existing free UNIX software, resulting in an entire UNIX-compatible operating system free from proprietary code
• Many varying Linux distributions exist, each including the kernel, applications, and management tools

The Linux Kernel

• Version 0.01 (May 1991) had no networking, ran only on 80386-compatible Intel processors and on PC hardware, had extremely limited device-driver support, and supported only the Minix file system
• Linux 1.0 (March 1994) included these new features:
  - Support for UNIX's standard TCP/IP networking protocols
  - BSD-compatible socket interface for network programming
  - Device-driver support for running IP over an Ethernet
  - Enhanced file system
  - Support for a range of SCSI controllers for high-performance disk access
  - Extra hardware support
• Version 1.2 (March 1995) was the final PC-only Linux kernel
• Kernels with odd version numbers are development kernels; those with even numbers are production kernels

Kernel Modules

• Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel
• A kernel module may typically implement a device driver, a file system, or a networking protocol
• The module interface allows third parties to write and distribute, on their own terms, device drivers or file systems that could not be distributed under the GPL
• Kernel modules allow a Linux system to be set up with a standard, minimal kernel, without any extra device drivers built in
• Four components to Linux module support:
  - module-management system
  - module loader and unloader
  - driver-registration system
  - conflict-resolution mechanism

Module Management

• Supports loading modules into memory and letting them talk to the rest of the kernel
• Module loading is split into two separate sections:
  - Managing sections of module code in kernel memory
  - Handling symbols that modules are allowed to reference
• The module requestor manages loading requested, but currently unloaded, modules; it also regularly queries the kernel to see whether a dynamically loaded module is still in use, and will unload it when it is no longer actively needed

Driver Registration

• Allows modules to tell the rest of the kernel that a new driver has become available
• The kernel maintains dynamic tables of all known drivers, and provides a set of routines to allow drivers to be added to or removed from these tables at any time
• Registration tables include the following items:
  - Device drivers
  - File systems
  - Network protocols
  - Binary formats

Conflict Resolution

• A mechanism that allows different device drivers to reserve hardware resources and to protect those resources from accidental use by another driver
• The conflict-resolution module aims to:
  - Prevent modules from clashing over access to hardware resources
  - Prevent autoprobes from interfering with existing device drivers
• Resolving conflicts when multiple drivers try to access the same hardware:
  1. The kernel maintains a list of allocated hardware resources
  2. A driver reserves resources with the kernel database first
  3. The reservation request is rejected if the resource is not available

Process Management

• UNIX process management separates the creation of processes and the running of a new program into two distinct operations:
  - The fork() system call creates a new process
  - A new program is run after a call to exec()
• Under UNIX, a process encompasses all the information that the operating system must maintain to track the context of a single execution of a single program
• Under Linux, process properties fall into three groups: the process's identity, environment, and context
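
The two-step model can be seen in a small user-level sketch (a minimal example; the program launched, ls -l, is an arbitrary choice):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* step 1: create a new process */

        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {              /* child: step 2, run a new program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");               /* reached only if exec fails */
            _exit(EXIT_FAILURE);
        } else {                            /* parent: wait for the child to finish */
            int status;
            waitpid(pid, &status, 0);
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }

Keeping fork and exec as separate calls is what lets a shell, for example, rearrange file descriptors in the child between the two steps.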

Process Identity

• Process ID (PID): the unique identifier for the process; used to specify processes to the operating system when an application makes a system call to signal, modify, or wait for another process
• Credentials: each process must have an associated user ID and one or more group IDs that determine the process's rights to access system resources and files
• Personality: not traditionally found on UNIX systems, but under Linux each process has an associated personality identifier that can slightly modify the semantics of certain system calls
  - Used primarily by emulation libraries to request that system calls be compatible with certain specific flavors of UNIX
• Namespace: a specific view of the file-system hierarchy
  - Most processes share a common namespace and operate on a shared file-system hierarchy
  - But each can have a unique file-system hierarchy with its own root directory and set of mounted file systems

Process Environment

• The process's environment is inherited from its parent, and is composed of two null-terminated vectors:
  - The argument vector lists the command-line arguments used to invoke the running program; it conventionally starts with the name of the program itself
  - The environment vector is a list of "NAME=VALUE" pairs that associates named environment variables with arbitrary textual values
• Passing environment variables among processes and inheriting variables by a process's children are flexible means of passing information to components of the user-mode system software
• The environment-variable mechanism provides a customization of the operating system that can be set on a per-process basis, rather than being configured for the system as a whole
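
A rough user-level sketch of the two vectors (environ is the POSIX way to reach the environment vector, and the MY_OPTION variable is purely illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    extern char **environ;                  /* the environment vector */

    int main(int argc, char *argv[])
    {
        /* argument vector: argv[0] is conventionally the program name */
        for (int i = 0; i < argc; i++)
            printf("argv[%d] = %s\n", i, argv[i]);

        /* environment vector: null-terminated list of "NAME=VALUE" strings */
        for (char **ep = environ; *ep != NULL; ep++)
            printf("%s\n", *ep);

        /* per-process customization: children created later inherit this value */
        setenv("MY_OPTION", "fast", 1);
        printf("MY_OPTION = %s\n", getenv("MY_OPTION"));
        return 0;
    }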

Process Context

• The (constantly changing) state of a running program at any point in time
• The scheduling context is the most important part of the process context; it is the information that the scheduler needs to suspend and restart the process
• The kernel maintains accounting information about the resources currently being consumed by each process, and the total resources consumed by the process in its lifetime so far
• The file table is an array of pointers to kernel file structures
  - When making file I/O system calls, processes refer to files by their index into this table, the file descriptor (fd)

Process Context (Cont.)

• Whereas the file table lists the existing open files, the file-system context applies to requests to open new files
  - The current root and default directories to be used for new file searches are stored here
• The signal-handler table defines the routine in the process's address space to be called when specific signals arrive
• The virtual-memory context of a process describes the full contents of its private address space

Processes and Threads

• Linux uses the same internal representation for processes and threads; a thread is simply a new process that happens to share the same address space as its parent
• Both are called tasks by Linux
• A distinction is only made when a new thread is created by the clone() system call
  - fork() creates a new task with its own entirely new task context
  - clone() creates a new task with its own identity, but that is allowed to share the data structures of its parent
• Using clone() gives an application fine-grained control over exactly what is shared between two threads
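
A minimal, Linux-specific sketch of the contrast (the flag set shown is one common thread-like combination, and the 1 MB stack size is arbitrary):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define STACK_SIZE (1024 * 1024)

    static int counter = 0;                 /* visible to the new task via CLONE_VM */

    static int worker(void *arg)
    {
        counter++;                          /* runs in the parent's address space */
        return 0;
    }

    int main(void)
    {
        char *stack = malloc(STACK_SIZE);
        if (!stack) { perror("malloc"); exit(EXIT_FAILURE); }

        /* thread-like task: share memory, file-system info, open files, signal handlers */
        int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
        pid_t tid = clone(worker, stack + STACK_SIZE, flags, NULL);  /* stack grows down */
        if (tid == -1) { perror("clone"); exit(EXIT_FAILURE); }

        waitpid(tid, NULL, 0);              /* a plain fork() would share none of the above */
        printf("counter = %d\n", counter);  /* 1: the increment is visible to the parent */
        free(stack);
        return 0;
    }

In practice, pthread_create() wraps this kind of clone() call, while fork() roughly corresponds to clone() with only SIGCHLD set.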

Kernel Synchronization

• A request for kernel-mode execution can occur in two ways:
  - A running program may request an operating system service, either explicitly via a system call, or implicitly, for example, when a page fault occurs
  - A device driver may deliver a hardware interrupt that causes the CPU to start executing a kernel-defined handler for that interrupt
• Kernel synchronization requires a framework that will allow the kernel's critical sections to run without interruption by another critical section

Kernel Synchronization (Cont.)

• Linux uses two techniques to protect critical sections:
  1. Normal kernel code is nonpreemptible (until 2.6): when a timer interrupt is received while a process is executing a kernel system-service routine, the kernel's need_resched flag is set so that the scheduler will run once the system call has completed and control is about to be returned to user mode
  2. The second technique applies to critical sections that occur in interrupt service routines: by using the processor's interrupt-control hardware to disable interrupts during a critical section, the kernel guarantees that it can proceed without the risk of concurrent access to shared data structures
• The kernel provides spin locks, semaphores, and reader-writer versions of both
• Behavior is modified depending on whether the code runs on a single processor or a multiprocessor

Kernel Synchronization (Cont.)

• To avoid performance penalties, Linux's kernel uses a synchronization architecture that allows long critical sections to run without having interrupts disabled for the critical section's entire duration
• Interrupt service routines are separated into a top half and a bottom half
  - The top half is a normal interrupt service routine, and runs with recursive interrupts disabled
  - The bottom half is run, with all interrupts enabled, by a miniature scheduler that ensures that bottom halves never interrupt themselves
• This architecture is completed by a mechanism for disabling selected bottom halves while executing normal, foreground kernel code

Interrupt Protection Levels

• Each level may be interrupted by code running at a higher level, but will never be interrupted by code running at the same or a lower level
• User processes can always be preempted by another process when a time-sharing scheduling interrupt occurs

Symmetric Multiprocessing

• Linux 2.0 was the first Linux kernel to support SMP hardware; separate processes or threads can execute in parallel on separate processors
• Until version 2.2, to preserve the kernel's nonpreemptible synchronization requirements, SMP imposed the restriction, via a single kernel spinlock, that only one processor at a time may execute kernel-mode code
• Later releases implemented more scalability by splitting the single spinlock into multiple locks, each protecting a small subset of the kernel's data structures
• Version 3.0 adds even more fine-grained locking, processor affinity, and load balancing

Splitting of Memory in a Buddy Heap

Slab Allocator in Linux

Virtual Memory

• The VM system maintains the address space visible to each process: it creates pages of virtual memory on demand, and manages the loading of those pages from disk or their swapping back out to disk as required
• The VM manager maintains two separate views of a process's address space:
  - A logical view describing instructions concerning the layout of the address space; the address space consists of a set of non-overlapping regions, each representing a continuous, page-aligned subset of the address space
  - A physical view of each address space, which is stored in the hardware page tables for the process

Virtual Memory (Cont.)

• Virtual memory regions are characterized by:
  - The backing store, which describes from where the pages for a region come; regions are usually backed by a file or by nothing (demand-zero memory)
  - The region's reaction to writes (page sharing or copy-on-write)
• The kernel creates a new virtual address space:
  1. When a process runs a new program with the exec() system call
  2. Upon creation of a new process by the fork() system call
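
Both kinds of backing can be observed from user space with mmap(); a minimal sketch (the file /etc/passwd is just a convenient example of a file-backed region):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 4096;

        /* demand-zero region: backed by nothing, zero-filled on first touch */
        char *anon = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        /* file-backed, private region: pages are demand-loaded; writes would be copy-on-write */
        int fd = open("/etc/passwd", O_RDONLY);
        char *file = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);

        if (anon == MAP_FAILED || file == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(anon, "hello");              /* first write faults in a zeroed page */
        printf("file region starts with: %.20s\n", file);

        munmap(anon, len);
        munmap(file, len);
        close(fd);
        return 0;
    }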

Virtual Memory (Cont.)

• On executing a new program, the process is given a new, completely empty virtual address space; the program-loading routines populate the address space with virtual-memory regions
• Creating a new process with fork() involves creating a complete copy of the existing process's virtual address space
  - The kernel copies the parent process's VMA descriptors, then creates a new set of page tables for the child
  - The parent's page tables are copied directly into the child's, with the reference count of each page covered being incremented
  - After the fork, the parent and child share the same physical pages of memory in their address spaces

Swapping and Paging

• The VM paging system relocates pages of memory from physical memory out to disk when the memory is needed for something else
• The VM paging system can be divided into two sections:
  - The pageout-policy algorithm decides which pages to write out to disk, and when
  - The paging mechanism actually carries out the transfer, and pages data back into physical memory as needed
• Pages can be written out to either a swap device or normal files
• A bitmap used to track used blocks in the swap space is kept in physical memory
• The allocator uses a next-fit algorithm to try to write out contiguous runs

Kernel Virtual Memory

• The Linux kernel reserves a constant, architecture-dependent region of the virtual address space of every process for its own internal use
• This kernel virtual-memory area contains two regions:
  - A static area that contains page-table references to every available physical page of memory in the system, so that there is a simple translation from physical to virtual addresses when running kernel code
  - The remainder of the reserved section is not reserved for any specific purpose; its page-table entries can be modified to point to any other areas of memory

Executing and Loading User Programs

• Linux maintains a table of functions for loading programs; it gives each function the opportunity to try loading the given file when an exec system call is made
• The registration of multiple loader routines allows Linux to support both the ELF and a.out binary formats
• Initially, binary-file pages are mapped into virtual memory
  - Only when a program tries to access a given page will a page fault result in that page being loaded into physical memory
• An ELF-format binary file consists of a header followed by several page-aligned sections
  - The ELF loader works by reading the header and mapping the sections of the file into separate regions of virtual memory
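
A user-space sketch of the loader's first step, reading just the ELF header (assumes a 64-bit ELF binary; /bin/ls is an arbitrary example path):

    #include <elf.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        Elf64_Ehdr eh;
        int fd = open("/bin/ls", O_RDONLY);
        if (fd < 0 || read(fd, &eh, sizeof eh) != sizeof eh) {
            perror("read ELF header");
            return 1;
        }
        if (memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
            fprintf(stderr, "not an ELF file\n");
            return 1;
        }

        /* the header locates the program-header table, which describes the
           page-aligned segments that the loader maps into separate VM regions */
        printf("entry point:            0x%lx\n", (unsigned long)eh.e_entry);
        printf("program header offset:  %lu\n", (unsigned long)eh.e_phoff);
        printf("program header count:   %u\n", (unsigned)eh.e_phnum);

        close(fd);
        return 0;
    }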

Memory Layout for ELF Programs

Ext2fs Block-Allocation Policies

Journaling

• ext3 implements journaling, with file-system updates first written to a log file in the form of transactions
• Once in the log file, they are considered committed
• Over time, the log-file transactions are replayed over the file system to put the changes in place
• On a system crash, some transactions might be in the journal but not yet placed into the file system
  - These must be completed once the system recovers
  - No other consistency checking is needed after a crash (much faster than older methods)
• Journaling improves write performance on hard disks by turning random I/O into sequential I/O

The Linux Proc File System

• The proc file system does not store data; rather, its contents are computed on demand according to user file I/O requests
• proc must implement a directory structure and the file contents within it; it must then define a unique and persistent inode number for each of the directories and files it contains
• It uses this inode number to identify just what operation is required when a user tries to read from a particular file inode or perform a lookup in a particular directory inode
• When data is read from one of these files, proc collects the appropriate information, formats it into text form, and places it into the requesting process's read buffer
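
From user space this on-demand behaviour looks like ordinary file I/O; a minimal sketch (reading /proc/self/status is just one example of a generated file):

    #include <stdio.h>

    int main(void)
    {
        /* nothing is stored on disk: the kernel formats this text when it is read */
        FILE *f = fopen("/proc/self/status", "r");
        if (!f) { perror("fopen"); return 1; }

        char line[256];
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);            /* PID, memory usage, signal masks, ... */

        fclose(f);
        return 0;
    }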

Input and Output

• The Linux device-oriented file system accesses disk storage through two caches:
  - Data is cached in the page cache, which is unified with the virtual memory system
  - Metadata is cached in the buffer cache, a separate cache indexed by the physical disk block
• Linux splits all devices into three classes:
  - Block devices allow random access to completely independent, fixed-size blocks of data
  - Character devices include most other devices; they don't need to support the functionality of regular files
  - Network devices are interfaced via the kernel's networking subsystem

Block Devices

• Provide the main interface to all disk devices in a system
• The block buffer cache serves two main purposes:
  - It acts as a pool of buffers for active I/O
  - It serves as a cache for completed I/O
• The request manager manages the reading and writing of buffer contents to and from a block-device driver
• Kernel 2.6 introduced Completely Fair Queueing (CFQ)
  - Now the default scheduler
  - Fundamentally different from elevator algorithms
  - Maintains a set of lists, one for each process by default
  - Uses the C-SCAN algorithm, with round robin between all outstanding I/O from all processes
  - Four blocks from each process are put on at once

Device-Driver Block Structure

Interprocess Communication

• Like UNIX, Linux informs processes that an event has occurred via signals
• There is a limited number of signals, and they cannot carry information: only the fact that a signal occurred is available to a process
• The Linux kernel does not use signals to communicate with processes that are running in kernel mode; rather, communication within the kernel is accomplished via scheduling states and wait_queue structures
• Linux also implements System V UNIX semaphores
  - A process can wait for a signal or a semaphore
  - Semaphores scale better
  - Operations on multiple semaphores can be atomic
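
A minimal user-level sketch of the signal model; note that the handler learns only that SIGUSR1 occurred, not any accompanying data:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    static void handler(int signo)
    {
        got_signal = 1;                     /* the only information: "the signal occurred" */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = handler;
        sigaction(SIGUSR1, &sa, NULL);

        printf("send me SIGUSR1, e.g.: kill -USR1 %d\n", (int)getpid());
        pause();                            /* sleep until a signal arrives */

        if (got_signal)
            printf("signal received, but it carried no data\n");
        return 0;
    }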

Passing Data Between Processes

• The pipe mechanism allows a child process to inherit a communication channel to its parent; data written to one end of the pipe can be read at the other
• Shared memory offers an extremely fast way of communicating; any data written by one process to a shared memory region can be read immediately by any other process that has mapped that region into its address space
• To obtain synchronization, however, shared memory must be used in conjunction with another interprocess-communication mechanism
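
A minimal sketch of the pipe mechanism, with the child inheriting the descriptors created by its parent (the message text is arbitrary):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];                         /* fds[0] = read end, fds[1] = write end */
        if (pipe(fds) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {                  /* child inherits both ends */
            close(fds[1]);
            char buf[64];
            ssize_t n = read(fds[0], buf, sizeof buf - 1);
            buf[n > 0 ? n : 0] = '\0';
            printf("child read: %s\n", buf);
            _exit(0);
        }

        close(fds[0]);                      /* parent writes into the pipe */
        const char *msg = "hello from the parent";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        wait(NULL);
        return 0;
    }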

Security

• The pluggable authentication modules (PAM) system is available under Linux
• PAM is based on a shared library that can be used by any system component that needs to authenticate users
• Access control under UNIX systems, including Linux, is performed through the use of unique numeric identifiers (uid and gid)
• Access control is performed by assigning objects a protection mask, which specifies which access modes (read, write, or execute) are to be granted to processes with owner, group, or world access

Security (Cont.)

• Linux augments the standard UNIX setuid mechanism in two ways:
  - It implements the POSIX specification's saved user-id mechanism, which allows a process to repeatedly drop and reacquire its effective uid
  - It has added a process characteristic that grants just a subset of the rights of the effective uid
• Linux provides another mechanism that allows a client to selectively pass access to a single file to some server process without granting it any other privileges
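
A rough sketch of the saved-user-ID idea from the list above (this assumes a set-uid program started by an unprivileged user; error handling is omitted):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uid_t real = getuid();              /* the invoking user */
        uid_t priv = geteuid();             /* the set-uid owner, e.g. root */

        seteuid(real);                      /* drop privileges: effective uid = real uid */
        printf("running unprivileged, euid = %d\n", (int)geteuid());

        /* ... do work that should not run with extra rights ... */

        seteuid(priv);                      /* reacquire: permitted because the saved
                                               user ID still holds the privileged uid */
        printf("privileges restored, euid = %d\n", (int)geteuid());
        return 0;
    }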

System Components — Kernel

• Foundation for the executive and the subsystems
• Never paged out of memory; execution is never preempted
• Four main responsibilities:
  - thread scheduling
  - interrupt and exception handling
  - low-level processor synchronization
  - recovery after a power failure
• The kernel is object-oriented and uses two sets of objects:
  - dispatcher objects control dispatching and synchronization (events, mutants, mutexes, semaphores, threads, and timers)
  - control objects (asynchronous procedure calls, interrupts, power notify, power status, process, and profile objects)

Kernel — Process and Threads

• The process has a virtual memory address space, information (such as a base priority), and an affinity for one or more processors
• Threads are the unit of execution scheduled by the kernel's dispatcher
• Each thread has its own state, including a priority, processor affinity, and accounting information
• A thread can be in one of six states: ready, standby, running, waiting, transition, and terminated

Kernel — Scheduling

• The dispatcher uses a 32-level priority scheme to determine the order of thread execution
• Priorities are divided into two classes:
  - The real-time class contains threads with priorities ranging from 16 to 31
  - The variable class contains threads having priorities from 0 to 15
• Characteristics of Windows 7's priority strategy:
  - Tends to give very good response times to interactive threads that are using the mouse and windows
  - Enables I/O-bound threads to keep the I/O devices busy
  - Compute-bound threads soak up the spare CPU cycles in the background

Kernel — Scheduling (Cont.)

• Scheduling can occur when a thread enters the ready or wait state, when a thread terminates, or when an application changes a thread's priority or processor affinity
• Real-time threads are given preferential access to the CPU, but Windows 7 does not guarantee that a real-time thread will start to execute within any particular time limit
  - This is known as soft real-time

Windows 7 Interrupt Request Levels

Kernel — Trap Handling

• The kernel provides trap handling when exceptions and interrupts are generated by hardware or software
• Exceptions that cannot be handled by the trap handler are handled by the kernel's exception dispatcher
• The interrupt dispatcher in the kernel handles interrupts by calling either an interrupt service routine (such as in a device driver) or an internal kernel routine
• The kernel uses spin locks that reside in global memory to achieve multiprocessor mutual exclusion

Executive — Object Manager

• Windows 7 uses objects for all its services and entities; the object manager supervises the use of all the objects
  - Generates an object handle
  - Checks security
  - Keeps track of which processes are using each object
• Objects are manipulated by a standard set of methods, namely create, open, close, delete, query name, parse, and security

Executive — Naming Objects

• The Windows 7 executive allows almost any object to be given a name, which may be either permanent or temporary; exceptions are processes, threads, and some other object types
• Object names are structured like file path names in MS-DOS and UNIX
• Windows 7 implements a symbolic link object, which is similar to symbolic links in UNIX that allow multiple nicknames or aliases to refer to the same file
• A process gets an object handle by creating an object, by opening an existing one, by receiving a duplicated handle from another process, or by inheriting a handle from a parent process
• Each object is protected by an access control list

Executive — Virtual Memory Manager

• The design of the VM manager assumes that the underlying hardware supports virtual-to-physical mapping, a paging mechanism, transparent cache coherence on multiprocessor systems, and virtual address aliasing
• The VM manager in Windows 7 uses a page-based management scheme with a page size of 4 KB
• The Windows 7 VM manager uses a two-step process to allocate memory:
  - The first step reserves a portion of the process's address space
  - The second step commits the allocation by assigning space in the system's paging file(s)

Virtual-Memory Layout

Virtual Memory Manager (Cont.)

• The virtual address translation in Windows 7 uses several data structures:
  - Each process has a page directory that contains 1024 page-directory entries of size 4 bytes
  - Each page-directory entry points to a page table which contains 1024 page-table entries (PTEs) of size 4 bytes
  - Each PTE points to a 4 KB page frame in physical memory
• A 10-bit integer can represent all the values from 0 to 1023; therefore, it can select any entry in the page directory or in a page table
  - This property is used when translating a virtual address pointer to a byte address in physical memory
• A page can be in one of six states: valid, zeroed, free, standby, modified, and bad

Virtual-to-Physical Address Translation

• 10 bits for the page-directory index, 10 bits for the page-table index, and 12 bits for the byte offset in the page (32 bits in total)
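
A small sketch of that 10/10/12 split for a 32-bit virtual address (the variable names are illustrative, not Windows API names):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t va = 0x00403F2Au;          /* an arbitrary 32-bit virtual address */

        uint32_t pdi    = (va >> 22) & 0x3FF;   /* top 10 bits: page-directory index */
        uint32_t pti    = (va >> 12) & 0x3FF;   /* next 10 bits: page-table index    */
        uint32_t offset =  va        & 0xFFF;   /* low 12 bits: byte offset in page  */

        printf("PD index %u, PT index %u, offset 0x%03X\n",
               (unsigned)pdi, (unsigned)pti, (unsigned)offset);
        return 0;
    }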

Page File Page-Table Entry

• 5 bits for page protection, 20 bits for the page-frame address, 4 bits to select a paging file, and 3 bits that describe the page state; the valid bit is V = 0 for a page that resides in a paging file

Executive — Process Manager

• Provides services for creating, deleting, and using threads and processes
• Issues such as parent/child relationships or process hierarchies are left to the particular environmental subsystem that owns the process

Executive — Local Procedure Call Facility

• The LPC passes requests and results between client and server processes within a single machine
• In particular, it is used to request services from the various Windows 7 subsystems
• When an LPC channel is created, one of three types of message-passing techniques must be specified:
  - The first type is suitable for small messages, up to 256 bytes; the port's message queue is used as intermediate storage, and the messages are copied from one process to the other
  - The second type avoids copying large messages by pointing to a shared memory section object created for the channel
  - The third method, called quick LPC, was used by graphical display portions of the Win32 subsystem

Executive — I/O Manager

• The I/O manager is responsible for:
  - file systems
  - cache management
  - device drivers
  - network drivers
• Keeps track of which installable file systems are loaded, and manages buffers for I/O requests
• Works with the VM manager to provide memory-mapped file I/O
• Controls the Windows 7 cache manager, which handles caching for the entire I/O system
• Supports both synchronous and asynchronous operations, provides timeouts for drivers, and has mechanisms for one driver to call another

File I/O

Executive — Security Reference Monitor

• The object-oriented nature of Windows 7 enables the use of a uniform mechanism to perform runtime access validation and audit checks for every entity in the system
• Whenever a process opens a handle to an object, the security reference monitor checks the process's security token and the object's access control list to see whether the process has the necessary rights

Executive — Plug-and-Play Manager

• The Plug-and-Play (PnP) manager is used to recognize and adapt to changes in the hardware configuration
• When new devices are added (for example, PCI or USB), the PnP manager loads the appropriate driver
• The manager also keeps track of the resources used by each device

Environmental Subsystems

• User-mode processes layered over the native Windows 7 executive services to enable Windows 7 to run programs developed for other operating systems
• Windows 7 uses the Win32 subsystem as the main operating environment; Win32 is used to start all processes
  - It also provides all the keyboard, mouse, and graphical display capabilities
• The MS-DOS environment is provided by a Win32 application called the virtual DOS machine (VDM), a user-mode process that is paged and dispatched like any other Windows 7 thread

Environmental Subsystems (Cont.)

• 16-bit Windows environment:
  - Provided by a VDM that incorporates Windows on Windows
  - Provides the Windows 3.1 kernel routines and stub routines for window-manager and GDI functions
• The POSIX subsystem is designed to run POSIX applications following the POSIX.1 standard, which is based on the UNIX model

Environmental Subsystems (Cont.)

• The OS/2 subsystem runs OS/2 applications
• The logon and security subsystems authenticate users logging on to Windows 7 systems
  - Users are required to have account names and passwords
  - The authentication package authenticates users whenever they attempt to access an object in the system
  - Windows 7 uses Kerberos as the default authentication package

File System

• The fundamental structure of the Windows 7 file system (NTFS) is a volume
  - Created by the Windows 7 disk administrator utility
  - Based on a logical disk partition
  - May occupy a portion of a disk or an entire disk, or span across several disks
• All metadata, such as information about the volume, is stored in a regular file
• NTFS uses clusters as the underlying unit of disk allocation
  - A cluster is a number of disk sectors that is a power of two
  - Because the cluster size is smaller than for the 16-bit FAT file system, the amount of internal fragmentation is reduced

File System — Internal Layout

• NTFS uses logical cluster numbers (LCNs) as disk addresses
• A file in NTFS is not a simple byte stream, as in MS-DOS or UNIX; rather, it is a structured object consisting of attributes
• Every file in NTFS is described by one or more records in an array stored in a special file called the Master File Table (MFT)
• Each file on an NTFS volume has a unique ID called a file reference
  - A 64-bit quantity that consists of a 48-bit file number and a 16-bit sequence number
  - Can be used to perform internal consistency checks
• The NTFS name space is organized as a hierarchy of directories; the index root contains the top level of the B+ tree

File System — Recovery

• All file-system data-structure updates are performed inside transactions that are logged
  - Before a data structure is altered, the transaction writes a log record that contains redo and undo information
  - After the data structure has been changed, a commit record is written to the log to signify that the transaction succeeded
  - After a crash, the file-system data structures can be restored to a consistent state by processing the log records

File System — Recovery (Cont.)

• This scheme does not guarantee that all the user file data can be recovered after a crash, just that the file-system data structures (the metadata files) are undamaged and reflect some consistent state prior to the crash
• The log is stored in the third metadata file at the beginning of the volume
• The logging functionality is provided by the Windows 7 log file service

File System — Security

• Security of an NTFS volume is derived from the Windows 7 object model
• Each file object has a security descriptor attribute stored in its MFT record
• This attribute contains the access token of the owner of the file, and an access control list that states the access privileges that are granted to each user that has access to the file

Volume Management and Fault Tolerance

• FtDisk, the fault-tolerant disk driver for Windows 7, provides several ways to combine multiple SCSI disk drives into one logical volume:
  - Logically concatenate multiple disks to form a large logical volume, a volume set
  - Interleave multiple physical partitions in round-robin fashion to form a stripe set (also called RAID level 0, or "disk striping")
  - Variation: stripe set with parity, or RAID level 5
  - Disk mirroring, or RAID level 1, is a robust scheme that uses a mirror set: two equally sized partitions on two disks with identical data contents
• To deal with disk sectors that go bad, FtDisk uses a hardware technique called sector sparing, and NTFS uses a software technique called cluster remapping

Volume Set On Two Drives

Stripe Set on Two Drives

Stripe Set With Parity on Three Drives

Mirror Set on Two Drives

File System — Compression

• To compress a file, NTFS divides the file's data into compression units, which are blocks of 16 contiguous clusters
• For sparse files, NTFS uses another technique to save space
  - Clusters that contain all zeros are not actually allocated or stored on disk
  - Instead, gaps are left in the sequence of virtual cluster numbers stored in the MFT entry for the file
  - When reading a file, if a gap in the virtual cluster numbers is found, NTFS just zero-fills that portion of the caller's buffer

File System — Reparse Points

• A reparse point returns an error code when accessed; the reparse data tells the I/O manager what to do next
• Reparse points can be used to provide the functionality of UNIX mounts
• Reparse points can also be used to access files that have been moved to offline storage

Networking

• Windows 7 supports both peer-to-peer and client/server networking; it also has facilities for network management
• To describe networking in Windows 7, we refer to two of the internal networking interfaces:
  - NDIS (Network Device Interface Specification): separates network adapters from the transport protocols so that either can be changed without affecting the other
  - TDI (Transport Driver Interface): enables any session-layer component to use any available transport mechanism
• Windows 7 implements transport protocols as drivers that can be loaded and unloaded from the system dynamically

Networking — Protocols

• The server message block (SMB) protocol is used to send I/O requests over the network; it has four message types:
  - Session control
  - File
  - Printer
  - Message
• The Network Basic Input/Output System (NetBIOS) is a hardware-abstraction interface for networks, used to:
  - Establish logical names on the network
  - Establish logical connections, or sessions, between two logical names on the network
  - Support reliable data transfer for a session via NetBIOS requests or SMBs

Networking — Protocols (Cont.)

• Windows 7 uses the TCP/IP Internet protocol, version 4 and version 6, to connect to a wide variety of operating systems and hardware platforms
• PPTP (Point-to-Point Tunneling Protocol) is used to communicate between Remote Access Server modules running on Windows 7 machines that are connected over the Internet

Networking — Protocols (Cont.)

• The Data Link Control protocol (DLC) is used to access IBM mainframes and HP printers that are directly connected to the network (possible only on 32-bit versions using unsigned drivers)

Networking — Distributed Processing Mechanisms

• Windows 7 supports distributed applications via NetBIOS, named pipes and mailslots, Windows Sockets, Remote Procedure Calls (RPC), and Network Dynamic Data Exchange (NetDDE)
• NetBIOS applications can communicate over the network using TCP/IP
• Named pipes are a connection-oriented messaging mechanism, named via the uniform naming convention (UNC)
• Mailslots are a connectionless messaging mechanism used for broadcast applications, such as finding components on the network
• Winsock, the Windows Sockets API, is a session-layer interface that provides a standardized interface to many transport protocols that may have different addressing schemes

Distributed Processing Mechanisms (Cont.)

• The Windows 7 RPC mechanism follows the widely used Distributed Computing Environment standard for RPC messages, so programs written to use Windows 7 RPCs are very portable
• RPC messages are sent using NetBIOS, or Winsock on TCP/IP networks, or named pipes on LAN Manager networks
• Windows 7 provides the Microsoft Interface Definition Language to describe the remote procedure names, arguments, and results

Networking — Redirectors and Servers

• In Windows 7, an application can use the Windows 7 I/O API to access files from a remote computer as if they were local, provided that the remote computer is running an MS-NET server
• A redirector is the client-side object that forwards I/O requests to remote files, where they are satisfied by a server
• For performance and security, the redirectors and servers run in kernel mode

Access to a Remote File

• The application calls the I/O manager to request that a file be opened (we assume that the file name is in the standard UNC format)
• The I/O manager builds an I/O request packet
• The I/O manager recognizes that the access is for a remote file, and calls a driver called a Multiple Universal Naming Convention Provider (MUP)
• The MUP sends the I/O request packet asynchronously to all registered redirectors
• A redirector that can satisfy the request responds to the MUP
  - To avoid asking all the redirectors the same question in the future, the MUP uses a cache to remember which redirector can handle this file

Access to a Remote File (Cont.)

• The redirector sends the network request to the remote system
• The remote system's network drivers receive the request and pass it to the server driver
• The server driver hands the request to the proper local file-system driver
• The proper device driver is called to access the data
• The results are returned to the server driver, which sends the data back to the requesting redirector

Networking — Domains

• NT uses the concept of a domain to manage global access rights within groups
• A domain is a group of machines running NT Server that share a common security policy and user database
• Windows 7 provides three models for setting up trust relationships:
  - One-way: A trusts B
  - Two-way, transitive: A trusts B and B trusts C, so A, B, and C trust each other
  - Crosslink: allows authentication to bypass the hierarchy to cut down on authentication traffic

Name Resolution in TCP/IP Networks

• On an IP network, name resolution is the process of converting a computer name to an IP address (e.g., www.bell-labs.com resolves to 135.104.1.14)
• Windows 7 provides several methods of name resolution:
  - Windows Internet Name Service (WINS)
  - broadcast name resolution
  - domain name system (DNS)
  - a hosts file
  - an LMHOSTS file

Name Resolution (Cont.)

• WINS consists of two or more WINS servers that maintain a dynamic database of name-to-IP-address bindings, and client software to query the servers
• WINS uses the Dynamic Host Configuration Protocol (DHCP), which automatically updates address configurations in the WINS database, without user or administrator intervention

Programmer Interface — Access to Kernel Objects

• A process gains access to a kernel object named XXX by calling the CreateXXX function to open a handle to XXX; the handle is unique to that process
• A handle can be closed by calling the CloseHandle function; the system may delete the object if the count of processes using the object drops to 0
• Windows 7 provides three ways to share objects between processes:
  - A child process inherits a handle to the object
  - One process gives the object a name when it is created and the second process opens that name
  - The DuplicateHandle function: given a handle to a process and the handle's value, a second process can get a handle to the same object, and thus share it
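
A minimal Win32 sketch of the second sharing method, giving an object a name so a cooperating process can open it (the name Local\DemoEvent is an arbitrary example):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* create a named kernel object and receive a handle unique to this process */
        HANDLE h = CreateEventA(NULL, TRUE, FALSE, "Local\\DemoEvent");
        if (h == NULL) {
            printf("CreateEvent failed: %lu\n", GetLastError());
            return 1;
        }

        /* another process could share it by opening the same name:
           HANDLE h2 = OpenEventA(EVENT_ALL_ACCESS, FALSE, "Local\\DemoEvent"); */

        SetEvent(h);                        /* signal the event */
        CloseHandle(h);                     /* the object is deleted once no process holds a handle */
        return 0;
    }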

Programmer Interface — Process Management

• A process is started via the CreateProcess routine, which loads any dynamic link libraries that are used by the process and creates a primary thread
• Additional threads can be created by the CreateThread function
• Every dynamic link library or executable file that is loaded into the address space of a process is identified by an instance handle
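
A minimal Win32 sketch of both calls (launching notepad.exe is an arbitrary example):

    #include <windows.h>
    #include <stdio.h>

    static DWORD WINAPI worker(LPVOID arg)
    {
        printf("hello from an additional thread\n");
        return 0;
    }

    int main(void)
    {
        /* CreateProcess loads the image and its DLLs and starts a primary thread */
        STARTUPINFOA si = { sizeof si };
        PROCESS_INFORMATION pi;
        char cmd[] = "notepad.exe";
        if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
            printf("CreateProcess failed: %lu\n", GetLastError());
            return 1;
        }

        /* additional threads within this process are created with CreateThread */
        HANDLE t = CreateThread(NULL, 0, worker, NULL, 0, NULL);
        WaitForSingleObject(t, INFINITE);

        CloseHandle(t);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return 0;
    }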

Process Management (Cont.)

• Scheduling in Win32 utilizes four priority classes:
  - IDLE_PRIORITY_CLASS (priority level 4)
  - NORMAL_PRIORITY_CLASS (level 8; typical for most processes)
  - HIGH_PRIORITY_CLASS (level 13)
  - REALTIME_PRIORITY_CLASS (level 24)
• To provide the performance levels needed for interactive programs, Windows 7 has a special scheduling rule for processes in the NORMAL_PRIORITY_CLASS
  - Windows 7 distinguishes between the foreground process that is currently selected on the screen and the background processes that are not currently selected
  - When a process moves into the foreground, Windows 7 increases the scheduling quantum by some factor, typically 3

Process Management (Cont.)

• The kernel dynamically adjusts the priority of a thread depending on whether it is I/O-bound or CPU-bound
• To synchronize the concurrent access to shared objects by threads, the kernel provides synchronization objects, such as semaphores and mutexes
  - In addition, threads can synchronize by using the WaitForSingleObject or WaitForMultipleObjects functions
  - Another method of synchronization in the Win32 API is the critical section
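
A small Win32 sketch combining a mutex object with WaitForSingleObject and WaitForMultipleObjects (the shared counter is purely illustrative):

    #include <windows.h>
    #include <stdio.h>

    static HANDLE mutex;
    static int counter = 0;

    static DWORD WINAPI worker(LPVOID arg)
    {
        for (int i = 0; i < 100000; i++) {
            WaitForSingleObject(mutex, INFINITE);   /* acquire the mutex object */
            counter++;                              /* protected shared state */
            ReleaseMutex(mutex);
        }
        return 0;
    }

    int main(void)
    {
        mutex = CreateMutexA(NULL, FALSE, NULL);    /* unnamed, not initially owned */

        HANDLE threads[2];
        threads[0] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
        threads[1] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
        WaitForMultipleObjects(2, threads, TRUE, INFINITE);

        printf("counter = %d\n", counter);          /* 200000 with the mutex held around each update */
        CloseHandle(threads[0]);
        CloseHandle(threads[1]);
        CloseHandle(mutex);
        return 0;
    }

A critical section (EnterCriticalSection/LeaveCriticalSection) would serve the same purpose here without a kernel object, at lower cost, but only within a single process.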

Process Management (Cont.)

• A fiber is user-mode code that gets scheduled according to a user-defined scheduling algorithm
  - Only one fiber at a time is permitted to execute, even on multiprocessor hardware
• Windows 7 includes fibers to facilitate the porting of legacy UNIX applications that are written for a fiber-execution model
• Windows 7 also introduced user-mode scheduling for 64-bit systems, which allows finer-grained control of scheduling work without requiring kernel transitions

Programmer Interface — Interprocess Communication

• Win32 applications can have interprocess communication by sharing kernel objects
• An alternate means of interprocess communication is message passing, which is particularly popular for Windows GUI applications
  - One thread sends a message to another thread or to a window
  - A thread can also send data with the message
• Every Win32 thread has its own input queue from which the thread receives messages
  - This is more reliable than the shared input queue of 16-bit Windows, because with separate queues, one stuck application cannot block input to the other applications

Programmer Interface — Memory Management

• Virtual memory:
  - VirtualAlloc reserves or commits virtual memory
  - VirtualFree decommits or releases the memory
  - These functions enable the application to determine the virtual address at which the memory is allocated
• An application can use memory by memory-mapping a file into its address space
  - Multistage process
  - Two processes share memory by mapping the same file into their virtual memory
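
A minimal sketch of the reserve-then-commit pattern described above (the 64 KB region size is arbitrary):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T size = 64 * 1024;

        /* step 1: reserve a range of the address space (no storage assigned yet) */
        char *p = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);

        /* step 2: commit it, which assigns space in the system's paging file(s) */
        p = VirtualAlloc(p, size, MEM_COMMIT, PAGE_READWRITE);
        if (p == NULL) {
            printf("VirtualAlloc failed: %lu\n", GetLastError());
            return 1;
        }

        p[0] = 'x';                         /* the committed pages are now usable */
        printf("committed region at %p\n", (void *)p);

        VirtualFree(p, size, MEM_DECOMMIT); /* give back the committed storage */
        VirtualFree(p, 0, MEM_RELEASE);     /* size must be 0 when releasing the reservation */
        return 0;
    }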

Memory Management (Cont.)

• A heap in the Win32 environment is a region of reserved address space
  - A Win32 process is created with a 1 MB default heap
  - Access is synchronized to protect the heap's space-allocation data structures from damage by concurrent updates by multiple threads
• Because functions that rely on global or static data typically fail to work properly in a multithreaded environment, the thread-local storage mechanism allocates global storage on a per-thread basis
  - The mechanism provides both dynamic and static methods of creating thread-local storage

REFERENCES

• Operating System Concepts, 9th Edition, by Silberschatz, Galvin, and Gagne