weeks [01 02] 20100921

IS 311 File Organization Dr. Yasser ALHABIBI Misr University for Science and Technology Faculty of Information Technology Fall Semester 2010/2011


Page 1: Weeks [01 02] 20100921

IS 311 File Organization

Dr. Yasser ALHABIBI

Misr University for Science and Technology

Faculty of Information Technology

Fall Semester

2010/2011

Page 2: Weeks [01 02] 20100921

List of References

Course notes
• Available (handed to students part by part).

Essential books (textbooks)

• Operating Systems: Internals and Design Principles, 6/E, by William Stallings, Publisher Prentice Hall Co.

• Fundamentals of Database Systems, By Elmasri R and Navathe S. B., Publisher Benjamin Cummings

• Database Management, By F R McFadden & J A Hoofer,

Publisher Benjamin Cummings

• An Introduction to Database Systems, By C. J. Date,

Publisher Addison-Wesley

Recommended books

• Other books related to Operating Systems and Database Management Systems

Page 3: Weeks [01 02] 20100921

Course Outline

• Basic file concepts: Storage characteristics, Magnetic tape, Hard magnetic disks, Blocking and extents

• Physical data transfer considerations: Buffered input/output, Memory, Mapped files, File systems

• Sequential chronological files: Description and Primitive operations, File functions, Algorithms

• Ordered files: Description and Primitive operations, Algorithms

• Direct Access files: Description and Primitive operations, Hashing functions, Algorithms for handling synonyms

• Indexed sequential files: Description and Primitive operations, Algorithms

• Indexes: Concepts, Linear indexes, Tree indexes, Description of heterogeneous and homogeneous trees, Tree indexing algorithms

Page 4: Weeks [01 02] 20100921

Course format/policies

Page 5: Weeks [01 02] 20100921

Course Format

• Lectures will be largely interactive. You will be called on randomly to answer questions.

• A fairly simple 20-minute quiz in class (Weeks 3, 5, 7, 11, 13, and 15), just before the break.

• The course's Windows Live Group will host lecture slides, quiz answers, homework, and general announcements.

Page 6: Weeks [01 02] 20100921

Grading

• Breakdown
– 40% Final
– 20% Midterm
– 20% Weekly in-class quizzes
– 20% Bi-weekly assignments (Lab Activity)

• You may share ideas for the weekly assignments, but each must be written up individually.

• The lowest quiz score will be dropped.

Page 7: Weeks [01 02] 20100921

Homework Submission

• Homework is due every other Saturday before midnight – explicit submission instructions later.
• Please do not ever ask for an extension.
• Late assignments incur a 10% per-day penalty up to 3 days. After 3 days, no credit.
• Solutions will be posted after all homework is submitted (or 3 days, whichever comes first).
• Under special circumstances, you may be excused from an assignment or quiz. You must talk to me ahead of time.

Page 8: Weeks [01 02] 20100921

Course Strategy

• Course will move quickly

• Each subject builds on previous ones
– More important to be consistent than occasionally heroic

• Start by hacking, back up to cover more fundamental topics

Page 9: Weeks [01 02] 20100921

Weeks 01 & 02

Reference: Operating Systems: Internals and Design Principles, 6/E, by William Stallings, Publisher Prentice Hall Co. [Chapter 11]

Page 10: Weeks [01 02] 20100921

Roadmap

– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O

Page 11: Weeks [01 02] 20100921

Categories of I/O Devices

• Difficult area of OS design
– Difficult to develop a consistent solution due to a wide variety of devices and applications

• Three categories:
– Human readable
– Machine readable
– Communications

Page 12: Weeks [01 02] 20100921

Human readable

• Devices used to communicate with the user
– Printers
– Terminals
• Video display
• Keyboard
• Mouse, etc.

Page 13: Weeks [01 02] 20100921

Machine readable

• Used to communicate with electronic equipment
– Disk drives
– USB keys
– Sensors
– Controllers
– Actuators

Page 14: Weeks [01 02] 20100921

Communication

• Used to communicate with remote devices
– Digital line drivers
– Modems

Page 15: Weeks [01 02] 20100921

Differences in I/O Devices

• Devices differ in a number of areas
– Data Rate
– Application
– Complexity of Control
– Unit of Transfer
– Data Representation
– Error Conditions

Page 16: Weeks [01 02] 20100921

Data Rate

• May be massive difference between the data transfer rates of devices

Page 17: Weeks [01 02] 20100921

Application

– Disk used to store files requires file management software

– Disk used to store virtual memory pages needs special hardware and software to support it

– Terminal used by system administrator may have a higher priority


Page 18: Weeks [01 02] 20100921

Complexity of control

• A printer requires a relatively simple control interface.

• A disk is much more complex.

• This complexity is filtered to some extent by the complexity of the I/O module that controls the device.

Page 19: Weeks [01 02] 20100921

Unit of transfer

• Data may be transferred as
– a stream of bytes or characters (e.g., terminal I/O)
– or in larger blocks (e.g., disk I/O)

Page 20: Weeks [01 02] 20100921

Data representation

• Different data encoding schemes are used by different devices
– including differences in character code and parity conventions

Page 21: Weeks [01 02] 20100921

Error Conditions

• The nature of errors differ widely from one device to another.

• Aspects include:
– the way in which they are reported
– their consequences
– the available range of responses

Page 22: Weeks [01 02] 20100921

Roadmap

– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O

Page 23: Weeks [01 02] 20100921

Organization of the I/O Function

Techniques for performing I/O:

• Programmed I/O

• Interrupt-driven I/O

• Direct memory access (DMA)


Page 24: Weeks [01 02] 20100921

Evolution of the I/O Function

1. Processor directly controls a peripheral device

2. Controller or I/O module is added
– Processor uses programmed I/O without interrupts
– Processor does not need to handle details of external devices


Page 25: Weeks [01 02] 20100921

Evolution of the I/O Function cont…

3. Controller or I/O module with interrupts
– Efficiency improves as the processor does not spend time waiting for an I/O operation to be performed

4. Direct Memory Access
– Blocks of data are moved into memory without involving the processor
– Processor is involved at the beginning and end only


Page 26: Weeks [01 02] 20100921

Evolution of the I/O Function cont…

5. I/O module is a separate processor
– CPU directs the I/O processor to execute an I/O program in main memory

6. I/O processor
– I/O module has its own local memory
– Commonly used to control communications with interactive terminals


Page 27: Weeks [01 02] 20100921

Direct Memory Access

• Processor delegates I/O operation to the DMA module

• DMA module transfers data directly to or from memory

• When complete, the DMA module sends an interrupt signal to the processor

Page 28: Weeks [01 02] 20100921

DMA Configurations: Single Bus

• DMA can be configured in several ways
• Shown here, all modules share the same system bus


Page 29: Weeks [01 02] 20100921

DMA Configurations: Integrated DMA & I/O

• Direct path between DMA and I/O modules
• This substantially cuts the required bus cycles


Page 30: Weeks [01 02] 20100921

DMA Configurations: I/O Bus

• Reduces the number of I/O interfaces in the DMA module


Page 31: Weeks [01 02] 20100921

Roadmap

– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O

Page 32: Weeks [01 02] 20100921

Goals: Efficiency

• Most I/O devices are extremely slow compared to main memory

• Use of multiprogramming allows for some processes to be waiting on I/O while another process executes

• I/O cannot keep up with processor speed
– Swapping is used to bring in ready processes
– But this is an I/O operation itself


Page 33: Weeks [01 02] 20100921

Generality

• For simplicity and freedom from error it is desirable to handle all I/O devices in a uniform manner

• Hide most of the details of device I/O in lower-level routines

• Difficult to completely generalize, but can use a hierarchical modular design of I/O functions


Page 34: Weeks [01 02] 20100921

Hierarchical design

• A hierarchical philosophy leads to organizing an OS into layers

• Each layer relies on the next lower layer to perform more primitive functions

• It provides services to the next higher layer.

• Changes in one layer should not require changes in other layers


Page 35: Weeks [01 02] 20100921

Figure 11.4 A Model of I/O Organization

Page 36: Weeks [01 02] 20100921

Local peripheral device

• Logical I/O:
– Deals with the device as a logical resource

• Device I/O:
– Converts requested operations into a sequence of I/O instructions

• Scheduling and Control:
– Performs actual queuing and control operations


Page 37: Weeks [01 02] 20100921

Communications Port

• Similar to the previous case, but the logical I/O module is replaced by a communications architecture
– This consists of a number of layers
– An example is TCP/IP

Page 38: Weeks [01 02] 20100921

File System

• Directory management
– Concerned with user operations affecting files

• File System
– Logical structure and operations

• Physical organisation
– Converts logical names to physical addresses


Page 39: Weeks [01 02] 20100921

Roadmap

– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O

Page 40: Weeks [01 02] 20100921

I/O Buffering

• Processes must wait for I/O to complete before proceeding
– To avoid deadlock, certain pages must remain in main memory during I/O

• It may be more efficient to perform input transfers in advance of requests being made and to perform output transfers some time after the request is made.


Page 41: Weeks [01 02] 20100921

Block-oriented Buffering

• Information is stored in fixed-size blocks

• Transfers are made a block at a time
– Data can be referenced by block number

• Used for disks and USB keys


Page 42: Weeks [01 02] 20100921

Stream-Oriented Buffering

• Transfer information as a stream of bytes

• Used for terminals, printers, communication ports, mouse and other pointing devices, and most other devices that are not secondary storage


Page 43: Weeks [01 02] 20100921

No Buffer

• Without a buffer, the OS accesses the device directly as and when it needs to

Page 44: Weeks [01 02] 20100921

Single Buffer

• Operating system assigns a buffer in main memory for an I/O request


Page 45: Weeks [01 02] 20100921

Block-Oriented Single Buffer

• Input transfers made to buffer

• Block moved to user space when needed

• The next block is moved into the buffer
– Read ahead, or anticipated input

• Often a reasonable assumption as data is usually accessed sequentially
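The read-ahead idea above can be sketched with a one-slot system buffer: the fetch of block i+1 is started before block i is handed to the consumer, so the anticipated input overlaps with processing. A minimal Python sketch (the `read_block` callback and all names are illustrative, not from the slides):

```python
from concurrent.futures import ThreadPoolExecutor

def sequential_reader(read_block, n_blocks):
    """Single-buffer read-ahead: while the consumer works on block i,
    the fetch of block i+1 is already in flight."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(read_block, 0)   # first transfer into the buffer
        for i in range(n_blocks):
            block = pending.result()           # wait for the buffered block
            if i + 1 < n_blocks:
                pending = pool.submit(read_block, i + 1)   # read ahead
            yield block                        # "move block to user space"

# Example with a fake device:
blocks = list(sequential_reader(lambda i: f"block{i}", 3))
```

The sketch only pays off when, as the slide says, access really is sequential; a random access pattern wastes the anticipated fetch.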


Page 46: Weeks [01 02] 20100921

Stream-Oriented Single Buffer

• Line-at-a-time or byte-at-a-time

• Terminals often deal with one line at a time with carriage return signaling the end of the line

• Byte-at-a-time suits devices where a single keystroke may be significant
– Also sensors and controllers


Page 47: Weeks [01 02] 20100921

Double Buffer

• Use two system buffers instead of one

• A process can transfer data to or from one buffer while the operating system empties or fills the other buffer

Page 48: Weeks [01 02] 20100921

Circular Buffer

• More than two buffers are used

• Each individual buffer is one unit in a circular buffer

• Used when I/O operation must keep up with process
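The ring arrangement can be sketched directly. This minimal class (illustrative, not from the slides) shows the wrap-around index arithmetic for a producer (the I/O system) filling units while a consumer (the process) empties them:

```python
class CircularBuffer:
    """A fixed ring of unit buffers: the producer (I/O) fills units
    while the consumer (the process) empties them."""

    def __init__(self, n_units):
        self.units = [None] * n_units
        self.n = n_units
        self.head = 0    # next unit the producer fills
        self.tail = 0    # next unit the consumer empties
        self.count = 0   # units currently full

    def put(self, block):
        if self.count == self.n:
            raise BufferError("all units full: producer must wait")
        self.units[self.head] = block
        self.head = (self.head + 1) % self.n   # wrap around the ring
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("all units empty: consumer must wait")
        block = self.units[self.tail]
        self.tail = (self.tail + 1) % self.n
        self.count -= 1
        return block
```

A real kernel would block the producer or consumer instead of raising; the exception here just marks where the waits would happen.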


Page 49: Weeks [01 02] 20100921

Buffer Limitations

• Buffering smooths out peaks in I/O demand
– But with enough demand, eventually all buffers become full and their advantage is lost

• However, when there is a variety of I/O and process activities to service, buffering can increase the efficiency of the OS and the performance of individual processes.


Page 50: Weeks [01 02] 20100921

Roadmap

– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O

Page 51: Weeks [01 02] 20100921

Disk Performance Parameters

• The actual details of disk I/O operation depend on many things
– A general timing diagram of a disk I/O transfer is shown here

Page 52: Weeks [01 02] 20100921

Positioning the Read/Write Heads

• When the disk drive is operating, the disk is rotating at constant speed.

• Track selection involves moving the head in a movable-head system or electronically selecting one head on a fixed-head system.


Page 53: Weeks [01 02] 20100921

Disk Performance Parameters

• Access Time is the sum of:
– Seek time: the time it takes to position the head at the desired track
– Rotational delay (rotational latency): the time it takes for the beginning of the sector to reach the head

• Transfer Time is the time taken to transfer the data.
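As a worked example of these components (the figures below are assumed for illustration, not taken from the slides): the average rotational delay is half a rotation, and the transfer time is the fraction of a track that passes under the head.

```python
def disk_access_time_ms(seek_ms, rpm, bytes_to_read, bytes_per_track):
    # One full rotation, in milliseconds.
    rotation_ms = 60_000.0 / rpm
    # Average rotational delay: half a rotation.
    rotational_delay = rotation_ms / 2
    # Transfer time: the fraction of a track that passes under the head.
    transfer = rotation_ms * (bytes_to_read / bytes_per_track)
    return seek_ms + rotational_delay + transfer

# Assumed figures: 4 ms average seek, 7200 rpm, one 512-byte sector
# read from a 500-sector track.
t = disk_access_time_ms(4.0, 7200, 512, 500 * 512)
```

With these figures the total is dominated by seek (4 ms) and rotational delay (about 4.17 ms), while the one-sector transfer itself takes only about 0.017 ms – which is why disk scheduling concentrates on reducing seeks.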


Page 54: Weeks [01 02] 20100921

Disk Scheduling Policies

• To compare the various schemes, consider a disk head initially located at track 100
– Assume a disk with 200 tracks and a disk request queue with random requests in it

• The requested tracks, in the order received by the disk scheduler, are – 55, 58, 39, 18, 90, 160, 150, 38, 184.

Page 55: Weeks [01 02] 20100921

First-in, first-out (FIFO)

• Processes requests sequentially

• Fair to all processes

• Approaches random scheduling in performance if there are many processes


Page 56: Weeks [01 02] 20100921

Priority

• Goal is not to optimize disk use but to meet other objectives

• Short batch jobs may have higher priority

• Provide good interactive response time

• Longer jobs may have to wait an excessively long time

• A poor policy for database systems


Page 57: Weeks [01 02] 20100921

Last-in, first-out

• Good for transaction processing systems
– The device is given to the most recent user, so there should be little arm movement

• Possibility of starvation since a job may never regain the head of the line


Page 58: Weeks [01 02] 20100921

Shortest Service Time First

• Select the disk I/O request that requires the least movement of the disk arm from its current position

• Always choose the minimum seek time


Page 59: Weeks [01 02] 20100921

SCAN

• The arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction; then the direction is reversed


Page 60: Weeks [01 02] 20100921

C-SCAN

• Restricts scanning to one direction only

• When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again


Page 61: Weeks [01 02] 20100921

N-step-SCAN

• Segments the disk request queue into subqueues of length N

• Subqueues are processed one at a time, using SCAN

• New requests are added to the other queue while a queue is being processed


Page 62: Weeks [01 02] 20100921

FSCAN

• Two subqueues

• When a scan begins, all of the requests are in one of the queues, with the other empty.

• All new requests are put into the other queue
• Service of new requests is deferred until all of the old requests have been processed


Page 63: Weeks [01 02] 20100921

Performance Compared

Comparison of Disk Scheduling Algorithms
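The comparison can be reproduced in a few lines. This sketch (illustrative Python, not from the slides) replays the request queue introduced earlier – head at track 100; requests 55, 58, 39, 18, 90, 160, 150, 38, 184 – under FIFO, SSTF, and SCAN:

```python
def total_seek(start, order):
    """Total head movement, in tracks, for a given service order."""
    dist, pos = 0, start
    for track in order:
        dist += abs(pos - track)
        pos = track
    return dist

def fifo(start, reqs):
    return list(reqs)                  # serve in arrival order

def sstf(start, reqs):
    pending, pos, order = list(reqs), start, []
    while pending:                     # always pick the nearest track
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def scan(start, reqs):
    # Sweep toward higher tracks first, then reverse.
    lower = sorted(t for t in reqs if t < start)
    upper = sorted(t for t in reqs if t >= start)
    return upper + lower[::-1]

reqs = [55, 58, 39, 18, 90, 160, 150, 38, 184]
totals = {p.__name__: total_seek(100, p(100, reqs)) for p in (fifo, sstf, scan)}
```

For this queue the totals come out to 498 tracks for FIFO, 248 for SSTF, and 250 for SCAN, which is why FIFO is said to approach random scheduling while the greedy and sweeping policies do far better.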

Page 64: Weeks [01 02] 20100921

Disk Scheduling Algorithms

Page 65: Weeks [01 02] 20100921

Roadmap

– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O

Page 66: Weeks [01 02] 20100921

Multiple Disks

• Disk I/O performance may be increased by spreading the operation over multiple read/write heads
– Or multiple disks

• Disk failures can be recovered if parity information is stored


Page 67: Weeks [01 02] 20100921

RAID

• Redundant Array of Independent Disks

• Set of physical disk drives viewed by the operating system as a single logical drive

• Data are distributed across the physical drives of an array

• Redundant disk capacity is used to store parity information which provides recoverability from disk failure
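The recoverability claim rests on XOR parity, the scheme behind the parity-based RAID levels: the parity strip is the bitwise XOR of the data strips, so any single lost strip equals the XOR of all the survivors. A minimal sketch (the data is made up for illustration):

```python
def parity(strips):
    """Bitwise XOR of equal-length strips."""
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, byte in enumerate(strip):
            out[i] ^= byte
    return bytes(out)

# Strips on three hypothetical data disks, plus the parity disk:
data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Disk 1 fails: its strip is the XOR of the surviving strips.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

The same identity works regardless of which single disk is lost, which is exactly the recoverability the slide refers to; surviving two failures (RAID 6) needs a second, independent calculation.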


Page 68: Weeks [01 02] 20100921

RAID 0 - Striped

• Not a true RAID – no redundancy

• Disk failure is catastrophic

• Very fast due to parallel read/write


Page 69: Weeks [01 02] 20100921

RAID 1 - Mirrored

• Redundancy through duplication instead of parity
• Read requests can be made in parallel
• Simple recovery from disk failure


Page 70: Weeks [01 02] 20100921

RAID 2 (Using Hamming code)

• Synchronized disk rotation

• Data striping is used (with extremely small strips)

• Hamming code used to correct single bit errors and detect double-bit errors


Page 71: Weeks [01 02] 20100921

RAID 3 bit-interleaved parity

• Similar to RAID 2, but uses a simple parity bit instead of an error-correcting code, with all parity stored on a single drive


Page 72: Weeks [01 02] 20100921

RAID 4 Block-level parity

• A bit-by-bit parity strip is calculated across corresponding strips on each data disk

• The parity bits are stored in the corresponding strip on the parity disk.


Page 73: Weeks [01 02] 20100921

RAID 5 Block-level Distributed parity

• Similar to RAID 4, but distributes the parity strips across all drives


Page 74: Weeks [01 02] 20100921

RAID 6 Dual Redundancy

• Two different parity calculations are carried out – stored in separate blocks on different disks.

• Can recover from two disks failing


Page 75: Weeks [01 02] 20100921

Roadmap

– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O

Page 76: Weeks [01 02] 20100921

Disk Cache

• Buffer in main memory for disk sectors

• Contains a copy of some of the sectors on the disk

• When an I/O request is made for a particular sector,
– a check is made to determine if the sector is in the disk cache

• A number of ways exist to populate the cache


Page 77: Weeks [01 02] 20100921

Least Recently Used (LRU)

• The block that has been in the cache the longest with no reference to it is replaced

• A stack of pointers references the cache
– The most recently referenced block is on top of the stack
– When a block is referenced or brought into the cache, it is placed on top of the stack
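The "stack of pointers" behaves like an ordered map: referenced blocks move to one end, and the other end always holds the eviction victim. A minimal sketch (the class and callback names are illustrative, not from the slides):

```python
from collections import OrderedDict

class LRUDiskCache:
    """Disk cache with least-recently-used replacement (illustrative)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # oldest (LRU) entry first

    def get(self, block_no, read_from_disk):
        if block_no in self.blocks:                # cache hit
            self.blocks.move_to_end(block_no)      # move to "top of the stack"
            return self.blocks[block_no]
        data = read_from_disk(block_no)            # cache miss: real disk I/O
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)        # evict least recently used
        self.blocks[block_no] = data
        return data
```

The `read_from_disk` callback stands in for an actual block read; counting its calls is an easy way to measure the miss rate of a reference string.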


Page 78: Weeks [01 02] 20100921

Least Frequently Used (LFU)

• The block that has experienced the fewest references is replaced

• A counter is associated with each block

• The counter is incremented each time the block is accessed

• When replacement is required, the block with the smallest count is selected.
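LFU differs from LRU only in its bookkeeping: a per-block reference counter replaces the recency ordering. A minimal sketch along the same lines (names again illustrative):

```python
class LFUDiskCache:
    """Disk cache with least-frequently-used replacement (illustrative)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}   # block_no -> data
        self.counts = {}   # block_no -> reference count

    def get(self, block_no, read_from_disk):
        if block_no in self.blocks:                # cache hit
            self.counts[block_no] += 1
            return self.blocks[block_no]
        if len(self.blocks) >= self.capacity:      # evict fewest references
            victim = min(self.counts, key=self.counts.get)
            del self.blocks[victim]
            del self.counts[victim]
        self.blocks[block_no] = read_from_disk(block_no)
        self.counts[block_no] = 1
        return self.blocks[block_no]
```

Note the classic LFU weakness this sketch inherits: a block referenced heavily once keeps a high count forever, which is what frequency-based replacement (next slide) tries to address.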


Page 79: Weeks [01 02] 20100921

Frequency-Based Replacement


Page 80: Weeks [01 02] 20100921

LRU Disk Cache Performance


Page 81: Weeks [01 02] 20100921

Roadmap

– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O

Page 82: Weeks [01 02] 20100921

Devices are Files

• Each I/O device is associated with a special file
– Managed by the file system
– Provides a clean, uniform interface to users and processes

• To access a device, read and write requests are made for the special file associated with the device.


Page 83: Weeks [01 02] 20100921

UNIX SVR4 I/O

• Each individual device is associated with a special file

• Two types of I/O
– Buffered
– Unbuffered


Page 84: Weeks [01 02] 20100921

Buffer Cache

• Three lists are maintained
– Free List
– Device List
– Driver I/O Queue


Page 85: Weeks [01 02] 20100921

Character Cache

• Used by character-oriented devices
– e.g., terminals and printers

• Either written by the I/O device and read by the process, or vice versa
– A producer/consumer model is used


Page 86: Weeks [01 02] 20100921

Unbuffered I/O

• Unbuffered I/O is simply DMA between device and process
– Fastest method
– Process is locked in main memory and can’t be swapped out
– Device is tied to the process and unavailable for other processes


Page 87: Weeks [01 02] 20100921

I/O for Device Types

Page 88: Weeks [01 02] 20100921

Roadmap

– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O

Page 89: Weeks [01 02] 20100921

Linux/Unix Similarities

• Linux and Unix (e.g., SVR4) are very similar in I/O terms
– The Linux kernel associates a special file with each I/O device driver
– Block, character, and network devices are recognized


Page 90: Weeks [01 02] 20100921

The Elevator Scheduler

• Maintains a single queue for disk read and write requests

• Keeps list of requests sorted by block number

• Drive moves in a single direction to satisfy each request


Page 91: Weeks [01 02] 20100921

Deadline scheduler

– Uses three queues
• A sorted (elevator) queue of incoming requests
• Read requests also go to the tail of a read FIFO queue
• Write requests also go to the tail of a write FIFO queue

– Each request has an expiration time


Page 92: Weeks [01 02] 20100921

Anticipatory I/O scheduler

• Elevator and deadline scheduling can be counterproductive if there are numerous synchronous read requests.

• Delay a short period of time after satisfying a read request to see if a new nearby request can be made


Page 93: Weeks [01 02] 20100921

Linux Page Cache

• In Linux 2.4 and later, there is a single unified page cache for all traffic between disk and main memory

• Benefits:
– Dirty pages can be collected and written out efficiently
– Pages in the page cache are likely to be referenced again due to temporal locality


Page 94: Weeks [01 02] 20100921

Roadmap

– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O

Page 95: Weeks [01 02] 20100921

Windows I/O Manager

• The I/O manager is responsible for all I/O for the operating system

• It provides a uniform interface that all types of drivers can call.


Page 96: Weeks [01 02] 20100921

Windows I/O

• The I/O manager works closely with:
– Cache manager – handles all file caching
– File system drivers – route I/O requests for file system volumes to the appropriate software driver for that volume

– Network drivers - implemented as software drivers rather than part of the Windows Executive.

– Hardware device drivers


Page 97: Weeks [01 02] 20100921

Asynchronous and Synchronous I/O

• Windows offers two modes of I/O operation: – asynchronous and synchronous.

• Asynchronous mode is used whenever possible to optimize application performance.


Page 98: Weeks [01 02] 20100921

Software RAID

• Windows implements RAID functionality as part of the operating system and can be used with any set of multiple disks.

• RAID 0, RAID 1, and RAID 5 are supported.

• In the case of RAID 1 (disk mirroring), the two disks containing the primary and mirrored partitions may be on the same disk controller or different disk controllers.

Page 99: Weeks [01 02] 20100921

Volume Shadow Copies

• Shadow copies are implemented by a software driver that makes copies of data on the volume before it is overwritten.

• Designed as an efficient way of making consistent snapshots of volumes so that they can be backed up
– Also useful for archiving files on a per-volume basis


Page 100: Weeks [01 02] 20100921

Quick Review

Page 101: Weeks [01 02] 20100921

SUMMARY [1]

The computer system’s interface to the outside world is its I/O architecture. This architecture is designed to provide a systematic means of controlling interaction with the outside world and to provide the operating system with the information it needs to manage I/O activity effectively.

The I/O function is generally broken up into a number of layers, with lower layers dealing with details that are closer to the physical functions to be performed and higher layers dealing with I/O in a logical and generic fashion. The result is that changes in hardware parameters need not affect most of the I/O software.

A key aspect of I/O is the use of buffers that are controlled by I/O utilities rather than by application processes. Buffering smoothes out the differences between the internal speeds of the computer system and the speeds of I/O devices. The use of buffers also decouples the actual I/O transfer from the address space of the application process. This allows the operating system more flexibility in performing its memory-management function.

Page 102: Weeks [01 02] 20100921

SUMMARY [2]

The aspect of I/O that has the greatest impact on overall system performance is disk I/O. Accordingly, there has been greater research and design effort in this area than in any other kind of I/O. Two of the most widely used approaches to improve disk I/O performance are disk scheduling and the disk cache.

At any time, there may be a queue of requests for I/O on the same disk. It is the object of disk scheduling to satisfy these requests in a way that minimizes the mechanical seek time of the disk and hence improves performance. The physical layout of pending requests plus considerations of locality come into play.

A disk cache is a buffer, usually kept in main memory, that functions as a cache of disk blocks between disk memory and the rest of main memory. Because of the principle of locality, the use of a disk cache should substantially reduce the number of block I/O transfers between main memory and disk.

Page 103: Weeks [01 02] 20100921

Review Questions [Homework 01]

1. List and briefly define three techniques for performing I/O.

2. What is the difference between logical I/O and device I/O?

3. What is the difference between block-oriented devices and stream-oriented devices? Give a few examples of each.

4. Why would you expect improved performance using a double buffer rather than a single buffer for I/O?

Page 104: Weeks [01 02] 20100921

Review Questions [Homework 02]

5. What delay elements are involved in a disk read or write?

6. Briefly define the disk scheduling policies illustrated in Figure 11.7.

7. Briefly define the seven RAID levels.

8. What is the typical disk sector size?

Page 105: Weeks [01 02] 20100921

Review Questions [1] (self-study)

• List three categories of I/O devices with one example of each [s. 11]
• I/O devices differ in a number of areas. Comment by giving six differences [s. 15]
• Data may be transferred as what? [s. 19]
• The nature of errors differs widely from one device to another. Comment by giving three different aspects. [s. 21]
• List three techniques for performing I/O [s. 23]
• List three I/O functions and explain one of them briefly. [s. 24/25/26]
• Define DMA and draw its typical block diagram [s. 27]
• DMA can be configured in several ways. Comment [s. 28/29/30]
• List three different DMA configurations and contrast two of them [s. 28/29/30]
• List three hierarchical designs of I/O organization and draw a pictorial diagram that contrasts two of them. [s. 35/36/37/38]
• Contrast block-oriented buffering and stream-oriented buffering, with examples of each. [s. 41/42]
• Contrast, by drawing, no buffer and single buffer. [s. 43/44]
• Contrast block-oriented single buffering and stream-oriented single buffering. [s. 45/46]
• Contrast, by drawing, single buffer and double buffer. [s. 44/47]
• Contrast, by drawing, double buffer and circular buffer. [s. 47/48]
• Discuss buffer limitations [s. 49]

Page 106: Weeks [01 02] 20100921

Review Questions [2] (self-study)

• The actual details of disk I/O operation depend on many things. Draw a pictorial diagram that illustrates the general timing of a disk I/O transfer. [s. 51]
• Disk performance is measured by access time. Comment to show the different components that are considered in measuring the access time. [s. 53]
• List four disk scheduling algorithms that are based on selection according to requester and conduct a comparison between two of them. [s. 64]
• List five disk scheduling algorithms that are based on selection according to requested item and conduct a comparison between two of them. [s. 64]
• Contrast RAID 0 and RAID 1 [s. 68/69]
• Conduct a comparison between RAID 0, RAID 1, and RAID 2 [s. 68/69/70]
• Contrast RAID 2 and RAID 3 [s. 70/71]
• Contrast RAID 4 and RAID 3 [s. 71/72]
• Contrast RAID 5 and RAID 4 [s. 72/73]
• Contrast RAID 6 and RAID 5 [s. 73/74]
• Define disk cache. State two ways that are used to populate the cache. [s. 76]
• Contrast LRU and LFU [s. 77/78]

Page 107: Weeks [01 02] 20100921

Review Statements [1]

• Human readable devices: devices used to communicate with the user, such as printers and terminals.
• Machine readable devices: used to communicate with electronic equipment such as disk drives, USB keys, sensors, controllers, and actuators.
• Communication devices: used to communicate with remote devices such as digital line drivers and modems.
• "Controller or I/O module is added" means that the processor uses programmed I/O without interrupts, and also that the processor does not need to handle details of external devices.
• "Controller or I/O module with interrupts" means that efficiency improves, as the processor does not spend time waiting for an I/O operation to be performed.
• Direct Memory Access means that blocks of data are moved into memory without involving the processor; the processor is involved at the beginning and end only.
• "I/O module is a separate processor" means that the CPU directs the I/O processor to execute an I/O program in main memory.
• "I/O processor" means that the I/O module has its own local memory; it is commonly used to control communications with interactive terminals.
• The processor delegates the I/O operation to the DMA module.
• The DMA module transfers data directly to or from memory.
• When complete, the DMA module sends an interrupt signal to the processor.
• Most I/O devices are extremely slow compared to main memory.
• The use of multiprogramming allows some processes to be waiting on I/O while another process executes.

Page 108: Weeks [01 02] 20100921

Review Statements [2]

• I/O cannot keep up with processor speed.
• Swapping is used to bring in ready processes, but this is an I/O operation itself.
• For simplicity and freedom from error, it is desirable to handle all I/O devices in a uniform manner.
• Hide most of the details of device I/O in lower-level routines.
• It is difficult to completely generalize, but a hierarchical modular design of I/O functions can be used.
• A hierarchical philosophy leads to organizing an OS into layers.
• Each layer relies on the next lower layer to perform more primitive functions.
• Each layer provides services to the next higher layer. Changes in one layer should not require changes in other layers.
• Logical I/O deals with the device as a logical resource.
• Device I/O converts requested operations into a sequence of I/O instructions.
• Scheduling and Control performs actual queuing and control operations.
• A communications architecture consists of a number of layers, such as TCP/IP.
• Directory management is concerned with user operations affecting files.
• The File System represents logical structure and operations.
• Physical organisation converts logical names to physical addresses.
• Processes must wait for I/O to complete before proceeding; to avoid deadlock, certain pages must remain in main memory during I/O.
• It may be more efficient to perform input transfers in advance of requests being made and to perform output transfers some time after the request is made.

Page 109: Weeks [01 02] 20100921

Review Statements [3]
• In block-oriented buffering, information is stored in fixed-size blocks and transfers are made a block at a time.
• Block-oriented buffering is used for disks and USB keys.
• Stream-oriented buffering transfers information as a stream of bytes.
• Stream-oriented buffering is used for terminals, printers, communication ports, mice and other pointing devices, and most other devices that are not secondary storage.
• No buffer: without a buffer, the OS directly accesses the device as and when it needs to.
• Single buffer: the operating system assigns a buffer in main memory for an I/O request.
• Block-oriented single buffer: input transfers are made to the buffer; the block is moved to user space when needed, and the next block is moved into the buffer (read-ahead or anticipated input). This is often a reasonable approach, since data is usually accessed sequentially.
• Stream-oriented single buffer: line-at-a-time or byte-at-a-time. Terminals often deal with one line at a time, with carriage return signaling the end of the line; byte-at-a-time suits devices where a single keystroke may be significant, as well as sensors and controllers.
• Double buffer: use two system buffers instead of one; a process can transfer data to or from one buffer while the operating system empties or fills the other.
• Circular buffer: more than two buffers are used; each individual buffer is one unit in a circular buffer. Used when the I/O operation must keep up with the process.
• Buffer limitations: buffering smooths out peaks in I/O demand, but with enough demand all buffers eventually fill and their advantage is lost. However, when there is a variety of I/O and process activities to service, buffering can increase the efficiency of the OS and the performance of individual processes.
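The circular-buffering scheme above can be sketched in a few lines. This is an illustrative toy (names and capacity are arbitrary), not OS code: one side (the device) fills slots while the other side (the process) empties them, wrapping around modulo the capacity.

```python
class CircularBuffer:
    """Toy ring buffer: producer (device) fills, consumer (process) empties."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity
        self.head = 0   # next slot to read
        self.tail = 0   # next slot to write
        self.count = 0  # number of filled slots

    def put(self, block):
        if self.count == self.capacity:
            return False                  # all buffers full: producer must wait
        self.slots[self.tail] = block
        self.tail = (self.tail + 1) % self.capacity
        self.count += 1
        return True

    def get(self):
        if self.count == 0:
            return None                   # all buffers empty: consumer must wait
        block = self.slots[self.head]
        self.head = (self.head + 1) % self.capacity
        self.count -= 1
        return block
```

With three slots, three `put` calls succeed, a fourth fails until the consumer calls `get`, which returns blocks in arrival order; this is the "I/O must keep up with the process" case the bullet describes.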

Page 110: Weeks [01 02] 20100921

Review Statements [4]
• Access time is the sum of seek time and rotational delay (rotational latency).
• Seek time: the time it takes to position the head at the desired track.
• Rotational delay (rotational latency): the time it takes for the beginning of the sector to reach the head.
• Transfer time: the time taken to transfer the data.
• First-in, first-out (FIFO): processes requests sequentially; fair to all processes; approaches random scheduling in performance if there are many processes.
• Priority: the goal is not to optimize disk use but to meet other objectives. Short batch jobs may have higher priority to provide good interactive response time; longer jobs may have to wait an excessively long time. A poor policy for database systems.
• Last-in, first-out (LIFO): good for transaction processing systems, since the device is given to the most recent user and there should be little arm movement; possibility of starvation, since a job may never regain the head of the line.
• Shortest Service Time First (SSTF): select the disk I/O request that requires the least movement of the disk arm from its current position; always choose the minimum seek time.
• SCAN: the arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction; then the direction is reversed.
• C-SCAN: restricts scanning to one direction only; when the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again.
• N-step-SCAN: segments the disk request queue into subqueues of length N; subqueues are processed one at a time, using SCAN; new requests are added to another queue while a queue is being processed.
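The seek-distance differences among these policies can be made concrete with a small simulation. The functions below are illustrative sketches (not OS code), and the track numbers are an arbitrary example queue; each function returns the total number of tracks the arm travels.

```python
def fifo(start, requests):
    """Serve requests in arrival order; sum the arm movement."""
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf(start, requests):
    """Always serve the pending request nearest the current position."""
    pending, total, pos = list(requests), 0, start
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))  # least arm movement
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

def scan(start, requests):
    """Sweep upward serving requests, then reverse and sweep back down."""
    up = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    return fifo(start, up + down)

tracks = [55, 58, 39, 18, 90, 160, 150, 38, 184]  # arbitrary request queue
# starting at track 100: FIFO travels 498 tracks, SSTF 248, SCAN 250
```

The gap between FIFO and the other two is exactly the point of the bullets above: FIFO is fair but behaves almost randomly, while SSTF and SCAN cut total seek distance roughly in half on this queue.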

Page 111: Weeks [01 02] 20100921

Review Statements [5]
• FSCAN: two subqueues. When a scan begins, all of the requests are in one queue, with the other empty; all new requests are put into the other queue, so service of new requests is deferred until all of the old requests have been processed.
• Disk I/O performance may be increased by spreading the operation over multiple read/write heads or multiple disks.
• Disk failures can be recovered from if parity information is stored.
• RAID stands for Redundant Array of Independent Disks: a set of physical disk drives viewed by the operating system as a single logical drive. Data are distributed across the physical drives of the array, and redundant disk capacity is used to store parity information, which provides recoverability from disk failure.
• RAID 0 (striped): not a true RAID, since there is no redundancy; disk failure is catastrophic; very fast due to parallel reads and writes.
• RAID 1 (mirrored): redundancy through duplication instead of parity; read requests can be made in parallel; simple recovery from disk failure.
• RAID 2 (Hamming code): synchronized disk rotation; data striping is used with extremely small strips; a Hamming code corrects single-bit errors and detects double-bit errors.
• RAID 3 (bit-interleaved parity): similar to RAID 2, but uses a simple parity bit instead of a Hamming code, with all parity bits stored on a single drive.
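The "recoverability from disk failure" that parity provides comes down to XOR, the operation underlying RAID 3 through 5. A minimal sketch with arbitrary byte values:

```python
def parity(strips):
    """XOR corresponding bytes across all strips."""
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            out[i] ^= b
    return bytes(out)

# three data strips, as if on three data disks (values are arbitrary)
data = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]
p = parity(data)                       # stored on the parity disk

# simulate losing drive 1: XOR of the survivors plus parity rebuilds it
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

Because XOR is its own inverse, any single lost strip (data or parity) can be reconstructed from the remaining ones; RAID 6 extends this with a second, independent calculation so that two failures are survivable.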

Page 112: Weeks [01 02] 20100921

Review Statements [6]
• RAID 4 (block-level parity): a bit-by-bit parity strip is calculated across corresponding strips on each data disk, and the parity bits are stored in the corresponding strip on the parity disk.
• RAID 5 (block-level distributed parity): similar to RAID 4, but distributes the parity strips across all drives.
• RAID 6 (dual redundancy): two different parity calculations are carried out and stored in separate blocks on different disks; can recover from two disks failing.
• Disk cache: a buffer in main memory for disk sectors, containing a copy of some of the sectors on the disk. When an I/O request is made for a particular sector, a check is made to determine whether the sector is in the disk cache; a number of replacement policies exist for populating the cache.
• Least Recently Used (LRU): the block that has been in the cache longest with no reference to it is replaced. A stack of pointers references the cache, with the most recently referenced block on top; when a block is referenced or brought into the cache, it is placed on top of the stack.
• Least Frequently Used (LFU): the block that has experienced the fewest references is replaced. A counter is associated with each block and incremented each time the block is accessed; when replacement is required, the block with the smallest count is selected.

Page 113: Weeks [01 02] 20100921

Problems [1]

1. Consider a program that accesses a single I/O device, and compare unbuffered I/O to the use of a buffer. Show that the use of the buffer can reduce the running time by at most a factor of two.

2. Generalize the result of Problem 1 to the case in which a program refers to n devices.

3. Calculate how much disk space (in sectors, tracks, and surfaces) will be required to store 300,000 120-byte logical records if the disk is fixed-sector with 512 bytes/sector, 96 sectors/track, 110 tracks per surface, and 8 usable surfaces. Ignore any file header record(s) and track indexes, and assume that records cannot span two sectors.
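One way to sanity-check the arithmetic for this problem (a sketch, not an official solution) is to round up at each level of the hierarchy, since records cannot span sectors and partial tracks or surfaces still consume a whole unit:

```python
import math

records, record_size = 300_000, 120
sector_size = 512

recs_per_sector = sector_size // record_size          # 4 whole records fit
sectors = math.ceil(records / recs_per_sector)        # 75,000 sectors
tracks = math.ceil(sectors / 96)                      # 782 tracks (75,000/96 = 781.25)
surfaces = math.ceil(tracks / 110)                    # 8 surfaces (782/110 ≈ 7.1)
```

Note that 32 bytes per sector (512 − 4 × 120) go unused because of the no-spanning assumption, and the file just fits within the 8 usable surfaces.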

4. Consider the disk system described in Problem 3, and assume that the disk rotates at 360 rpm. A processor reads one sector from the disk using interrupt-driven I/O, with one interrupt per byte. If it takes 2.5 μs to process each interrupt, what percentage of the time will the processor spend handling I/O (disregard seek time)?
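A quick way to check the arithmetic here (a sketch under the problem's stated parameters, not an official solution) is to work from the rotation rate down to the time per byte, then compare that against the interrupt-handling cost:

```python
rev_time = 60 / 360            # 360 rpm -> one revolution every ~166.7 ms
sector_time = rev_time / 96    # 96 sectors/track -> ~1.736 ms per sector
byte_time = sector_time / 512  # 512 bytes/sector -> ~3.39 us per byte
busy = 2.5e-6 / byte_time      # 2.5 us of interrupt handling per byte arrival
# busy ≈ 0.737: the processor spends roughly 74% of its time handling I/O
```

With an interrupt arriving every ~3.39 μs and each costing 2.5 μs, interrupt-per-byte I/O leaves the processor little time for anything else, which motivates the DMA variant in the next problem.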

Page 114: Weeks [01 02] 20100921

Problems [2]

5. Repeat the preceding problem using DMA, and assume one interrupt per sector.
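The same style of check works for the DMA case (again a sketch, not an official solution): now only one 2.5 μs interrupt occurs per sector rather than per byte, so the cost is spread over the full sector time:

```python
sector_time = (60 / 360) / 96   # ~1.736 ms to read one sector at 360 rpm
busy = 2.5e-6 / sector_time     # one 2.5 us interrupt per sector with DMA
# busy ≈ 0.0014: the processor spends only about 0.14% of its time on I/O
```

Comparing the two results shows the factor-of-512 reduction in interrupt frequency translating directly into processor availability.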

6. It should be clear that disk striping can improve the data transfer rate when the strip size is small compared to the I/O request size. It should also be clear that RAID 0 provides improved performance relative to a single large disk, because multiple I/O requests can be handled in parallel. However, in this latter case, is disk striping necessary? That is, does disk striping improve I/O request rate performance compared to a comparable disk array without striping?

7. Consider a 4-drive, 200-GB-per-drive RAID array. What is the available data storage capacity for each of RAID levels 0, 1, 3, 4, 5, and 6?
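The capacities follow directly from the redundancy overhead of each level described in the review statements; a small sketch to check them (not an official solution):

```python
n, drive_gb = 4, 200          # four drives, 200 GB each

capacity_gb = {
    "RAID 0": n * drive_gb,         # striping only, no redundancy
    "RAID 1": (n // 2) * drive_gb,  # every drive is mirrored
    "RAID 3": (n - 1) * drive_gb,   # one drive's worth of parity
    "RAID 4": (n - 1) * drive_gb,   # one dedicated parity drive
    "RAID 5": (n - 1) * drive_gb,   # parity distributed, same overhead
    "RAID 6": (n - 2) * drive_gb,   # two parity blocks per stripe
}
```

So the array offers 800 GB at RAID 0, 400 GB at RAID 1 and RAID 6, and 600 GB at RAID 3, 4, and 5.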