1
Introduction to Operating Systems
CS 499: Special Topics in Cyber Security S. Sean Monemi
2
Overview
• Operating Systems Definition
• Processes and Threads
• Process Control Blocks
• Process Scheduling and Interrupts
3
What is an Operating System
• A program that acts as an intermediary between a user and the computer hardware
• Operating system goals:
– Make the computer system convenient to use
– Use the computer hardware in an efficient manner
– Execute user programs and make solving user problems easier
4
What is an Operating System
• The operating system is that portion of the software that runs in Kernel mode or Supervisor mode
• It performs two basically unrelated functions:
– Extending the machine
– Managing resources
5
What is an Operating System
• An extended machine
– Hides the messy details which must be performed
– Presents the user with a virtual machine that is easier to use
– Controls the execution of user programs and the operation of I/O devices
• Resource manager (allocator)
– Manages and allocates resources
– Each program gets time with the resource
– Each program gets space on the resource
6
Example
• A computer system consists of:
– hardware
– system programs
– application programs
7
Process
8
Introduction
• Computers perform operations concurrently
– Examples:
• compiling a program
• sending a file to a printer
• rendering a Web page
• playing music
• receiving e-mail
• Process – an abstraction of a running program
• The terms job and process are used almost interchangeably
9
Process Definition
• Process
– Processes enable systems to perform and track simultaneous activities
– A program in execution; process execution must progress in sequential fashion
– Processes transition between process states
– Operating systems perform operations on processes such as creating, destroying, suspending, resuming and waking
• A process includes:
– program counter
– stack
– data section
10
Definition of Process
• A program in execution
– A process has its own address space consisting of:
• Text region– Stores the code that the processor executes
• Data region– Stores variables and dynamically allocated memory
• Stack region– Stores instructions and local variables for active procedure calls
11
Processes
a) Multiprogramming of four programs
b) Conceptual model of 4 independent, sequential processes
c) Only one program active at any instant
12
Process Creation
Ways to cause process creation
1. System initialization
2. Execution of a process-creation system call by a running process
3. User request to create a new process
4. Initiation of a batch job
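On UNIX systems, the second of these — a process-creation system call — is fork. A minimal Python sketch (UNIX-only, using os.fork; the exit code is an arbitrary illustrative value) showing a parent creating a child and collecting its termination status:

```python
import os

def spawn_child(code=7):
    """Fork a child that exits normally; the parent collects its status."""
    pid = os.fork()                       # child sees 0, parent sees child PID
    if pid == 0:
        # Child process: a near-copy of the parent, from this point on.
        os._exit(code)                    # normal exit (voluntary)
    _, status = os.waitpid(pid, 0)        # parent blocks until child terminates
    return os.waitstatus_to_exitcode(status)
```

The child's os._exit here is a "normal exit (voluntary)" in the terms of the next slide; the other termination conditions would surface to the parent as a nonzero or signal-derived status.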
13
Process Termination
Conditions which terminate processes
1. Normal exit (voluntary)
2. Error exit (voluntary)
3. Fatal error (involuntary)
4. Killed by another process (involuntary)
14
Process Management
• The OS provides fundamental services to processes:
– Creating processes
– Destroying processes
– Suspending processes
– Resuming processes
– Changing a process's priority
– Blocking processes
– Waking up processes
– Dispatching processes
– Interprocess communication (IPC)
15
Process States
• A process moves through a series of discrete process states:
– Running state
• The process is executing on a processor; its instructions are being executed, using the CPU
– Ready state
• The process could execute on a processor if one were available
• Runnable; temporarily stopped to let another process run
– Blocked state
• The process is waiting for some event to happen before it can proceed
• Unable to run until some external event happens
16
Process Transitions
• State Transitions: four are possible
• ready to running:
– when a process is dispatched
• running to ready:
– when the quantum expires
• running to blocked:
– when a process blocks (e.g. on I/O)
• blocked to ready:
– when the awaited event occurs
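These four legal transitions can be encoded as a small lookup table; any other pair (for example blocked to running) is rejected. A sketch (the function and table names are illustrative, not an OS API):

```python
# The four legal state transitions and the event that causes each one.
TRANSITIONS = {
    ("ready", "running"):   "process is dispatched",
    ("running", "ready"):   "quantum expires",
    ("running", "blocked"): "process blocks (e.g. waits for I/O)",
    ("blocked", "ready"):   "awaited event occurs",
}

def cause(src, dst):
    """Return the event that triggers src -> dst, or raise if illegal."""
    try:
        return TRANSITIONS[(src, dst)]
    except KeyError:
        raise ValueError(f"illegal transition {src} -> {dst}") from None
```

Note that blocked to running is absent: a blocked process must first become ready and then be dispatched.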
17
Suspend and Resume
• Suspending a process
– Indefinitely removes it from contention for time on a processor without destroying it
– Useful for detecting security threats and for software debugging purposes
– A suspension may be initiated by the process being suspended or by another process
– A suspended process must be resumed by another process
– Two suspended states:
• suspended-ready
• suspended-blocked
18
Process Hierarchies
• A parent creates a child process; child processes can create their own processes
• This forms a hierarchy
– UNIX calls this a "process group"
• Windows has no concept of a process hierarchy
– all processes are created equal
19
Process Control Blocks (PCBs) – Process Descriptors
20
Process Control Blocks
• PCBs maintain information that the OS needs to manage the process
– Typically include information such as:
• Process identification number (PID)
• Process state
• Program counter
• Scheduling priority
• Credentials
• A pointer to the process's parent process
• Pointers:
– to the process's child processes
– to locate the process's data and instructions in memory
– to allocated resources
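A few of these fields can be sketched as a small data structure, together with the process-table bookkeeping described on the next slide (field and function names are illustrative, not from any real kernel):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A minimal process control block with a few of the fields listed above."""
    pid: int
    state: str = "ready"
    program_counter: int = 0
    priority: int = 0
    parent: object = None
    children: list = field(default_factory=list)

def create_process(process_table, pid, parent=None):
    """Allocate a PCB, link it to its parent, and register it in the table."""
    pcb = PCB(pid=pid, parent=parent)
    if parent is not None:
        parent.children.append(pcb)
    process_table[pid] = pcb              # system-wide process table entry
    return pcb

def terminate_process(process_table, pid):
    """On termination the OS removes the process from the process table."""
    del process_table[pid]
```

The parent/children pointers also model the process hierarchy from the earlier slide.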
21
Process Control Blocks
• Process table
– The OS maintains pointers to each process's PCB in a system-wide or per-user process table
– Allows for quick access to PCBs
– When a process is terminated, the OS removes the process from the process table and frees all of the process's resources
Process table and process control blocks
22
Context Switching
• Context switches
– Performed by the OS to stop executing a running process and begin executing a previously ready process
– Save the execution context of the running process to its PCB
– Load the ready process’s execution context from its PCB
– Must be transparent to processes
– Require the processor to not perform any "useful" computation
• The OS must therefore minimize context-switching time
– Performed in hardware by some architectures
– Performed in hardware by some architectures
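The save/load steps can be sketched in pure Python, with the CPU and PCBs modeled as dictionaries (an illustrative toy, not how a kernel is written):

```python
def context_switch(cpu, running_pcb, ready_pcb):
    """Save the running process's execution context to its PCB, then load
    the ready process's saved context from its PCB."""
    running_pcb["context"] = dict(cpu)        # save registers, PC, etc.
    running_pcb["state"] = "ready"
    cpu.clear()
    cpu.update(ready_pcb["context"])          # restore the other context
    ready_pcb["state"] = "running"
```

Because the switched-out process later resumes from exactly the saved context, the switch is transparent to both processes.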
23
Threads
24
The Thread Model
(a) Three unrelated processes, each with one thread and each with a different address space
(b) One process with three threads (multithreading); all three threads, which are part of the same job, share the same address space
25
The Thread Model
Each thread has its own stack
Each thread designates a portion of a program that may execute concurrently with other threads
26
Thread Definition
• Processes are used to group resources together; threads are the entities scheduled for execution on the CPU
• A traditional or heavyweight process is equal to a task with one thread
• A thread (or lightweight process) is a basic unit of CPU utilization; it consists of:
– program counter
– register set
– stack space
• A thread shares with its peer threads:
– code section
– data section
– OS resources
• collectively known as a task
27
The Thread Model
Items shared by all threads in a process vs. items private to each thread
28
Thread States
• Thread states
– Born state
– Ready state (runnable)
– Running state
– Dead state
– Blocked state
– Waiting state
– Sleeping state
• A sleep interval specifies how long a thread will sleep
29
Thread Operations
• Threads and processes have common operations
– Create
– Exit (terminate)
– Suspend
– Resume
– Sleep
– Wake
30
The Thread Library
Multithreading Library Procedure Calls:
• thread_create
– A thread has the ability to create new threads
• thread_exit
– When a thread has finished its work, it can exit
• thread_wait
– One thread can wait for a (specific) thread to exit
• thread_yield
– Allows a thread to voluntarily give up the CPU to let another thread run
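Python's standard threading module offers close analogues of the first three calls, which can be sketched as follows (the function run_threads and the squaring workload are illustrative):

```python
import threading

def run_threads(values):
    """thread_create / thread_wait analogues using Python's threading module."""
    results = []                      # shared: all threads use one address space
    threads = [threading.Thread(target=lambda v=v: results.append(v * v))
               for v in values]
    for t in threads:
        t.start()                     # thread_create: a new thread of control
    for t in threads:
        t.join()                      # thread_wait: wait for that thread to exit
    return sorted(results)            # thread_exit happens when target returns
```

The shared results list illustrates the earlier point that peer threads share the data section of their task.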
31
Windows XP Threads
• Windows XP threads can create fibers
– A fiber is similar to a thread
– A fiber is scheduled for execution by the thread that creates it
– A thread is scheduled by the scheduler
• Windows XP provides each process with a thread pool that consists of a number of worker threads, which are kernel threads that execute functions specified by user threads
Windows XP thread state-transition diagram
32
Linux Threads
• Linux allocates the same type of process descriptor to processes and threads (tasks)
• Linux uses the UNIX-based system call fork to spawn child tasks
• To enable threading, Linux provides a modified version named clone
– clone accepts arguments that specify which resources to share with the child task
Linux task state-transition diagram
33
Thread Usage
A word processor with three threads:
T1 – interacts with the user
T2 – handles reformatting
T3 – saves to disk
34
Thread Usage
A multithreaded Web server
(a) Dispatcher thread
(b) Worker thread
35
Threading Models
• Three most popular threading models
– User-level threads
– Kernel-level threads
– Hybrid-level threads (combination of user-level and kernel-level threads)
36
User-level Threads
A user-level threads package
- Thread management is done by a user-level threads library
– supported above the kernel, via a set of library calls at the user level
- Each process has its own private thread table to keep track of the threads in that process, recording each thread's program counter, stack pointer, registers, state, etc.
- The thread table is managed by the run-time system
- Many-to-one thread mappings
- The OS maps all threads in a multithreaded process to a single execution context
Examples:
• POSIX Pthreads
• Mach C-threads
• Solaris threads
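A user-level package's run-time system can be imitated with Python generators: the kernel sees a single thread of control, while the loop below round-robins "threads" held in its own table, with each yield playing the role of thread_yield. This is a conceptual sketch, not how any of the listed packages is implemented:

```python
from collections import deque

def thread(name, steps):
    """A 'thread' as a generator: each yield plays the role of thread_yield."""
    for i in range(steps):
        yield f"{name}{i}"

def run(threads):
    """A user-level run-time system: round-robin over the ready queue."""
    ready = deque(threads)            # the per-process thread table
    trace = []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))     # run the thread until it yields
            ready.append(t)           # back of the ready queue
        except StopIteration:
            pass                      # the thread exited
    return trace
```

Note that if any generator performed a blocking system call, the whole loop, and hence every "thread", would stop: exactly the blocking-call disadvantage discussed two slides ahead.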
37
Implementing Threads in User Space
Advantages:
• They can run on an existing OS
• Thread switching is faster than trapping to the kernel
– no trap and no context switch are needed
– the memory cache need not be flushed
• Thread scheduling is very fast
– local procedures are used to save the thread's state and invoke the scheduler
• Each process can have its own customized scheduling algorithm
• Better scalability
– kernel threads invariably require some table and stack space in the kernel
38
Implementing Threads in User Space
Disadvantages:
• The problem of how blocking system calls are implemented
– e.g. a thread reads from the keyboard before any keys have been hit
– letting the thread actually make the system call is unacceptable (this would stop all the threads)
• The kernel may block the entire process
– if a thread causes a page fault, the kernel, not even knowing about the existence of threads, naturally blocks the entire process until the disk I/O is complete, even though other threads might be runnable
• If a thread starts running, no other thread in that process will ever run unless the first thread voluntarily gives up the CPU
– unless a thread enters the run-time system of its own free will, the scheduler will never get a chance
39
Implementing Threads in the Kernel
A threads package managed by the kernel
• Supported by the kernel
- the kernel knows about and manages threads
- no run-time system is needed in each process
- no thread table in each process
• The kernel has a thread table
- the thread table keeps track of all threads in the system
• A thread makes a kernel call
- when a thread wants to create a new thread
- when a thread wants to destroy an existing thread
- the kernel updates the kernel thread table
Examples:
- Windows 95/98/NT
- Solaris
- Digital UNIX
40
Implementing Threads in the Kernel
• Kernel-level threads attempt to address the limitations of user-level threads by mapping each thread to its own execution context
– Kernel-level threads provide a one-to-one thread mapping
• Advantages:
– Increased scalability, interactivity, and throughput
• Disadvantages:
– Overhead due to context switching and reduced portability due to OS-specific APIs
• Kernel-level threads are not always the optimal solution for multithreaded applications
41
Hybrid Implementations
• The combination of user-level and kernel-level thread implementations
– Many-to-many thread mapping (m-to-n thread mapping)
• The number of user and kernel threads need not be equal
• Can reduce overhead compared to one-to-one thread mappings by implementing thread pooling
42
Pop-Up Threads
(a) before message arrives (b) after message arrives
• Threads are frequently useful in distributed systems
• Replaces the traditional approach of having a thread blocked on a receive system call waiting for an incoming message
• Pop-up thread – a new thread created by the arrival of a new message
43
Java Multithreading
• Java allows the application programmer to create threads that can port to many computing platforms
• Threads
– Created by class Thread
– Execute code specified in a Runnable object's run method
• Java supports operations such as naming, starting and joining threads
44
Java Multithreading
Java threads being created, starting, sleeping and printing
45
Java Multithreading
46
Processor Scheduling
47
Scheduling Concepts
• Scheduling is a fundamental OS function
• Scheduling is central to OS design
• Almost all computer resources are scheduled before use
• Maximum CPU utilization obtained with multiprogramming
• CPU scheduling is the basis of multiprogrammed OS
• Scheduler – the part of the OS that decides which process (in the ready state) to run next
48
Scheduling Concepts
CPU–I/O Burst Cycle
The success of CPU scheduling depends on the following observed property of processes:
• Process execution consists of:
– a cycle of CPU execution
– and I/O wait
• Processes alternate back and forth between these two states
49
Scheduling Concepts
Alternating sequence of CPU and I/O bursts
• Process execution begins with:
– a CPU burst
– then an I/O burst
– then another CPU burst
– then another I/O burst
– ...and so on
• Eventually, the final CPU burst ends with a system request to terminate execution, rather than with another I/O burst
50
Scheduling
Histogram of CPU-burst times
• Bursts of CPU usage alternate with periods of I/O wait– a CPU-bound program might have a few long CPU bursts– an I/O-bound program typically will have many short CPU bursts
51
Processor scheduling policy
• Processor scheduling policy
– Decides which process runs at a given time
– Different schedulers will have different goals
• Maximize throughput
• Minimize latency
• Prevent indefinite postponement
• Complete processes by given deadlines
• Maximize processor utilization
52
Scheduling Levels
• High-level scheduling
– Determines which jobs can compete for resources
– Controls the number of processes in the system at one time
• Intermediate-level scheduling
– Determines which processes can compete for processors
– Responds to fluctuations in system load
• Low-level scheduling
– Assigns priorities
– Assigns processors to processes
53
Scheduling Algorithms
• Nonpreemptive – picks a process to run and then just lets it run until it blocks (either on I/O or waiting for another process) or voluntarily releases the CPU
• Preemptive – picks a process and lets it run for a maximum of some fixed time
54
Preemptive vs. Nonpreemptive Scheduling
• Preemptive processes
– Can be removed from their current processor
– Can lead to improved response times
– Important for interactive environments
– Preempted processes remain in memory
• Nonpreemptive processes
– Run until completion or until they yield control of a processor
– Unimportant processes can block important ones indefinitely
55
Scheduling Categories
• Batch
– Quick response is not the issue
– Nonpreemptive and preemptive algorithms are both acceptable
– Reduces process switches and improves performance
• Interactive
– Preemption is essential
• Real time
– Preemption is sometimes not needed
56
CPU Scheduler
• Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates
• Scheduling under 1 and 4 is nonpreemptive.
• All other scheduling is preemptive.
57
Scheduling Objective Items
• CPU utilization
– keep the CPU as busy as possible
• Throughput
– number of processes that complete their execution per time unit
• Turnaround time
– amount of time to execute a particular process
• Waiting time
– amount of time a process has been waiting in the ready queue
• Response time
– amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments)
58
Scheduling Objectives
• Different objectives depending on the system
– Maximize throughput
– Maximize the number of interactive processes receiving acceptable response times
– Maximize resource utilization
– Avoid indefinite postponement
– Enforce priorities
– Minimize overhead
– Ensure predictability
• Several goals common to most schedulers
– Fairness
– Predictability
– Scalability
59
Scheduling Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
60
Scheduling Algorithms
• Scheduling algorithms
– Decide when and for how long each process runs
– Make choices about:
• Preemptibility
• Priority
• Running time
• Run-time-to-completion
• Fairness
61
Scheduling Algorithm Goals
62
Scheduling Algorithms
• Batch
– First-In First-Out
– Shortest Job First
– Shortest Remaining Time Next
– Three-Level Scheduling
• Real Time
– Static
– Dynamic
• Interactive
– Round-Robin Scheduling
– Priority Scheduling
– Multiple Queues
– Shortest Process Next
– Guaranteed Scheduling
– Lottery Scheduling
– Fair-Share Scheduling
63
First-In-First-Out (FIFO) Scheduling
• FIFO scheduling– Simplest scheme– Processes dispatched according to arrival time– Nonpreemptible– Rarely used as primary scheduling algorithm
See Examples…
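Because FIFO is nonpreemptible and dispatches in arrival order, each process's waiting time is just the sum of the bursts ahead of it. A minimal sketch, assuming all processes arrive at time 0 (the burst values in the test are illustrative):

```python
def fifo_waits(bursts):
    """Waiting times under FIFO for processes that all arrive at time 0,
    dispatched in the order given."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)           # time spent waiting before dispatch
        clock += burst                # nonpreemptible: runs to completion
    return waits
```

With bursts of 24, 3, and 3, the waits are 0, 24, and 27 (average 17): one long job ahead in the queue penalizes everyone behind it.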
64
Shortest-Process-First (SPF) Scheduling
• Scheduler selects the process with the smallest time to finish
– Lower average wait time than FIFO
• Reduces the number of waiting processes
– Potentially large variance in wait times
– Nonpreemptive
• Results in slow response times to arriving interactive requests
– Relies on estimates of time-to-completion
• Can be inaccurate or falsified
– Unsuitable for use in modern interactive systems
65
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
• When several equally important jobs are sitting in the queue waiting to be started, the scheduler picks the shortest job first.
• Two schemes:
– Nonpreemptive – once the CPU is given to the process it cannot be preempted until it completes its CPU burst
– Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
• SJF is optimal – it gives the minimum average waiting time for a given set of processes
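The nonpreemptive scheme reduces to sorting by burst length when every process is present at time 0. A sketch (illustrative burst values in the test):

```python
def sjf_waits(bursts):
    """Waiting times under nonpreemptive SJF, all processes present at
    time 0: always dispatch the shortest remaining job next."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock
        clock += bursts[i]
    return waits
```

For bursts 6, 8, 7, 3 the run order is 3, 6, 7, 8, giving waits 3, 16, 9, 0 and an average of 7; FIFO on the same set would average 10.25, illustrating the optimality claim.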
66
Shortest-Remaining-Time (SRT) Scheduling
• SRT scheduling
– Preemptive version of SPF
– Shorter arriving processes preempt a running process
– Very large variance of response times: long processes wait even longer than under SPF
– Not always optimal
• A short incoming process can preempt a running process that is near completion
• Context-switching overhead can become significant
See Examples…
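SRT can be simulated one tick at a time: at every tick, run whichever arrived process has the shortest remaining time, preempting as needed. A sketch (the arrival/burst values in the test are illustrative; context-switch cost is ignored):

```python
def srt_waits(procs):
    """Waiting times under preemptive SRT. procs is a list of
    (arrival_time, burst_time) pairs."""
    n = len(procs)
    remaining = [burst for _, burst in procs]
    finish = [0] * n
    t = 0
    while any(remaining):
        ready = [i for i in range(n) if procs[i][0] <= t and remaining[i] > 0]
        if not ready:
            t += 1                    # nothing has arrived yet; idle tick
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1             # run the chosen process for one tick
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    # waiting = turnaround - burst = (finish - arrival) - burst
    return [finish[i] - procs[i][0] - procs[i][1] for i in range(n)]
```

In the test case the 9-unit job waits 15 units while shorter arrivals repeatedly jump ahead of it: the response-time variance noted above.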
67
Three-Level Scheduling
Three-level scheduling:
1. Admission scheduler – which job from the input queue is admitted?
2. Memory scheduler – which process is moved to or from disk?
3. CPU scheduler – which ready process in main memory runs next?
68
Round-Robin (RR) Scheduling
• Round-robin scheduling
– Based on FIFO
– Processes run only for a limited amount of time called a time slice or quantum
– Preemptible
– Requires the system to maintain several processes in memory to minimize overhead
– Often used as part of more complex algorithms
69
Round-Robin (RR) Scheduling
• Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once.
No process waits more than (n-1)q time units.
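The quantum-driven rotation can be sketched with a ready queue: each process runs for at most one quantum, then goes to the back of the queue if unfinished (illustrative burst values; all processes arrive at time 0, and switch cost is ignored):

```python
from collections import deque

def rr_waits(bursts, quantum):
    """Waiting times under round-robin for processes all arriving at time 0."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))
    finish = [0] * len(bursts)
    t = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])  # run for at most one quantum
        t += run
        remaining[i] -= run
        if remaining[i]:
            ready.append(i)               # quantum expired: preempted
        else:
            finish[i] = t                 # process terminated
    return [finish[i] - bursts[i] for i in range(len(bursts))]
```

For bursts 24, 3, 3 with quantum 4, the waits are 6, 4, 7 (average about 5.67), versus 0, 24, 27 under FIFO: the short jobs no longer sit behind the long one.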
70
Round-Robin (RR) Scheduling
• Quantum size
– Determines response time to interactive requests
– Very large quantum size
• Processes run for long periods
• Degenerates to FIFO
– Very small quantum size
• System spends more time context switching than running processes
– Middle ground
• Long enough for interactive processes to issue an I/O request
• Batch processes still get the majority of processor time
See Examples…
71
Highest-Response-Ratio-Next (HRRN) Scheduling
• HRRN scheduling
– Improves upon SPF scheduling
– Still nonpreemptive
– Considers how long a process has been waiting
– Prevents indefinite postponement
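HRRN dispatches the process with the highest response ratio, commonly defined as (waiting time + service time) / service time, so a long job's priority grows the longer it waits. A sketch (the times in the test are illustrative):

```python
def hrrn_next(procs, now):
    """Index of the process HRRN dispatches next. procs is a list of
    (arrival_time, service_time) pairs."""
    def ratio(i):
        arrival, service = procs[i]
        return ((now - arrival) + service) / service
    return max(range(len(procs)), key=ratio)
```

Short jobs still tend to win (as in SPF), but a long job that has waited long enough eventually has the highest ratio, which is why indefinite postponement is prevented.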
72
Fair Share Scheduling (FSS)
• FSS controls users' access to system resources
– Some user groups are more important than others
– Ensures that less important groups cannot monopolize resources
– Unused resources are distributed according to the proportion of resources each group has been allocated
– Groups not meeting resource-utilization goals get higher priority
Standard UNIX process scheduler: the scheduler grants the processor to users, each of whom may have many processes.
Fair share scheduler: the fair share scheduler divides system resource capacity into portions, which are then allocated by process schedulers assigned to various fair share groups.
73
Priority Scheduling
A scheduling algorithm with four priority classes
Priority scheduling – Each process is assigned a priority, and the runnable process with the highest priority is allowed to run first
74
Deadline Scheduling
• Deadline scheduling
– Process must complete by a specific time
– Used when results would be useless if not delivered on time
– Difficult to implement
• Must plan resource requirements in advance
• Incurs significant overhead
• Service provided to other processes can degrade
75
Scheduling in Real-Time Systems
Real-time Systems Categories:
• Soft real-time computing – requires that critical processes receive priority over less fortunate ones
– Missing an occasional deadline is undesirable, but nevertheless tolerable
• Hard real-time systems – required to complete a critical task within a guaranteed amount of time
– absolute deadlines that must be met
76
Real-Time Scheduling
• Real-time scheduling
– Related to deadline scheduling
– Processes have timing constraints
– Also encompasses tasks that execute periodically
• Two categories
– Soft real-time scheduling
• Does not guarantee that timing constraints will be met
• For example, multimedia playback
– Hard real-time scheduling
• Timing constraints will always be met
• Failure to meet a deadline might have catastrophic results
• For example, air traffic control
77
Real-Time Scheduling
• Static real-time scheduling
– Does not adjust priorities over time
– Low overhead
– Suitable for systems where conditions rarely change
• Hard real-time schedulers
– Rate-monotonic (RM) scheduling
• Process priority increases monotonically with the frequency with which it must execute
– Deadline RM scheduling
• Useful for a process that has a deadline that is not equal to its period
78
Real-Time Scheduling
• Dynamic real-time scheduling
– Adjusts priorities in response to changing conditions
– Can incur significant overhead, but must ensure that the overhead does not result in increased missed deadlines
– Priorities are usually based on processes' deadlines
• Earliest-deadline-first (EDF)
– Preemptive; always dispatch the process with the earliest deadline
• Minimum-laxity-first
– Similar to EDF, but bases priority on laxity, which is based on the process's deadline and its remaining run-time-to-completion
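The two dispatch rules differ in what they minimize: EDF looks at the deadline alone, while minimum-laxity-first looks at the slack, laxity = deadline − current time − remaining run time. A sketch where the two rules pick different processes (process names and times are illustrative):

```python
def edf_pick(ready):
    """Earliest-deadline-first: dispatch the ready process whose deadline
    is soonest. ready maps name -> (deadline, remaining_run_time)."""
    return min(ready, key=lambda p: ready[p][0])

def mlf_pick(ready, now=0):
    """Minimum-laxity-first: laxity = deadline - now - remaining run time."""
    return min(ready, key=lambda p: ready[p][0] - now - ready[p][1])
```

In the test, A's deadline (20) is sooner than B's (25), so EDF picks A; but B needs 24 of its 25 remaining units, leaving laxity 1 versus A's 18, so MLF picks B.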
79
Scheduling in Real-Time Systems
Real-time systems respond to events:
• Periodic – occurring at regular intervals
• Aperiodic – occurring unpredictably
Schedulable real-time system
• Given
– m periodic events (multiple periodic event streams)
– event i occurs within period Pi and requires Ci seconds of CPU time to handle each event
• Then the load can only be handled if:

Σ (i = 1 to m) Ci / Pi ≤ 1
See Examples…
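The utilization test above is a one-line check; the (Ci, Pi) pairs in the test are illustrative:

```python
def schedulable(streams):
    """Utilization test for m periodic event streams: the load can only be
    handled if the sum of Ci/Pi does not exceed 1. streams is a list of
    (Ci, Pi) pairs."""
    return sum(c / p for c, p in streams) <= 1
```

For example, streams needing 10 ms every 50 ms, 15 ms every 100 ms, and 5 ms every 30 ms use about 52% of the CPU and are schedulable, while 40/50 plus 15/30 would demand 130% and cannot be handled.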
80
Java Thread Scheduling
• Operating systems provide varying thread scheduling support
– User-level threads
• Implemented by each program independently
• Operating system unaware of threads
– Kernel-level threads
• Implemented at kernel level
• Scheduler must consider how to allocate processor time to a process's threads
81
Java Thread Scheduling
• Java threading scheduler
– Uses kernel-level threads if available
– User-mode threads implement timeslicing
• Each thread is allowed to execute for at most one quantum before preemption
– Threads can yield to others of equal priority
• Only necessary on nontimesliced systems
• Threads waiting to run are called waiting, sleeping or blocked
82
Thread Scheduling – User level
Possible scheduling of user-level threads
• 50-msec process quantum
• threads run 5 msec/CPU burst
83
Thread Scheduling – Kernel level
Possible scheduling of kernel-level threads
• 50-msec process quantum
• threads run 5 msec/CPU burst