
1

INFO 2950

Prof. Carla Gomes, [email protected]

Module Algorithms and Growth Rates

Rosen, Chapter 3

2

The algorithm problem

Specification of all legal inputs

and

Specification of desired output as a function of the input

Any legal input → The algorithm → The desired output

Examples of algorithmic problems

Problem 1: Input: A list L of integers. Output: The sum of the integers on L.

Problem 2: Input: Two texts A and B in English. Output: The list of common words in both texts.

Problem 3: Input: A road map of cities with distances attached to the road map, and two designated cities A and B. Output: A description of the shortest path between A and B.

Instance of an algorithmic problem / Size of an instance

An instance of an algorithmic problem is a concrete case of such a problem with specific input. The size of an instance is given by the size of its input.

Examples of instances:

– An instance of problem 1:

L= 2, 5, 26, 8, 170, 79, 1002

Problem 1: Input: A list L of integers. Output: The sum of the integers on L.

Size of instance = length of list

Size of instance = |L| = 7

We use a “natural” measure of input size. Why is this generally OK? Strictly speaking we should count bits.

Examples of instances

Problem 3: Input: A road map of cities with distances attached to the road map, and two designated cities A and B. Output: A description of the shortest path between A and B.

[Figure: a road map with 6 cities (nodes) and 9 roads (edges), each road labeled with a distance.]

Size of instance = number of cities and roads

A particular instance: size of instance = 6 nodes, 9 edges

The size of an instance is given by the size of its input.

6

Algorithm

Definition:

An algorithm is a finite set of precise instructions for performing a computation or for solving a problem.

In general we describe algorithms using pseudocode, i.e., a language that is an intermediate step between an English-language description of an algorithm and an implementation of the algorithm in a programming language.

7

Properties of an Algorithm

Input: an algorithm has input values from a specified set.

Output: for each set of input values an algorithm produces output values from a specified set. The output values are the solution of the problem.

Definiteness: The steps of an algorithm must be defined precisely.

Correctness: An algorithm should produce the correct output values for each set of input values.

Finiteness: an algorithm should produce the desired output after a finite (but perhaps large) number of steps for any input in the set.

Effectiveness: It must be possible to perform each step of an algorithm exactly and in a finite amount of time.

Generality: the procedure should be applicable for all problems of the desired form, not just for a particular set of input values.

Distinction between “problem” and “problem instance”: quite confusing for folks outside CS. The algorithm should work for all instances!

Our Pseudocode Language

procedure name(argument: type)

variable := expression

informal statement

begin statements end

{comment}

if condition then statement [else statement]

for variable := initial value to final value statement

while condition statement

procname(arguments)

Not defined in book:

return expression


Declaration

procedure procname(arg: type)

Declares that the following text defines a procedure named procname that takes inputs (arguments) named arg, which are data objects of the type type. Example:

procedure maximum(L: list of integers)
[statements defining maximum…]

10

Algorithm: Finding the Maximum Element in a Finite Sequence

procedure max(a1,a2,…, an: integers)

max := a1

for i := 2 to n

if max < ai then max := ai

{max is the largest element}
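The same procedure as a minimal, runnable Python sketch (the function name and the test list are illustrative, not from the slides):

def find_max(a):
    """Return the largest element of a non-empty list, mirroring the pseudocode."""
    largest = a[0]                # max := a1
    for i in range(1, len(a)):    # for i := 2 to n
        if largest < a[i]:        # if max < ai then max := ai
            largest = a[i]
    return largest                # {max is the largest element}

print(find_max([3, 9, 2, 7]))     # prints 9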

Computer Programming

Algorithm →(programming, by the programmer, a human)→ Program in high-level language (C, Java, etc.) →(compilation, by the compiler, software)→ Equivalent program in assembly language → Equivalent program in machine code → computer execution

12

Searching Algorithms

Searching problems:

the problem of locating an element in an ordered list.

Example: searching for a word in a dictionary.

13

Algorithm: The Linear Search Algorithm

procedure linear search( x: integer, a1,a2,…, an: distinct integers)

i := 1

while (i ≤ n and x ≠ ai)

i := i +1

if i ≤ n then location := i

else location := 0

{location is the subscript of the term that equals x, or is 0 if x is not found}
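A runnable Python sketch of the same linear search, keeping the pseudocode's 1-based location convention (0 means "not found"); the names are illustrative:

def linear_search(x, a):
    """Return the 1-based position of x in list a, or 0 if x is not present."""
    i = 1
    while i <= len(a) and x != a[i - 1]:   # while (i <= n and x != ai)
        i += 1
    return i if i <= len(a) else 0         # location := i if found, else 0

print(linear_search(7, [4, 7, 1, 9]))      # prints 2
print(linear_search(5, [4, 7, 1, 9]))      # prints 0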

14

Binary search

To search for 19 in the list

1 2 3 5 6 7 8 10 12 13 15 16 18 19 20 22

First split: 1 2 3 5 6 7 8 10 | 12 13 15 16 18 19 20 22
Second split: 12 13 15 16 | 18 19 20 22
Third split: 18 19 | 20 22

19 is located as the 14th item.

Adapted from Michael P. Frank

Search alg. #2: Binary Search

Basic idea: On each step, look at the middle element of the remaining list to eliminate half of it, and quickly zero in on the desired element.


16

Algorithm: The Binary Search Algorithm

procedure binary search(x: integer, a1, a2, …, an: increasing integers)

i := 1 {i is left endpoint of search interval}
j := n {j is right endpoint of search interval}
while i < j
begin
  m := ⌊(i + j)/2⌋
  if x > am then i := m + 1
  else j := m
end
if x = ai then location := i
else location := 0
{location is the subscript of the term that equals x, or is 0 if x is not found}
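The same procedure as a runnable Python sketch (1-based location, 0 if x is absent; names illustrative). On the list from the worked example it finds 19 at position 14:

def binary_search(x, a):
    """Binary search an increasing list a for x; return the 1-based position or 0."""
    i, j = 1, len(a)                  # left and right endpoints of the search interval
    while i < j:
        m = (i + j) // 2              # m := floor((i + j)/2)
        if x > a[m - 1]:
            i = m + 1
        else:
            j = m
    return i if a and x == a[i - 1] else 0

data = [1, 2, 3, 5, 6, 7, 8, 10, 12, 13, 15, 16, 18, 19, 20, 22]
print(binary_search(19, data))        # prints 14, matching the worked example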

17

Just because we know how to solve a given problem (we have an algorithm), that does not mean that the problem can be solved in practice.

The procedure (algorithm) may be so inefficient that it would not be possible to solve the problem within a useful period of time.

So we would like to have an idea of the “complexity” of our algorithm.

18

Complexity of Algorithms

19

Complexity of Algorithms

The complexity of an algorithm is the number of steps that it takes to transform the input data into the desired output.

Each simple operation (+,-,*,/,=,if,etc) and each memory access corresponds to a step.(*)

The complexity of an algorithm is a function of the size of the input (or size of the instance). We’ll denote the complexity of algorithm A by CA(n), where n is the size of the input.

(*) This model is a simplification but still valid to give us a good idea of the complexity of algorithms.

What does this mean for the complexity of, say, chess?

Complexity: CA(n) = O(1)

Two issues: (1) fixed input size; (2) each memory access counted as just 1 step. So the model/definition is not always “useful”!

20

Example: Insertion Sort

From: Introduction to Algorithms, Cormen et al.

21

[Figure: over I_n, the set of all possible instances of size n, the running time of the algorithm ranges between a best cost and a worst cost.]

Different notions of complexity

Worst-case complexity of an algorithm A: the maximum number of computational steps required for the execution of algorithm A, over all the inputs of the same size, s. It provides an upper bound for an algorithm: the worst that can happen given the most difficult instance, the pessimistic view.

Best-case complexity of an algorithm A: the minimum number of computational steps required for the execution of algorithm A, over all the inputs of the same size, s. The most optimistic view of an algorithm: it tells us the least work a particular algorithm could possibly get away with for some one input of a fixed size; we have the chance to pick the easiest input of a given size.

Linear search: Worst cost? Best cost? Average cost?

22

Average-case complexity of an algorithm A: the average amount of resources the algorithm consumes, assuming some plausible frequency of occurrence of each input.

Figuring out the average cost is much more difficult than figuring out either the worst cost or the best cost: e.g., we have to assume a given probability distribution for the types of inputs we get.

Practical difficulty: what is the distribution of “real-world” problem instances?

23

Different notions of complexity

In general this, the worst-case complexity, is the notion that we use to characterize the complexity of algorithms.

We perform upper-bound analysis on algorithms.

24

Algorithm “Good Morning”:
For I = 1 to n
  For J = I+1 to n
    ShakeHands(student(I), student(J))

Running time of “Good Morning”

Time = (# of handshakes) × (time per handshake) + some overhead

We want an expression for T(n), running time of “Good Morning” on input of size n.

25

Growth Rates

Algorithm “Good Morning”:
For I = 1 to n
  For J = I+1 to n
    ShakeHands(student(I), student(J))

How many handshakes?

[Figure: an n × n grid of (I, J) pairs; the handshakes are exactly the pairs above the diagonal, i.e. those with J > I.]

Number of handshakes = (n² - n)/2

26

Growth Rates

Algorithm “Good Morning”:
For I = 1 to n
  For J = I+1 to n
    ShakeHands(student(I), student(J))

T(n) = s·(n² - n)/2 + t

where s is the time for one handshake and t is the time for getting organized.
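A quick, illustrative Python check that the double loop really performs (n² - n)/2 handshakes:

def handshake_count(n):
    """Count the ShakeHands calls made by the 'Good Morning' double loop."""
    count = 0
    for i in range(1, n + 1):           # For I = 1 to n
        for j in range(i + 1, n + 1):   # For J = I+1 to n
            count += 1                  # one handshake per (I, J) pair
    return count

for n in (1, 5, 10, 100):
    assert handshake_count(n) == (n * n - n) // 2
print("count matches (n^2 - n)/2")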

But do we need to characterize the complexity of algorithms in such detail? What is the most important aspect that we care about?

Scaling with n!

27

Comparing algorithms wrt complexity

Let us consider two algorithms A1 and A2,

with complexities:

CA1(n) = 0.5 n²

CA2(n) = 5 n

Which one has larger complexity?

28

CA2(n) = 5n ≥ CA1(n) = 0.5n² for n ≤ 10

CA1(n) = 0.5n² > CA2(n) = 5n for n > 10

When we look at the complexity of algorithms we think asymptotically, i.e., we compare two algorithms as the problem sizes tend to infinity!

Called: asymptotic complexity (concern: growth rate)
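A small illustrative sketch tabulating the two cost functions around the crossover point n = 10:

def c_a1(n):
    return 0.5 * n * n    # C_A1(n) = 0.5 n^2

def c_a2(n):
    return 5 * n          # C_A2(n) = 5 n

for n in (5, 10, 20, 100, 1000):
    print(n, c_a1(n), c_a2(n))
# For n <= 10 the quadratic algorithm is at least as cheap; for n > 10 the linear
# one wins, and the gap keeps widening as n grows -- the asymptotic view.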

Game

30

I’m thinking of an integer between [1,64].

You guess the number.

To your guesses my answer is, High, Low, Yes.

How many guesses do you need in the worst case?

What strategy are we assuming?
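One way to explore the question: simulate the midpoint (binary search) strategy, the natural "halve the interval" strategy suggested by log2(64), and count its worst case over all 64 possible secrets. An illustrative sketch:

def guesses_needed(secret, lo=1, hi=64):
    """Guess the midpoint and halve the interval on 'High'/'Low'; return the guess count."""
    guesses = 0
    while True:
        guesses += 1
        mid = (lo + hi) // 2
        if mid == secret:        # answer: "Yes"
            return guesses
        elif mid < secret:       # answer: "Low" -- the secret is above the guess
            lo = mid + 1
        else:                    # answer: "High" -- the secret is below the guess
            hi = mid - 1

# worst case over all 64 possible secrets, for this particular strategy
print(max(guesses_needed(s) for s in range(1, 65)))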

31

Growth Rates

In general we only worry about growth rates because:

Our main objective is to analyze the cost performance of algorithms asymptotically. (This is reasonable in part because computers get faster and faster every year.)

Another obstacle to having the exact cost of algorithms is that sometimes the algorithms are quite complicated to analyze.

When analyzing an algorithm we are not that interested in the exact time the algorithm takes to run – often we only want to compare two algorithms for the same problem – the thing that makes one algorithm more desirable than another is its growth rate relative to the other algorithm’s growth rate.

32

Growth Rates

Algorithm analysis is concerned with:

• Type of function that describes run time (we ignore constant factors since different machines have different speed/cycle)

• Large values of n

33

Growth of functions

Important definition:

For functions f and g from the set of integers to the set of real numbers we say

f(x) is O(g(x)) to denote

∃ C, k so that ∀ x > k, |f(x)| ≤ C·|g(x)|

We say “f(x) is big-O of g(x)”.

Recipe for proving f(x) is O(g(x)): find constants C and k (called witnesses to the fact that f(x) is O(g(x))) so that the inequality holds.

This will be applied to running times, so you'll usually consider T(n) (which is ≥ 0).

Note: when C and k are found, there are infinitely many pairs of witnesses. Sometimes it is also said f(x) = O(g(x)), even though this is not a real equality.

[Figure: beyond x = k, the graph of C·g(x) stays above |f(x)|, illustrating that f(x) is O(g(x)).]

Example: x² + 2x + 1 is O(x²). For x > 1, 0 ≤ x² + 2x + 1 ≤ x² + 2x² + x² = 4x², so the witnesses C = 4, k = 1 work (and so do C = 3, k = 2).
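An illustrative spot check in Python of the witnesses C = 4, k = 1 (the proof for all x > 1 is the algebra above; the code only samples a finite range):

def f(x):
    return x * x + 2 * x + 1

def g(x):
    return x * x

C, k = 4, 1
assert all(abs(f(x)) <= C * abs(g(x)) for x in range(k + 1, 1000))
print("|f(x)| <= 4|g(x)| holds for every tested x > 1")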

Note:

When f(x) is O(g(x)), and h(x) is a function that has larger absolute values than g(x) does for sufficiently large values of x, it follows that f(x) is O(h(x)). In other words, the function g(x) in the relationship can be replaced by a function with larger absolute values. This can be seen as follows:

|f(x)| ≤ C|g(x)| if x > k

and if |h(x)| > |g(x)| for all x > k, then

|f(x)| ≤ C|h(x)| if x > k

Therefore f(x) is O(h(x)).

37

Growth of functions (examples)

f(x) = O(g(x)) iff ∃ C, k so that ∀ x > k, |f(x)| ≤ C·|g(x)|

3n = O(15n) since for n > 0, 3n ≤ 1·15n (there's the k, there's the C: k = 0, C = 1).

38

The complexity of A2 is of lower order than that of A1: while A1 grows quadratically, O(n²), A2 only grows linearly, O(n).

39

[Figure: x² vs. (x² + x), for x ≤ 20.]

40

[Figure: x² vs. (x² + x): (x² + x) is O(x²) (“oh of x squared”).]

41

Growth of functions (examples)

f(x) is O(g(x)) iff ∃ C, k so that ∀ x > k, |f(x)| ≤ C·|g(x)|

Is x² O(x³)?

a) Yes, and I can prove it.
b) Yes, but I can't prove it.
c) No, x = 1/2 implies x² > x³.
d) No, but I can't prove it.

Yes: since for x > 1, x² ≤ 1·x³, the witnesses C = 1, k = 1 work.

42

Growth of functions (examples)

f(x) = O(g(x)) iff ∃ C, k so that ∀ x > k, |f(x)| ≤ C·|g(x)|

1000x² is O(x²) since for x > 0, 1000x² ≤ 1000·x²; witnesses C = 1000, k = 0.

43

Growth of functions (examples)

f(x) = O(g(x)) iff ∃ C, k so that ∀ x > k, |f(x)| ≤ C·|g(x)|

Prove that x² + 100x + 100 is O((1/100)x²).

For x > 1, 100x ≤ 100x² and 100 ≤ 100x², so

x² + 100x + 100 ≤ 201x² = 20100·(1/100)x²

Witnesses: k = 1, C = 20100.

Growth of functions (examples)

Prove that 5x + 100 is O(x/2). Similar problem, different technique.

Try C = 10: we would need x > k with 5x + 100 ≤ 10·(x/2) = 5x. Nothing works for k.

Try C = 11: we need x > k with 5x + 100 ≤ 11·(x/2) = 5x + x/2, i.e. 100 ≤ x/2, i.e. x ≥ 200.

Witnesses: k = 200, C = 11.

45

Theorem 1

Let f(x) = a_n x^n + a_(n-1) x^(n-1) + … + a_1 x + a_0, where a_0, a_1, …, a_n are real numbers.

Then f(x) is O(x^n).

Proof: Use the triangle inequality, which states |x + y| ≤ |x| + |y|.

For x > 1:

|f(x)| = |a_n x^n + a_(n-1) x^(n-1) + … + a_1 x + a_0|
       ≤ |a_n| x^n + |a_(n-1)| x^(n-1) + … + |a_1| x + |a_0|
       = x^n (|a_n| + |a_(n-1)|/x + … + |a_1|/x^(n-1) + |a_0|/x^n)
       ≤ x^n (|a_n| + |a_(n-1)| + … + |a_1| + |a_0|)

Therefore |f(x)| ≤ C x^n when x > k, with C = |a_n| + |a_(n-1)| + … + |a_1| + |a_0| and k = 1.

Therefore f(x) is O(x^n).
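An illustrative spot check of the theorem's witnesses for one concrete polynomial (the polynomial is made up for the example; C = |a_n| + … + |a_0| and k = 1, as the proof prescribes):

coeffs = [3, -5, 2, -7]              # a_n, ..., a_0 for f(x) = 3x^3 - 5x^2 + 2x - 7
n = len(coeffs) - 1
C = sum(abs(a) for a in coeffs)      # C = |a_n| + ... + |a_0| = 17, k = 1

def f(x):
    result = 0
    for a in coeffs:                 # Horner's rule
        result = result * x + a
    return result

assert all(abs(f(x)) <= C * x ** n for x in range(2, 1000))   # all tested x > k = 1
print("|f(x)| <= C x^n holds for every tested x > 1")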

47

Estimating Functions

Example 1: Estimate the sum of the first n positive integers.

1 + 2 + … + n ≤ n + n + … + n = n·n = n²

So 1 + 2 + … + n is O(n²), with witnesses C = 1, k = 1.

48

Estimating Functions

Example 2: Estimate f(n) = n! and log n!.

n! = 1·2·3 ··· n ≤ n·n·n ··· n = n^n

So n! is O(n^n), with witnesses C = 1, k = 1.

Taking the log:

log n! ≤ log(n^n) = n log n

So log n! is O(n log n), with witnesses C = 1, k = 1.
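An illustrative numeric check of both estimates for small n (the base of the logarithm does not matter; natural log is used here):

import math

for n in range(1, 25):
    fact = math.factorial(n)
    assert fact <= n ** n                          # n! <= n^n
    assert math.log(fact) <= n * math.log(n)       # log n! <= n log n
print("n! <= n^n and log n! <= n log n hold for the tested n")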

49

Growth of functions

Guidelines:

In general, only the largest term in a sum matters.

a_0 x^n + a_1 x^(n-1) + … + a_(n-1) x + a_n = O(x^n)

n dominates lg n.

n^5 lg n = O(n^6)

List of common functions in increasing O() order:

1, n, n lg n, n², n³, …, 2^n, n!

Constant time

Linear time

Quadratic time

Exponential time

[Figure: growth of these common functions; note the log scale on the y axis.]

51

Combination of Growth of functions

Theorem:

If f1(x) = O(g1(x)) and f2(x)=O(g2(x)), then f1(x) + f2(x) is O(max{|g1(x)|,|g2(x)|})

(Witnesses: c = c1 + c2, k = max{k1, k2}.)

Proof: Let h(x) = max{|g1(x)|, |g2(x)|}.

We need constants c and k so that ∀ x > k, |f1(x) + f2(x)| ≤ c·|h(x)|.

We know |f1(x)| ≤ c1·|g1(x)| and |f2(x)| ≤ c2·|g2(x)|, and using the triangle inequality, |f1(x) + f2(x)| ≤ |f1(x)| + |f2(x)|.

And |f1(x)| + |f2(x)| ≤ c1·|g1(x)| + c2·|g2(x)| ≤ c1·|h(x)| + c2·|h(x)| = (c1 + c2)·|h(x)|.

52

Growth of functions – two more theorems

Theorem:

If f1(x) = O(g1(x)) and f2(x)=O(g2(x)), then f1(x)·f2(x) = O(g1(x)·g2(x))

Theorem:

If f1(x) = O(g(x)) and f2(x) = O(g(x)), then (f1 + f2)(x) = O(g(x))

53

Growth of functions - two definitions

If f(x) = O(g(x)) then we write g(x) = Ω(f(x)). “g is big-omega of f”: a lower bound. What does this mean?

If ∃ c, k so that ∀ x > k, f(x) ≤ c·g(x), then ∃ c', k so that ∀ x > k, g(x) ≥ c'·f(x), with c' = 1/c.

If f(x) = O(g(x)) and f(x) = Ω(g(x)), then f(x) = Θ(g(x)).

“f is big-theta of g”

When we write f = O(g), it is like f ≤ g. When we write f = Ω(g), it is like f ≥ g. When we write f = Θ(g), it is like f = g.

54

Growth of functions - other estimates

For functions f and g, f = o(g) if ∀ c > 0 ∃ k so that ∀ n > k, f(n) ≤ c·g(n).

“f is little-o of g”

What does this mean? No matter how tiny c is, c·g eventually dominates f.

Example: Show that n² = o(n² log n).

Proof foreshadowing: find a k (possibly in terms of c) that makes the inequality hold.

Big difference from big-O: the inequality must hold for all c!

55

Growth of functions - other estimates

For functions f and g, f = o(g) if ∀ c > 0 ∃ k so that ∀ n > k, f(n) ≤ c·g(n).

“f is little-o of g”

Example: Show that n² = o(n² log n).

Proof foreshadowing: find a k (possibly in terms of c) that makes the inequality hold.

Choose c arbitrarily. How large does n have to be so that n² ≤ c·n² log n?

1 ≤ c log n
1/c ≤ log n
2^(1/c) ≤ n

So the inequality holds when n > 2^(1/c); take k = 2^(1/c).

So it can take a while: consider c = 1/1000000.

The big difference between little-o and big-O is that the former has to hold for all c.
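A small illustrative check that, for a few chosen values of c, the witness k = 2^(1/c) from the derivation works (c is kept moderate so the numbers stay within floating-point range):

import math

def little_o_witness(c):
    """k beyond which n^2 <= c * n^2 * log2(n)."""
    return 2 ** (1 / c)

for c in (1.0, 0.1, 0.05):
    k = little_o_witness(c)
    n = int(k) + 1                                 # any n > k
    assert n * n <= c * n * n * math.log2(n)
print("k = 2**(1/c) works for each tested c")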

56

57

Growth of functions - other estimates

For functions f and g, f = o(g) if ∀ c > 0 ∃ k so that ∀ n > k, f(n) ≤ c·g(n).

“f is little-o of g”

Example: Show that 10n² = o(n³).

Proof foreshadowing: find a k (possibly in terms of c) that makes the inequality hold.

Choose c arbitrarily. How large does n have to be so that 10n² ≤ c·n³?

10/c ≤ n

So the inequality holds when n > 10/c; take k = 10/c.

58

Growth of functions - other estimates

For functions f and g, if f = o(g) then g = ω(f). “g is little-omega of f”

60

How do computer scientists differentiate between good (efficient) and bad (not efficient) algorithms?

61

How do computer scientists differentiate between good (efficient) and bad (not efficient) algorithms?

The yardstick is that any algorithm that runs in no more than polynomial time is an efficient algorithm; everything else is not.

62

Ordered functions by their growth rates

Order | Function        | Name
1     | c               | constant
2     | lg n            | logarithmic
3     | lg^c n          | polylogarithmic
4     | n^r, 0 < r < 1  | sublinear
5     | n               | linear
6     | n^r, 1 < r < 2  | subquadratic
7     | n²              | quadratic
8     | n³              | cubic
9     | n^c, c ≥ 1      | polynomial
10    | r^n, r > 1      | exponential

Orders 1 to 9 (up through polynomial): efficient algorithms.
Order 10 (exponential): not efficient algorithms.

63

64

[Figure: Polynomial vs. exponential growth (Harel 2000). Polynomial-time examples: N², LP's interior point, Min. Cost Flow algorithms, the Transportation algorithm, the Assignment algorithm, Dijkstra's algorithm. Exponential-time example: a binary B&B algorithm.]