
242-213 Discrete Maths, Semester 2, 2013-2014
10. Running Time of Programs
Objective: to describe the Big-Oh notation for estimating the running time of programs



Slide 1: Title slide.

Slide 2: Overview
1. Running Time
2. Big-Oh and Approximate Running Time
3. Big-Oh for Programs
4. Analyzing Function Calls
5. Analyzing Recursive Functions
6. Further Information

Slide 3: 1. Running Time
What is the running time of this program?

void main()
{  int i, n;
   scanf("%d", &n);
   for (i = 0; i [...]

[The rest of this fragment, and slides 4-11, were lost in transcription.]

Slide 12: 1.4. Common Growth Formulae & Names
Formula (n = input size)   Name
n                          linear
n^2                        quadratic
n^3                        cubic
n^m                        polynomial, e.g. n^10
m^n (m >= 2)               exponential, e.g. 5^n
n!                         factorial
1                          constant
log n                      logarithmic
n log n
log log n

Slide 13: 1.5. Execution Times
Assume 1 instruction takes 1 microsec (10^-6 secs) to execute. How long will T(n) instructions take?

growth formula T()   n = 3   n = 9    n = 50    n = 100      n = 1000      n = 10^6
log n                2       3        6         7            10            20
n                    3       9        50        100          1 ms          1 sec
n^2                  9       81       2.5 ms    10 ms        1 sec         12 days
n^3                  27      729      125 ms    1 sec        16.7 min      31,710 yr
2^n                  8       512      36 yr     4*10^16 yr   3*10^287 yr   3*10^301016 yr
(entries without units are in microseconds)

If n is 50, you will wait 36 years for an answer (with the 2^n algorithm)!

Slide 14: Notes
Logarithmic running times are best.
Polynomial running times are acceptable, if the power isn't too big: e.g. n^2 is ok, n^100 is terrible.
Exponential times mean slow code; problems of some sizes may take longer to finish than the lifetime of the universe!

Slide 15: 1.6. Why use T(n)?
T() can guide our choice of which algorithm to implement, or which program to use, e.g. selection sort or merge sort?
T() helps us look for better algorithms in our own code, without expensive implementation, testing, and measurement.

Slide 16: 2. Big-Oh and Approximate Running Time
Big-Oh mathematical notation simplifies the process of estimating the running time of programs. It uses T(n), but ignores constant factors, which depend on compiler/machine behaviour.

Slide 17:
The Big-Oh value specifies running time independent of:
- machine architecture, e.g.
we don't consider the running speed of individual machine operations
- machine load (usage), e.g. time delays due to other users
- compiler design effects, e.g. gcc versus Borland C

Slide 18: Example
In the code fragment example on slide 9, we assumed that assignment and testing take 1 time unit. This means: T(n) = 4n - 1.
The Big-Oh value, O(), uses the T(n) value but ignores constants (which will actually vary from machine to machine). This means: T(n) is O(n); we say "T(n) is order n".

Slide 19: More Examples
T(n) value          Big-Oh value: O()
10n^2 + 50n + 100   O(n^2)
(n+1)^2             O(n^2)
n^10                O(2^n)   (hard to understand)
5n^3 + 1            O(n^3)
These simplifications have a mathematical reason, which is explained in section 2.2.

Slide 20: 2.1. Is Big-Oh Useful?
O() ignores constant factors, which means it is a more reliable measure across platforms/compilers.
It can be compared with Big-Oh values for other algorithms: i.e. linear is better than polynomial and exponential, but worse than logarithmic.

Slide 21: 2.2. Definition of Big-Oh
The connection between T() and O() is: when T(n) is O(f(n)), it means that f(n) is the most important thing in T() when n is large.
More formally: T(n) is O(f(n)) if, for some integer n0 and constant c > 0, T(n) <= c*f(n) for all integers n >= n0.

[Slides 22-47 were lost in transcription.]

Slide 48: 3.4.3. Time for a Binary Conversion

void main()
{  int i;
(1)   scanf("%d", &i);
(2)   while (i > 0) {
(3)      putchar('0' + i%2);
(4)      i = i/2;
      }
(5)   putchar('\n');
}

Slide 49:
Lines 1, 2, 3, 4, 5: each O(1).
Block of 3-4 is O(1) + O(1) = O(1).
While of 2-4 loops at most (log2 i)+1 times, so its total running time = O(1 * ((log2 i)+1)) = O(log2 i).
Block of 1-5 = O(1) + O(log2 i) + O(1) = O(log2 i). (Why (log2 i)+1 times? See the next slide.)

Slide 50: Why (log2 i)+1?
Assume i = 2^k.
At the start of the 1st iteration, i = 2^k.
At the start of the 2nd iteration, i = 2^(k-1).
At the start of the 3rd iteration, i = 2^(k-2).
...
At the start of the kth iteration, i = 2^(k-(k-1)) = 2^1 = 2.
At the start of the (k+1)th iteration, i = 2^(k-k) = 2^0 = 1; the while will terminate after this iteration.
Since 2^k = i, k = log2 i. So k+1, the no.
of iterations, = (log2 i)+1.

Slide 51: Using a Structure Tree
[Diagram: structure tree for the binary-conversion code. The block 1-5 contains leaf 1, the while 2-4, and leaf 5; the while 2-4 contains the block 3-4, with leaves 3 and 4. Each leaf is O(1); the while node is O(log2 i).]

Slide 52: 3.4.4. Time for a Selection Sort

void selectionSort(int A[], int n)
{  int i, j, small, temp;
(1)   for (i = 0; i < n-1; i++) {
(2)      small = i;
(3)      for (j = i+1; j < n; j++)
(4)         if (A[j] < A[small])
(5)            small = j;
(6)      temp = A[small];
(7)      A[small] = A[i];
(8)      A[i] = temp;
      }
}

Slide 53: Selection Sort Structure Tree
[Diagram: the for 1-8 contains the block 2-8; the block 2-8 contains leaf 2, the for 3-5, and leaves 6, 7, 8; the for 3-5 contains the if 4-5, with leaf 5.]

Slide 54:
Lines 2, 5, 6, 7, 8: each is O(1).
If of 4-5 is O(max(1, 0) + 1) = O(1) (1 for the if part, 0 for the absent else part).
For of 3-5 is O((n-(i+1)) * 1) = O(n-i-1) = O(n), simplified.
Block of 2-8 = O(1) + O(n) + O(1) + O(1) + O(1) = O(n).
For of 1-8 is O((n-1) * n) = O(n^2 - n) = O(n^2), simplified.

Slide 55: 4. Analyzing Function Calls
In this section, we assume that the functions are not recursive; we add recursion in section 5.
Size measures for all the functions must be similar, so they can be combined to give the program's Big-Oh value.

Slide 56: Example Program

#include <stdio.h>

int bar(int x, int n);
int foo(int x, int n);

void main()
{  int a, n;
(1)   scanf("%d", &n);
(2)   a = foo(0, n);
(3)   printf("%d\n", bar(a, n));
}

Slide 57:

int bar(int x, int n)
{  int i;
(4)   for (i = 1; i [...]

[The rest of this fragment, and the slides up to 63, were lost in transcription.]

Slide 64: 5.1. Factorial Running Time
Step 1.
Basis: T(1) = O(1).
Induction: T(n) = O(1) + T(n-1), for n > 1.
Step 2. Simplify the relation by replacing the O() notation with constants.
Basis: T(1) = a.
Induction: T(n) = b + T(n-1), for n > 1.

Slide 65:
The simplest way to solve T(n) is to calculate it for some values of n, and then guess the general expression.
T(1) = a
T(2) = b + T(1) = b + a
T(3) = b + T(2) = 2b + a
T(4) = b + T(3) = 3b + a
Obviously, the general form is: T(n) = ((n-1)*b) + a = bn + (a-b).

Slide 66:
Step 3. Translate back: T(n) = bn + (a-b).
Replace constants by Big-Oh notation: T(n) = O(n) + O(1) = O(n).
The running time for recursive factorial is O(n). That is fast.

Slide 67: 5.2.
Recursive Selection Sort

void rSSort(int A[], int n)
{  int imax, i;
   if (n == 1)
      return;
   else {
      imax = 0;   /* A[0] is biggest */
      for (i = 1; i < n; i++)
         if (A[i] > A[imax])
            imax = i;
      swap(A, n-1, imax);
      rSSort(A, n-1);
   }
}

Slide 68: Running Time
n is the size of the array. Assume swap() is O(1), so ignore it.
Step 1.
Basis: T(1) = O(1).
Induction: T(n) = O(n-1) + T(n-1), for n > 1 (the O(n-1) is the loop; the T(n-1) is the call to rSSort()).
Step 2.
Basis: T(1) = a.
Induction: T(n) = b(n-1) + T(n-1), for n > 1 (a multiple of n-1).

Slide 69:
Solve the relation:
T(1) = a
T(2) = b + T(1) = b + a
T(3) = 2b + T(2) = 2b + b + a
T(4) = 3b + T(3) = 3b + 2b + b + a
General form: T(n) = (n-1)b + ... + b + a = a + b(n-1)n/2.

Slide 70:
Step 3. Translate back: T(n) = a + b(n-1)n/2.
Replace constants by Big-Oh notation: T(n) = O(1) + O(n^2) + O(n) = O(n^2).
The running time for recursive selection sort is O(n^2). That is slow for large arrays.

Slide 71: 6. Further Information
Discrete Mathematics and its Applications, Kenneth H. Rosen, McGraw Hill, 2007, 7th edition: chapter 3, sections 3.2-3.3.