Analysis of Algorithms
Reasons to analyze algorithms
Performance prediction
Compare the performance of different algorithms for the same task and provide guarantees on how well they perform.
Understanding the theoretical basis for how algorithms perform helps us avoid performance bugs: clients get poor results because the programmer did not understand the performance characteristics of the algorithm.
CS2336: Computer Science II
Running time
How many times some operation has to be performed in order to get the computation done.
Suppose two algorithms perform the same task. Which one is better?
First approach: implement them in Java and run the programs to measure execution time. This approach has two problems:
First, the execution time of a particular program depends on the system load.
Second, the execution time depends on the specific input.
Consider linear search and binary search, for example. If the element to be searched for happens to be first in the list, linear search will find it more quickly than binary search.
Running time: Growth Rate
Second approach: growth rate.
Analyze algorithms independently of computers and specific input.
This approximates the effect of a change in the size of the input: we can see how fast the execution time increases as the input size increases, and we can compare two algorithms by examining their growth rates.
Linear Search
The linear search approach compares the key element, key, sequentially with each element in the array list. The method continues to do so until the key matches an element in the list or the list is exhausted without a match being found. If a match is made, the linear search returns the index of the element in the array that matches the key. If no match is found, the search returns -1.
Linear Search Animation
[Animation: searching for key 3 in the list 6 4 1 9 7 3 2 8; the key is compared with each element in turn until it matches at index 5.]
Big O Notation
“Linear Search Algorithm”
for (int i = 0; i < n; i++) {
  if (key == a[i]) {
    return i; // Found key, return index.
  }
}
return -1; // Key not found after n comparisons.
If the key is not in the array, the search requires n comparisons for an array of size n.
If the key is in the array, it requires n/2 comparisons on average.
The algorithm’s execution time is proportional to the size of the array.
Big O Notation
If you double the size of the array, you will expect the number of comparisons to double.
The algorithm grows at a linear rate.
The growth rate has an order of magnitude of n. Big O notation is the abbreviation for “order of magnitude.”
The complexity of the linear search algorithm is O(n), pronounced as “order of n.”
Best, Worst, and Average Cases
For the same input size, an algorithm’s execution time may vary, depending on the input.
An input that results in the shortest execution time is called the best-case input.
An input that results in the longest execution time is called the worst-case input.
Worst-case analysis is very useful: you can show that the algorithm will never be slower than the worst case. It is also easier to obtain than average-case analysis and is thus common.
Ignoring Multiplicative Constants
The linear search algorithm requires n comparisons in the worst case and n/2 comparisons in the average case.
Both cases require O(n) time, and the multiplicative constant (1/2) can be omitted: algorithm analysis focuses on growth rates, and multiplicative constants have no impact on them.
The growth rate of n/2 or 100n is the same as that of n, i.e., O(n) = O(n/2) = O(100n).
Ignoring Non-Dominating Terms
“Find max number”
int max = a[0];
for (int i = 1; i < n; i++) {
  if (a[i] > max) max = a[i];
}
return max;
It takes n - 1 comparisons to find the maximum number in a list of n elements.
The complexity of this algorithm is O(n): the Big O notation allows you to ignore the non-dominating term (the -1).
Useful Mathematical Summations

$$1 + 2 + 3 + \cdots + (n-1) + n = \frac{n(n+1)}{2}$$

$$a^0 + a^1 + a^2 + \cdots + a^{n-1} + a^n = \frac{a^{n+1} - 1}{a - 1}$$

$$2^0 + 2^1 + 2^2 + \cdots + 2^{n-1} + 2^n = 2^{n+1} - 1$$
Repetition: Simple Loops
T(n) = (a constant c) * n = cn = O(n)

for (i = 1; i <= n; i++) {   // executed n times
  k = k + 5;                 // constant time
}

Ignore multiplicative constants (e.g., “c”).
Repetition: Nested Loops
T(n) = (a constant c) * n * n = cn^2 = O(n^2)

for (i = 1; i <= n; i++) {     // executed n times
  for (j = 1; j <= n; j++) {   // inner loop executed n times
    k = k + i + j;             // constant time
  }
}

Ignore multiplicative constants (e.g., “c”).
Repetition: Nested Loops
T(n) = c + 2c + 3c + … + nc = cn(n+1)/2 = (c/2)n^2 + (c/2)n = O(n^2)

for (i = 1; i <= n; i++) {     // executed n times
  for (j = 1; j <= i; j++) {   // inner loop executed i times
    k = k + i + j;             // constant time
  }
}

Ignore multiplicative constants and non-dominating terms.
Repetition: Nested Loops
T(n) = 20 * c * n = O(n)

for (i = 1; i <= n; i++) {      // executed n times
  for (j = 1; j <= 20; j++) {   // inner loop executed 20 times
    k = k + i + j;              // constant time
  }
}

Ignore multiplicative constants (e.g., 20 * c).
Sequence
T(n) = c * 10 + 20 * c * n = O(n)

for (j = 1; j <= 10; j++) {   // executed 10 times
  k = k + 4;
}

for (i = 1; i <= n; i++) {      // executed n times
  for (j = 1; j <= 20; j++) {   // inner loop executed 20 times
    k = k + i + j;
  }
}
Selection
T(n) = test time + worst-case (if, else) = O(n) + O(n) = O(n)

Let n be list.size().

if (list.contains(e)) {      // test executed in O(n)
  System.out.println(e);
} else {
  for (Object t : list) {    // executed n times: O(n)
    System.out.println(t);
  }
}
Constant Time
The Big O notation estimates the execution time of an algorithm in relation to the input size. If the time is not related to the input size, the algorithm is said to take constant time with the notation O(1). For example, a method that retrieves an element at a given index in an array takes constant time, because it does not grow as the size of the array increases.
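As an illustration (a minimal sketch of my own, not from the slides), retrieving an element by index costs the same no matter how long the array is:

```java
public class ConstantTime {
    // Accessing an element by index is O(1): a single indexed read,
    // whose cost does not depend on the array's length.
    static int elementAt(int[] a, int index) {
        return a[index];
    }

    public static void main(String[] args) {
        int[] small = {2, 4, 7};
        int[] large = new int[1_000_000];
        large[999_999] = 42;
        // Both calls perform exactly one array access.
        System.out.println(elementAt(small, 2));        // 7
        System.out.println(elementAt(large, 999_999));  // 42
    }
}
```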
Computation of a^n
result = 1;
for (i = 1; i <= n; i++) {
  result *= a;
}

This takes O(n) time:

i     result
1     a
2     a^2
3     a^3
…     …
k     a^k
…     …
n     a^n
Computation of a^n
result = a;
for (i = 1; i <= k; i++) {
  result = result * result;
}

i     result
1     a^2
2     a^4
3     a^8
…     …
k     a^(2^k) = a^n

n = 2^k => k = lg n, so the complexity is O(lg n).
If you square the input size, you only double the time for the algorithm.
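The repeated-squaring loop above handles n that are exact powers of 2; a hedged sketch of the general square-and-multiply method for any non-negative exponent (class and method names are my own, not from the slides):

```java
public class FastPower {
    // Computes base^n in O(log n) multiplications by repeated squaring.
    // Assumes n >= 0; long overflow is ignored for simplicity.
    static long power(long base, int n) {
        long result = 1;
        long square = base;
        while (n > 0) {
            if ((n & 1) == 1) {   // current bit of n is set
                result *= square;
            }
            square *= square;     // square once per bit of n
            n >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(power(2, 10)); // 1024
        System.out.println(power(3, 5));  // 243
    }
}
```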
Binary Search
For binary search to work, the elements in the array must already be ordered. Without loss of generality, assume that the array is in ascending order.
e.g., 2 4 7 10 11 45 50 59 60 66 69 70 79
The binary search first compares the key with the element in the middle of the array.
Binary Search, cont.
Consider the following three cases:
If the key is less than the middle element, you only need to search for the key in the first half of the array.
If the key is equal to the middle element, the search ends with a match.
If the key is greater than the middle element, you only need to search for the key in the second half of the array.
Binary Search, cont.
[Trace: searching for key 11 in the list 2 4 7 10 11 45 50 59 60 66 69 70 79 (indices [0]..[12]):
key < 50 (the middle element), so search the first half, [0]..[5]: 2 4 7 10 11 45;
key > 7 (the middle of that range), so search [3]..[5]: 10 11 45;
key == 11: found at the middle index, 4.]
From Idea to Solution
/** Use binary search to find the key in the list */
public static int binarySearch(int[] list, int key) {
int low = 0;
int high = list.length - 1;
while (high >= low) {
int mid = (low + high) / 2;
if (key < list[mid])
high = mid - 1;
else if (key == list[mid])
return mid;
else
low = mid + 1;
}
return -1 - low;
}
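To make the method’s return convention concrete, a small usage sketch (the wrapper class and the sample list are my own; the method body is the one from the slide):

```java
public class BinarySearchDemo {
    /** Use binary search to find the key in the list (method from the slide). */
    public static int binarySearch(int[] list, int key) {
        int low = 0;
        int high = list.length - 1;
        while (high >= low) {
            int mid = (low + high) / 2;
            if (key < list[mid])
                high = mid - 1;
            else if (key == list[mid])
                return mid;
            else
                low = mid + 1;
        }
        // Key absent: return -(insertion point) - 1, like java.util.Arrays.binarySearch.
        return -1 - low;
    }

    public static void main(String[] args) {
        int[] list = {2, 4, 7, 10, 11, 45, 50, 59, 60, 66, 69, 70, 79};
        System.out.println(binarySearch(list, 11)); // 4 (found at index 4)
        System.out.println(binarySearch(list, 12)); // -6 (12 would be inserted at index 5)
    }
}
```

A negative return thus tells the caller both that the key is missing and where it would belong.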
Logarithm: Analyzing Binary Search
Each iteration in the algorithm contains a fixed number of operations, denoted by c. Let T(n) denote the time complexity for a binary search on a list of n elements. Without loss of generality, assume n is a power of 2, i.e., n = 2^k and k = log n. Since binary search eliminates half of the input after each iteration:

T(n) = c + T(n/2)
     = c + c + T(n/2^2)
     = c + c + c + T(n/2^3)
     = …
     = ck + T(n/2^k)
     = c log n + T(1)
     = O(log n)
Logarithmic Time
An algorithm with O(log n) time complexity is called a logarithmic algorithm. The base of the log is 2, but the base does not affect a logarithmic growth rate, so it can be omitted. A logarithmic algorithm grows slowly as the problem size increases. If you square the input size, you only double the time for the algorithm.
Quadratic Time
An algorithm with O(n^2) time complexity is called a quadratic algorithm. A quadratic algorithm grows quickly as the problem size increases. If you double the input size, the time for the algorithm quadruples. Algorithms with a nested loop are often quadratic.
Insertion Sort
[Animation: insertion sort of {2, 9, 5, 4, 8, 1, 6}, inserting one unsorted element per step:
2 9 5 4 8 1 6
2 5 9 4 8 1 6
2 4 5 9 8 1 6
2 4 5 8 9 1 6
1 2 4 5 8 9 6
1 2 4 5 6 8 9]

int[] myList = {2, 9, 5, 4, 8, 1, 6}; // Unsorted
The insertion sort algorithm sorts a list of values by repeatedly inserting an unsorted element into a sorted sublist until the whole list is sorted.
How to Insert?
Inserting 4 into the sorted sublist 2 5 9 (indices [0]..[6]):

list: 2 5 9 4 …   Step 1: Save 4 (list[3]) to a temporary variable currentElement
list: 2 5 9 _ …   Step 2: Move list[2] to list[3]
list: 2 _ 5 9 …   Step 3: Move list[1] to list[2]
list: 2 4 5 9 …   Step 4: Assign currentElement to list[1]
InsertionSort
public static void insertionSort(int[] a) {
  int n = a.length;
  for (int i = 1; i < n; i++) {
    int temp = a[i];
    int j;
    for (j = i - 1; j >= 0 && temp < a[j]; j--) {
      a[j + 1] = a[j];  // Shift larger elements one position right.
    }
    a[j + 1] = temp;    // Insert the element into its position.
  }
}
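A small, self-contained usage sketch of the method above (the wrapper class and the sample list are my own):

```java
import java.util.Arrays;

public class InsertionSortDemo {
    // insertionSort from the slide, with the missing int declaration for n fixed.
    public static void insertionSort(int[] a) {
        int n = a.length;
        for (int i = 1; i < n; i++) {
            int temp = a[i];
            int j;
            for (j = i - 1; j >= 0 && temp < a[j]; j--) {
                a[j + 1] = a[j];  // shift larger elements right
            }
            a[j + 1] = temp;      // insert into place
        }
    }

    public static void main(String[] args) {
        int[] myList = {2, 9, 5, 4, 8, 1, 6};
        insertionSort(myList);
        System.out.println(Arrays.toString(myList)); // [1, 2, 4, 5, 6, 8, 9]
    }
}
```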
Analyzing Insertion Sort
At the kth iteration, inserting an element into a sorted sublist of size k may take k comparisons to find the insertion position and k moves to insert the element. Let T(n) denote the complexity for insertion sort and c denote the total number of other operations, such as assignments and additional comparisons, in each iteration. So,
Ignoring constants and smaller terms, the complexity of the
insertion sort algorithm is O(n2).
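Spelling out the sum behind that bound (a worked derivation of my own, using the per-iteration cost 2k + c stated above):

$$T(n) = \sum_{k=1}^{n-1} (2k + c) = 2\cdot\frac{(n-1)n}{2} + c(n-1) = n^2 + (c-1)n - c = O(n^2)$$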
Towers of Hanoi
There are n disks labeled 1, 2, 3, …, n, and three towers labeled A, B, and C.
All the disks are initially placed on tower A, and the goal is to move them all to tower B.
Only one disk can be moved at a time, and it must be the top disk on a tower.
No disk can be on top of a smaller disk at any time.
Towers of Hanoi, cont.
[Moving 3 disks from A to B:
Original position: disks 1, 2, 3 on tower A
Step 1: Move disk 1 from A to B
Step 2: Move disk 2 from A to C
Step 3: Move disk 1 from B to C
Step 4: Move disk 3 from A to B
Step 5: Move disk 1 from C to A
Step 6: Move disk 2 from C to B
Step 7: Move disk 1 from A to B]
Solution to Towers of Hanoi
The Towers of Hanoi problem can be decomposed into three subproblems:

[Original position: n disks on tower A
Step 1: Move the first n - 1 disks from A to C recursively
Step 2: Move disk n from A to B
Step 3: Move the n - 1 disks from C to B recursively]
Solution to Towers of Hanoi
Move the first n - 1 disks from A to C with the assistance of tower B.
Move disk n from A to B.
Move the n - 1 disks from C to B with the assistance of tower A.
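A minimal recursive sketch of this three-subproblem decomposition (class, method, and parameter names are my own, not from the slides):

```java
public class Hanoi {
    static int moves = 0; // counts single-disk moves

    // Moves n disks from tower 'from' to tower 'to', using 'aux' as the spare.
    static void move(int n, char from, char to, char aux) {
        if (n == 0) return;
        move(n - 1, from, aux, to);   // Step 1: n-1 disks from -> aux
        moves++;                      // Step 2: move disk n directly
        System.out.println("Move disk " + n + " from " + from + " to " + to);
        move(n - 1, aux, to, from);   // Step 3: n-1 disks aux -> to
    }

    public static void main(String[] args) {
        move(3, 'A', 'B', 'C');
        System.out.println("Total moves: " + moves); // 2^3 - 1 = 7
    }
}
```

The two recursive calls on n - 1 disks plus one direct move give exactly the T(n) = 2T(n-1) + c recurrence analyzed below.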
Analyzing Towers of Hanoi
Let T(n) denote the complexity for the algorithm that moves n disks and c denote the constant time to move one disk, i.e., T(1) = c. So,

T(n) = T(n-1) + c + T(n-1)
     = 2T(n-1) + c
     = 2(2T(n-2) + c) + c
     = 2^2 T(n-2) + 2c + c
     = …
     = 2^k T(n-k) + (2^(k-1) + … + 2 + 1)c

Since T(1) = c and n - k = 1 => k = n - 1:

T(n) = 2^(n-1) c + (2^(n-1) - 1)c = (2^n - 1)c = O(2^n)
Exponential algorithms are not practical. If disks are moved at a rate of 1 per second, it would take 2^32 - 1 seconds, about 136 years, to move 32 disks.
Comparing Common Growth Functions
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n)

O(1): constant time
O(log n): logarithmic time
O(n): linear time
O(n log n): log-linear time
O(n^2): quadratic time
O(n^3): cubic time
O(2^n): exponential time