Big Oh Notation
Big Oh Notation

• The Greek letter Omicron (Ο) is used to denote the limit of asymptotic growth of an algorithm.
• If an algorithm's processing time grows linearly with the input size n, then we say the algorithm is order n, or O(n).
• This notation isolates an algorithm's run-time from other factors:
  – Size of the problem set
  – Initialization time
  – Processor speed and instruction set
Big-Oh notation

• Let b(x) be the bubble sort algorithm.
• We say b(x) is O(n²).
  – This is read as "b(x) is big-oh n²."
  – This means that as the input size increases, the running time of the bubble sort will increase in proportion to the square of the input size.
  – In other words, it is bounded by some constant times n².
• Let l(x) be the linear (or sequential) search algorithm.
• We say l(x) is O(n).
  – Meaning the running time of the linear search increases in direct proportion to the input size.
Big-Oh notation

• Consider: b(x) is O(n²).
  – That means that b(x)'s running time is less than (or equal to) some constant times n².
• Consider: l(x) is O(n).
  – That means that l(x)'s running time is less than (or equal to) some constant times n.
Big-Oh proofs

• Show that f(x) = x² + 2x + 1 is O(x²).
  – In other words, show that x² + 2x + 1 ≤ c·x²
    • where c is some constant,
    • for all input sizes greater than some threshold k.
  – We know that 2x² ≥ 2x whenever x ≥ 1, and we know that x² ≥ 1 whenever x ≥ 1.
  – So we replace 2x + 1 with the larger 3x², which gives x² + 3x² = 4x².
  – This yields x² + 2x + 1 ≤ 4x² ≤ c·x².
• Thus, for input sizes 1 or greater, when the constant is 4 or greater, f(x) is O(x²).
• We could have chosen different values for c and k.
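The witness pair the proof produces (c = 4, threshold 1) can be spot-checked numerically; this is a sanity check, not a substitute for the proof:

```python
# Numeric spot-check of the witness pair c = 4, k = 1
# for f(x) = x^2 + 2x + 1 being O(x^2).
def f(x):
    return x * x + 2 * x + 1

# f(x) <= 4 * x^2 should hold for every x >= 1
violations = [x for x in range(1, 10_001) if f(x) > 4 * x * x]
print(violations)  # [] -- the bound held everywhere we checked
```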
Sample Big-Oh problems
• Show that f(x) = x² + 1000 is O(x²).
  – In other words, show that x² + 1000 ≤ c·x².
  – We know that x² > 1000 whenever x > 31.
  – Thus, we replace 1000 with x², which yields 2x² ≤ c·x².
• Thus, f(x) is O(x²) for all x > 31 when c ≥ 2.
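Again, the witness pair (c = 2, threshold 31) can be spot-checked, including that the bound genuinely fails just below the threshold:

```python
# Spot-check of the witness pair c = 2, k = 31 for f(x) = x^2 + 1000:
# x^2 + 1000 <= 2 * x^2 should hold for every x > 31.
violations = [x for x in range(32, 100_000) if x * x + 1000 > 2 * x * x]
print(violations)            # [] -- the bound held
print(31 * 31 + 1000 <= 2 * 31 * 31)  # False: 31 is below the threshold
```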
Sample Big-Oh problems
• Show that f(x) = 3x + 7 is O(x).
  – In other words, show that 3x + 7 ≤ c·x.
  – We know that x > 7 whenever x > 7. (Uh huh….)
  – So we replace 7 with x, which yields 3x + x = 4x ≤ c·x.
• Thus, f(x) is O(x) for all x > 7 when c ≥ 4.
A variant of the last question

• Show that f(x) = 3x + 7 is O(x²).
  – In other words, show that 3x + 7 ≤ c·x².
  – We know that x > 7 whenever x > 7. (Uh huh….)
  – So we replace 7 with x, which yields 4x < c·x².
  – This will also be true for x > 7 when c ≥ 1.
• Thus, f(x) is O(x²) for all x > 7 when c ≥ 1.
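Both witness pairs for f(x) = 3x + 7 can be checked the same way — c = 4 with threshold 7 for O(x), and c = 1 with threshold 7 for O(x²):

```python
# Spot-check both witness pairs for f(x) = 3x + 7.
def f(x):
    return 3 * x + 7

bad_linear = [x for x in range(8, 10_001) if f(x) > 4 * x]      # c = 4, O(x)
bad_quadratic = [x for x in range(8, 10_001) if f(x) > x * x]   # c = 1, O(x^2)
print(bad_linear, bad_quadratic)  # [] [] -- both bounds held for x > 7
```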
What that means

• If a function is O(x), then it is also O(x²), and also O(x³).
• Meaning an O(x) function grows at a rate slower than or equal to x, x², x³, etc.
Function growth rates

For input size n = 1000:

| Growth rate | Approximate steps |
|---|---|
| O(1) | 1 |
| O(log n) | ≈10 |
| O(n) | 10³ |
| O(n log n) | ≈10⁴ |
| O(n²) | 10⁶ |
| O(n³) | 10⁹ |
| O(n⁴) | 10¹² |
| O(nᶜ) | 10³ᶜ (c is a constant) |
| 2ⁿ | ≈10³⁰¹ |
| n! | ≈10²⁵⁶⁸ |
| nⁿ | 10³⁰⁰⁰ |

Many interesting problems fall into these categories
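The approximations in the table are easy to reproduce; a sketch that uses base-10 logarithms to express the huge counts as powers of ten:

```python
import math

n = 1000

# Polynomial rates evaluate directly.
print(n, n ** 2, n ** 3, n ** 4)               # 1000, 10^6, 10^9, 10^12

# Super-polynomial rates are easier to state as powers of ten.
print(round(n * math.log10(2)))                # 2^n  ~ 10^301
print(round(math.log10(math.factorial(n))))    # n!   ~ 10^2568
print(round(n * math.log10(n)))                # n^n  = 10^3000
```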
Function growth rates

(Chart omitted: the growth rates plotted against input size, on a logarithmic scale.)
Integer factorization

• Factoring a composite number into its component primes by brute-force search is O(2ⁿ),
  – where n is the number of bits in the number.
• Thus, if we choose 2048-bit numbers (as in RSA keys), it takes on the order of 2²⁰⁴⁸ steps.
  – That's about 10⁶¹⁷ steps!
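To make the bit-count framing concrete, here is a minimal trial-division factorizer (a sketch; note that even stopping at √N, trial division takes on the order of 2^(n/2) division steps for an n-bit number with no small factors):

```python
# Minimal trial-division factorization; the work grows exponentially
# in the bit length of the input when the factors are large.
def factorize(num):
    factors = []
    d = 2
    while d * d <= num:          # only need to test divisors up to sqrt(num)
        while num % d == 0:
            factors.append(d)
            num //= d
        d += 1
    if num > 1:                  # whatever remains is prime
        factors.append(num)
    return factors

print(factorize(3 * 5 * 7 * 11))  # [3, 5, 7, 11]
```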
Formal Big-Oh definition

• Let f and g be functions. We say that f(x) is O(g(x)) if there are constants C and k such that
  |f(x)| ≤ C·|g(x)| whenever x > k.
Big Omega (Ω) and Big Theta (Θ)

• If Big-Oh is a less-than-or-equal relationship, then Big Omega is the greater-than-or-equal counterpart:
  – |f(x)| ≥ C·|g(x)| whenever x > k
• And Big Theta is the "equals":
  – if f(x) is O(g(x)) and Ω(g(x)), then it is Θ(g(x)).
• Examples: x² is Θ(x²); x is O(x²); x² is Ω(x).
A useful recursive algorithm

• Merge sort

```
procedure mergesort(L = a₁, …, aₙ)
    if n > 1 then
        m := floor(n/2)
        L₁ := a₁, a₂, …, aₘ
        L₂ := aₘ₊₁, aₘ₊₂, …, aₙ
        L := merge(mergesort(L₁), mergesort(L₂))
{L is now sorted into elements of increasing order}
```
mergesort needs merge

```
procedure merge(L₁, L₂ : lists)
    L := empty list
    while L₁ and L₂ are both nonempty
    begin
        remove the smaller of the first elements of L₁ and L₂
            from the list it is in and put it at the right end of L
        if removal of this element makes one list empty
        then remove all elements from the other list
            and append them to L
    end
    return L
{L is the merged list with elements in increasing order}
```
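A runnable Python translation of the two procedures (a sketch; list slicing stands in for splitting L into L₁ and L₂):

```python
# Python translation of the mergesort/merge pseudocode.
def merge(l1, l2):
    out = []
    i = j = 0
    # repeatedly move the smaller front element to the end of the output
    while i < len(l1) and j < len(l2):
        if l1[i] <= l2[j]:
            out.append(l1[i]); i += 1
        else:
            out.append(l2[j]); j += 1
    # one list is exhausted: append the remainder of the other
    out.extend(l1[i:])
    out.extend(l2[j:])
    return out

def mergesort(lst):
    n = len(lst)
    if n <= 1:
        return lst
    m = n // 2
    return merge(mergesort(lst[:m]), mergesort(lst[m:]))

print(mergesort([8, 2, 4, 6, 9, 7, 10, 1, 5, 3]))  # [1, 2, 3, ..., 10]
```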
Time complexity

• First: how many recursive calls are there for n inputs?
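One way to build intuition for that question is to count the calls directly; the sketch below assumes the standard halving recursion (for n a power of two, the total works out to 2n − 1 calls):

```python
# Count mergesort invocations (including the top-level call) for input size n,
# assuming each call on n > 1 elements recurses on the two halves.
def count_calls(n):
    calls = 1
    if n > 1:
        m = n // 2
        calls += count_calls(m) + count_calls(n - m)
    return calls

for n in [1, 2, 4, 8, 1024]:
    print(n, count_calls(n))  # for powers of two: 2n - 1
```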
Satisfiability

• Consider a Boolean expression of the form:
  – (x₁ ∨ x₂ ∨ x₃) ∧ (x₂ ∨ x₃ ∨ x₄) ∧ (x₁ ∨ x₄ ∨ x₅)
  – This is a conjunction of disjunctions.
• Is such an expression satisfiable?
  – In other words, can you assign truth values to all the xᵢ's such that the expression is true?
  – The above problem is easy (only 3 clauses of 3 variables each): set x₁, x₂, and x₄ to true.
    • There are other possibilities: set x₁, x₂, and x₅ to true, etc.
  – But consider an expression with 1000 variables and thousands of clauses.
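Brute force over all 2⁵ assignments is easy to write down; a sketch (the connectives were garbled in the transcript, so this assumes the plain-disjunction reading consistent with the stated solutions):

```python
from itertools import product

# Assumed reading of the example expression:
# (x1 or x2 or x3) and (x2 or x3 or x4) and (x1 or x4 or x5)
def formula(x1, x2, x3, x4, x5):
    return (x1 or x2 or x3) and (x2 or x3 or x4) and (x1 or x4 or x5)

# Trying all 2^5 assignments -- exactly the O(2^n) brute force.
solutions = [v for v in product([False, True], repeat=5) if formula(*v)]
print(len(solutions) > 0)                             # True: satisfiable
print((True, True, False, True, False) in solutions)  # True: x1, x2, x4 works
```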
Satisfiability

• If given a solution, it is easy to check whether it works.
  – Plug in the values; this can be done quickly, even by hand.
• However, there is no known efficient way to find such a solution.
  – The only definitive way to do so is to try all possible values for the n Boolean variables.
  – That means this is O(2ⁿ)!
  – Thus it is not known to be solvable in polynomial time.
    • NP stands for "nondeterministic polynomial" time.
• Cook's theorem (1971) states that SAT is NP-complete.
  – There still may be an efficient way to solve it, though!
NP-completeness

• There are hundreds of NP-complete problems.
  – It has been shown that if you can solve one of them efficiently, then you can solve them all.
  – Example: the traveling salesman problem.
    • Given a number of cities and the costs of traveling from any city to any other city, what is the cheapest round-trip route that visits each city exactly once and then returns to the starting city?
• Not all problems that take O(2ⁿ) time are NP-complete.
  – In particular, integer factorization (also O(2ⁿ)) is not thought to be NP-complete.
NP-completeness

• It is "widely believed" that there is no efficient solution to NP-complete problems.
  – In other words, nearly everybody holds that belief.
• If you could solve an NP-complete problem in polynomial time, you would be showing that P = NP.
  – And you'd get a million-dollar prize (and lots of fame!).
  – If this were possible, it would be like proving that Newton's or Einstein's laws of physics were wrong.
• In summary:
  – NP-complete problems are very difficult to solve, but their solutions are easy to check.
  – It is believed that there is no efficient way to solve them.
Reserve
An aside: inequalities

• If you have an inequality you need to show:
  – x < y
• You can replace the lesser side with something greater:
  – x + 1 < y
• If you can still show this to be true, then the original inequality is true.
• Consider showing that 15 < 20.
  – You can replace 15 with 16, and then show that 16 < 20. Because 15 < 16 and 16 < 20, then 15 < 20.
An aside: inequalities

• If you have an inequality you need to show:
  – x < y
• You can replace the greater side with something lesser:
  – x < y − 1
• If you can still show this to be true, then the original inequality is true.
• Consider showing that 15 < 20.
  – You can replace 20 with 19, and then show that 15 < 19. Because 15 < 19 and 19 < 20, then 15 < 20.
An aside: inequalities

• What if you do such a replacement and can't show anything?
  – Then you can't say anything about the original inequality.
• Consider showing that 15 < 20.
  – You can replace 20 with 10.
  – But you can't show that 15 < 10.
  – So you can't say anything one way or the other about the original inequality.