

Randomized Algorithms

Andreas Klappenecker

[using some slides by Prof. Welch]


Discrete Probability Distributions

A probability distribution is called discrete if and only if its image is countable.

For example, the outcomes of the experiments:
• flipping two coins once
• flipping one coin infinitely often
can be modeled by discrete probability distributions.
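As a small illustration (not part of the original slides), the two-coin experiment can be modeled in Python by a distribution over a finite, hence countable, set of outcomes:

from fractions import Fraction

# Sample space for flipping two fair coins once: four equally likely outcomes.
outcomes = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]
dist = {outcome: Fraction(1, 4) for outcome in outcomes}

# The set of outcomes is finite, hence countable, so the distribution is discrete.
assert sum(dist.values()) == 1
print(dist[("H", "H")])  # 1/4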


Uniform Probability Distribution

Let X be a random variable that takes integral values in S = {1,2,…,n}.

The random variable X is said to be uniformly distributed if and only if

Pr[X = k] = 1/n

holds for all k in S.
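A quick sketch of this definition in Python (an illustration, not from the slides): sample from S = {1,…,n} with random.randint and check that the empirical frequencies approach 1/n.

import random
from collections import Counter

n, trials = 6, 100_000
# X is uniform on S = {1,...,n}: Pr[X = k] = 1/n for every k in S.
counts = Counter(random.randint(1, n) for _ in range(trials))
for k in range(1, n + 1):
    print(k, counts[k] / trials)  # each frequency should be close to 1/n ≈ 0.167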


Expected Value of a Random Variable

The most commonly used property of a random variable is its expected value

E[X] = ∑_v v · Pr[X = v],

where the sum runs over all values v that X can take. The expected value averages over all possible values of the random variable, weighted by their probabilities.
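In code, the defining sum translates directly. A minimal sketch, assuming the distribution is given as a dictionary mapping values to probabilities:

def expected_value(dist):
    # E[X] = sum over v of v * Pr[X = v]
    return sum(v * p for v, p in dist.items())

# Example: a fair die, i.e., the uniform distribution on {1,...,6}.
die = {v: 1 / 6 for v in range(1, 7)}
print(expected_value(die))  # 3.5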


Example

Let X be a uniformly distributed random variable with values in S = {1,2,…,n}. Then the expected value of X is

E[X] = ∑_v v · Pr[X = v]
     = (1/n) ∑ v          (sum over all v in S)
     = (1/n) · n(n+1)/2
     = (n+1)/2.
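A numeric sanity check of this derivation (illustrative only): for the uniform distribution on {1,…,n}, the computed expectation should match (n+1)/2.

n = 10
# E[X] = (1/n) * (1 + 2 + ... + n)
e = sum(v * (1 / n) for v in range(1, n + 1))
assert abs(e - (n + 1) / 2) < 1e-9
print(e)  # 5.5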


Linearity of Expectation

Let X and Y be random variables, and a, b be real numbers.

Then

E[aX + bY] = aE[X] + bE[Y].

This is a very useful property when calculating expected values. Note that it holds whether or not X and Y are independent.
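A small check of linearity for two fair dice (a hypothetical example, not from the slides), computing both sides exactly over the joint distribution:

# Two fair dice X and Y; check E[aX + bY] = a*E[X] + b*E[Y] exactly.
a, b = 3, -2
vals = range(1, 7)
# Joint distribution of (X, Y): each of the 36 pairs has probability 1/36.
lhs = sum((a * x + b * y) / 36 for x in vals for y in vals)
rhs = a * sum(x / 6 for x in vals) + b * sum(y / 6 for y in vals)
assert abs(lhs - rhs) < 1e-9
print(lhs, rhs)  # both ≈ 3.5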


Variance

The variance Var[X] of a random variable X is defined as

Var[X] = E[(X − E[X])²] = E[X²] − E[X]².

[Thus, the variance measures the expected squared deviation from the expected value.]
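Both forms of the definition are easy to compare in code. A sketch, assuming the same dict-based distributions as above:

def variance(dist):
    mean = sum(v * p for v, p in dist.items())
    # Form 1: E[(X - E[X])^2]
    var_centered = sum((v - mean) ** 2 * p for v, p in dist.items())
    # Form 2: E[X^2] - E[X]^2
    var_moments = sum(v ** 2 * p for v, p in dist.items()) - mean ** 2
    assert abs(var_centered - var_moments) < 1e-9
    return var_centered

die = {v: 1 / 6 for v in range(1, 7)}
print(variance(die))  # 35/12 ≈ 2.9167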


Example

A uniformly distributed random variable X with values in {1,2,…,n} has the variance

Var[X] = (n² − 1)/12.

Indeed,

Var[X] = E[X²] − E[X]²
       = (1/n) ∑ v² − ((n+1)/2)²      (sum over all v in {1,…,n})
       = (1/n) · n(n+1)(2n+1)/6 − ((n+1)/2)²
       = (n² − 1)/12.
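Again a quick exact check (illustrative): the uniform distribution on {1,…,n} should have variance (n² − 1)/12.

from fractions import Fraction

n = 10
p = Fraction(1, n)
mean = sum(v * p for v in range(1, n + 1))
var = sum(v * v * p for v in range(1, n + 1)) - mean ** 2
assert var == Fraction(n * n - 1, 12)
print(var)  # 33/4 for n = 10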


Exercise

Calculate the expected value and variance of all probability distributions given in the lecture notes yourself.

The expected value is usually easy to find. For the variance, you might need some tricks. Read the lecture notes on probability theory!


Randomized Algorithms (Recap)


Randomized Algorithms

• Instead of relying on a (perhaps incorrect) assumption that the inputs exhibit some distribution, create your own input distribution by, say, permuting the input randomly or taking some other random action.

• On the same input, a randomized algorithm has multiple possible executions.

• No single input elicits the worst-case behavior.

• Typically, we analyze the average-case behavior for the worst possible input.


Probabilistic Analysis vs. Randomized Algorithm

• Probabilistic analysis of a deterministic algorithm: assume some probability distribution on the inputs.

• Randomized algorithm: use random choices in the algorithm itself.


Random Permutation


Randomly Permuting an Array

RandomPermute(A)
Input: array A[1..n]
for i := 1 to n do
    j := value in [i..n] chosen uniformly at random;
    swap(A[i], A[j]);
od;
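The pseudocode translates directly into Python. A runnable sketch (using 0-based indices and random.randint, which is inclusive on both ends):

import random

def random_permute(a):
    """Permute the list a in place, uniformly at random."""
    n = len(a)
    for i in range(n):
        # Choose j uniformly from {i, i+1, ..., n-1} (0-based analogue of [i..n]).
        j = random.randint(i, n - 1)
        a[i], a[j] = a[j], a[i]
    return a

print(random_permute(list(range(1, 6))))  # e.g., [3, 1, 5, 2, 4]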


Analysis

Our goal is to show that after the i-th iteration of the for loop, the subarray A[1..i] equals each fixed permutation of i elements from {1,…,n} with probability (n–i)!/n!.

We prove this by induction on i.


Induction Basis

Claim: After the first iteration, A[1] contains each permutation of one element from the set {1,…,n} with probability (n–1)!/n! = 1/n.

The claim is indeed true, since A[1] is swapped with an element drawn from the entire array uniformly at random.


Induction Step

Induction hypothesis: Assume that after the (i–1)-st iteration of the for loop, A[1..i–1] equals each permutation of i–1 elements from {1,…,n} with probability (n–(i–1))!/n!.

The probability that A[1..i] contains the permutation (x1, x2, …, xi) is equal to

• the probability that A[1..i–1] contains (x1, x2, …, xi–1) after the (i–1)-st iteration

• and that the i-th iteration puts xi in A[i].


Formalizing

• Let E1 be the event that A[1..i–1] contains (x1, x2, …, xi–1) after the (i–1)-st iteration.

• Let E2 be the event that the i-th iteration puts xi in A[i].

• We need to show that Pr[E1E2] = (n–i)!/n!.

[The events E1 and E2 are not independent: if some element appears in A[1..i–1], then it is not available to appear in A[i].]


Conditional Probability

• Formalizes having partial knowledge about the outcome of an experiment

• Example: flip two fair coins.
  • The probability of two heads is 1/4.
  • The probability of two heads when you already know that the first coin is a head is 1/2.

• The conditional probability of A given that B occurs, written Pr[A|B], is defined to be

Pr[AB]/Pr[B].
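The coin example can be checked by simulation. A small sketch (illustrative, not from the slides):

import random

trials = 100_000
two_heads = first_head = 0
for _ in range(trials):
    c1 = random.random() < 0.5  # True means heads
    c2 = random.random() < 0.5
    if c1:
        first_head += 1
        if c2:
            two_heads += 1
# Conditional probability Pr[two heads | first is a head] = Pr[AB] / Pr[B].
print(two_heads / trials)      # ≈ 1/4  (unconditional probability of two heads)
print(two_heads / first_head)  # ≈ 1/2  (conditioned on the first coin being a head)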


Conditional Probabilities to the Rescue

Recall that Pr[AB] = Pr[A|B]·Pr[B].


Calculating the Probabilities

• Recall: E1 is the event that A[1..i–1] = (x1, …, xi–1).
• Recall: E2 is the event that A[i] = xi.
• Pr[E1E2] = Pr[E2|E1]·Pr[E1].
• Pr[E2|E1] = 1/(n–i+1) because
  • xi is available in A[i..n] to be chosen, since E1 already occurred and did not include xi, and
  • every element in A[i..n] is equally likely to be chosen.
• Pr[E1] = (n–(i–1))!/n! by the inductive hypothesis.
• So Pr[E1E2] = [1/(n–i+1)]·[(n–(i–1))!/n!] = (n–i)!/n!.
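Exact arithmetic confirms this final step. A sketch using Python's fractions module (illustrative) multiplies Pr[E2|E1] by the inductive probability and compares with (n–i)!/n! for every i:

from fractions import Fraction
from math import factorial

n = 8
for i in range(1, n + 1):
    pr_e1 = Fraction(factorial(n - (i - 1)), factorial(n))  # inductive hypothesis
    pr_e2_given_e1 = Fraction(1, n - i + 1)
    assert pr_e2_given_e1 * pr_e1 == Fraction(factorial(n - i), factorial(n))
print("Pr[E1E2] = (n-i)!/n! holds for all i")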


Conclusion

• After the last (n-th) iteration, the inductive hypothesis tells us that the array A[1..n] equals each permutation of the n elements of {1,…,n} with probability

(n–n)!/n! = 1/n!.

• Thus the algorithm gives us a permutation of the array chosen uniformly at random.
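As an empirical check (illustrative only), one can run the random_permute sketch from above many times on a small array and count how often each permutation appears; all n! frequencies should be roughly equal.

import random
from collections import Counter

def random_permute(a):
    for i in range(len(a)):
        j = random.randint(i, len(a) - 1)
        a[i], a[j] = a[j], a[i]
    return a

trials = 60_000
counts = Counter(tuple(random_permute([1, 2, 3])) for _ in range(trials))
# 3! = 6 permutations; each should appear with frequency close to 1/6.
for perm, c in sorted(counts.items()):
    print(perm, c / trials)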
