Metaheuristics

  • Slide 1/21

    Lec - 1

  • Slide 2/21

    Stochastic optimization is the general class of algorithms and
    techniques which employ some degree of randomness to find optimal
    (or as optimal as possible) solutions to hard problems.

    Metaheuristics are the most general of these kinds of algorithms,
    and are applied to a very wide range of problems.

  • Slide 3/21

    Metaheuristics are applied to "I know it when I see it" problems.

    They're algorithms used to find answers to problems when you have
    very little to help you:

    you don't know what the optimal solution looks like,

    you don't know how to go about finding it in a principled way,

    you have very little heuristic information to go on,

    and brute-force search is out of the question because the space is
    too large.

    But if you're given a candidate solution to your problem, you can
    test it and assess how good it is. That is, you know a good one
    when you see it.

  • Slide 4/21

    Suppose you're trying to find an optimal set of robot behaviors
    for a soccer goalie robot.

    What do you have? You have a simulator for the robot and can test
    any given robot behavior set and assign it a quality (you know a
    good one when you see it). And you've come up with a definition
    for what robot behavior sets look like in general.

    What do you not have? You have no idea what the optimal behavior
    set is, nor even how to go about finding it.

  • Slide 5/21

    The simplest approach is to do a Random Search: just try random
    behavior sets for as long as you have time, and return the best
    one you discovered.
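
    Below is a minimal Python sketch of Random Search. The helpers
    random_solution() and quality() are hypothetical stand-ins for the
    problem-specific pieces the slide assumes (e.g., sampling a robot
    behavior set and scoring it in the simulator); the toy encoding
    and objective here are assumptions for illustration only.

    import random

    def random_solution():
        # Hypothetical encoding: a candidate is 10 numbers in [-1, 1].
        return [random.uniform(-1.0, 1.0) for _ in range(10)]

    def quality(candidate):
        # Hypothetical stand-in for the simulator's score; this toy
        # objective just prefers values near 0.5.
        return -sum((x - 0.5) ** 2 for x in candidate)

    def random_search(budget):
        # Try random candidates as long as the budget allows; keep the best.
        best = random_solution()
        best_q = quality(best)
        for _ in range(budget - 1):
            candidate = random_solution()
            q = quality(candidate)
            if q > best_q:
                best, best_q = candidate, q
        return best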

  • Slide 6/21

    You may consider the following alternative strategy:

    Start with a random behavior set. Then make a small, random
    modification to it and try the new version. If the new version is
    better, throw the old one away; else throw the new version away.

    Now make another small, random modification to your current
    version. If this newest version is better, throw away your current
    version; else throw away the newest version. Repeat as long as you
    can.

    This is called Hill Climbing!
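
    A minimal Python sketch of the strategy just described, reusing
    the hypothetical random_solution() and quality() helpers from the
    Random Search sketch above; tweak() (the small random
    modification) is likewise an assumed, problem-specific operation.

    import random

    def tweak(candidate):
        # Small random modification: nudge one element a little.
        copy = list(candidate)
        i = random.randrange(len(copy))
        copy[i] += random.gauss(0.0, 0.1)
        return copy

    def hill_climb(budget):
        current = random_solution()        # start with a random candidate
        for _ in range(budget - 1):
            candidate = tweak(current)     # small, random modification
            if quality(candidate) > quality(current):
                current = candidate        # keep the better of the two
        return current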

  • Slide 7/21

    Similar solutions tend to behave similarly (and tend to have
    similar quality). This is usually true for many problems.

    So small modifications will generally result in small,
    well-behaved changes in quality, allowing us to climb the hill of
    quality up to good solutions.

    This heuristic belief is one of the central defining features of
    metaheuristics: indeed, nearly all metaheuristics are essentially
    elaborate combinations of hill-climbing and random search.

  • Slide 8/21 [figure]

  • Slide 9/21

    This is not a metaheuristic technique!

    Gradient Ascent is a traditional mathematical method for finding
    the maximum of a function.

    Consider a function f(x), which we want to maximize: identify the
    slope and move up it.

  • Slide 10/21 [figure]

  • Slide 11/21

    So, a very simple method.

    This method doesn't require us to compute or even know f(x)
    itself. But it does assume that we can compute the slope of f(x);
    that is, we have the derivative f′(x).

    Gradient Descent is the same idea, used to find the minimum of a
    function.

  • Slide 12/21 [figure]

  • Slide 13/21

    For one dimension:
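
    The update that belongs here (shown as an image on the original
    slide) is the standard one-dimensional Gradient Ascent rule,
    repeated until convergence:

        x ← x + α f′(x)

    where α is a small positive constant controlling the step size.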

  • Slide 14/21

    When does the algorithm end?

    As soon as we have found the ideal solution, or we have run out of
    time!

    How do we know when we have found the ideal solution? Easy! When
    the slope is zero.

    But wait! The slope is zero at minima as well! However, we can get
    around this.

    But the slope is also zero at saddle points!
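
    A hedged Python sketch of Gradient Ascent with exactly these two
    stopping rules: stop when the slope is (nearly) zero, or when the
    step budget runs out. The quadratic toy objective in the usage
    line is an assumption for illustration.

    def gradient_ascent(df, x, alpha=0.1, eps=1e-8, max_steps=10_000):
        # df is the derivative f'(x); alpha is the step size.
        for _ in range(max_steps):         # "we have run out of time"
            slope = df(x)
            if abs(slope) < eps:           # slope ~ zero: a critical point
                break                      # (could be a max, min, or saddle)
            x = x + alpha * slope          # move up the slope
        return x

    # Example: maximize f(x) = -(x - 3)^2, whose derivative is -2(x - 3).
    print(gradient_ascent(lambda x: -2.0 * (x - 3.0), x=0.0))   # ≈ 3.0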

  • Slide 15/21 [figure]

  • Slide 16/21

    Overshooting the Maximum:

    As we get close to the maximum of the function, Gradient Ascent
    will overshoot the top and land on the other side of the hill. It
    may overshoot the top many times, bouncing back and forth as it
    moves closer to the maximum.

  • Slide 17/21

    One reason for overshooting: the size of the jumps is based
    entirely on the current slope. The steeper the slope, the larger
    the jump.

    One solution: adjust α.

    α smaller => no overshooting, but a long time to march up the
    hills.

    α higher => constant overshooting, and thus a long convergence
    time.

    So, we want α to be just right.

  • Slide 18/21

    Assume that we can also compute the second derivative f″(x). Then
    we can modify the update as follows:
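
    The modified update (shown as an image on the original slide) is
    presumably the Newton's Method rule named on slide 21:

        x ← x − α f′(x) / f″(x)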

    This dampens the step size as we approach zero slope.

    The multidimensional version is not so easy! The multidimensional
    version of f″(x) is a matrix of second partial derivatives called
    the Hessian. Also, matrix inversion is involved!

  • Slide 19/21

    The Hessian consists of the partial second derivatives along each
    pair of dimensions.
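
    In symbols (the standard definition, written out here since the
    slide's image did not survive extraction):

    \[
    Hf(x) =
    \begin{bmatrix}
    \dfrac{\partial^2 f}{\partial x_1^2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \, \partial x_n} \\
    \vdots & \ddots & \vdots \\
    \dfrac{\partial^2 f}{\partial x_n \, \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
    \end{bmatrix}
    \]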

  • Slide 20/21

    For one dimension:
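
    A hedged Python sketch of one-dimensional Newton's Method under
    the update above; the toy objective f(x) = -(x - 3)^2 (so
    f′(x) = -2(x - 3) and f″(x) = -2) is an assumption for
    illustration.

    def newtons_method(df, d2f, x, alpha=1.0, eps=1e-8, max_steps=100):
        # df is f'(x); d2f is f''(x); alpha scales the Newton step.
        for _ in range(max_steps):
            slope = df(x)
            if abs(slope) < eps:            # zero slope: stop
                break
            x = x - alpha * slope / d2f(x)  # second derivative scales the step
        return x

    print(newtons_method(lambda x: -2.0 * (x - 3.0), lambda x: -2.0, x=0.0))  # ≈ 3.0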

  • Slide 21/21

    Because it employs the second derivative, Newton's Method
    generally converges faster than regular Gradient Ascent.

    It also gives us the information to determine whether we're at a
    local maximum (as opposed to a minimum or saddle point):

    at a maximum, f′(x) is zero and f″(x) is negative.