Lecture 1: An Introduction to Optimization
Classification and Case Study
An Introduction to Metaheuristics: Produced by Qiangfu Zhao (Since 2012), All rights reserved (C) Lec01/1
Un-constrained Optimization
• Generally speaking, an optimization problem has an objective function f(x).
• The problem is represented by

  min (or max) f(x), for all x

• This is called an un-constrained optimization problem.
Un-constrained Optimization
• Usually, x is a "point" in an N-dimensional Euclidean space R^N, and f(x) is a point in R^M.
• In this course, we study only the case in which M = 1. That is, we have only one objective to optimize.
• Some special considerations are needed to extend the results obtained here to "multiple objective" cases.
• Interested students may also study optimization in "non-Euclidean" spaces (i.e., manifolds).
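As a concrete illustration of un-constrained optimization in R^N, the sketch below minimizes a simple quadratic f(x) = Σ(x_i − c_i)^2 by plain gradient descent. The target vector c, the starting point, and the step size are all illustrative choices, not taken from the lecture.

```python
# Gradient descent on a simple unconstrained problem in R^N:
# minimize f(x) = sum((x_i - c_i)^2), whose unique minimizer is x = c.

def grad_descent(c, lr=0.1, steps=200):
    x = [0.0] * len(c)                 # start from the origin
    for _ in range(steps):
        # gradient of f at x is 2 * (x - c)
        x = [xi - lr * 2.0 * (xi - ci) for xi, ci in zip(x, c)]
    return x

c = [3.0, -1.0, 0.5]                   # hypothetical target point in R^3
x_star = grad_descent(c)
print(x_star)                          # approaches [3.0, -1.0, 0.5]
```

Each step shrinks the distance to c by a constant factor (1 − 2·lr), so the iterate converges geometrically for any step size below 1.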
Constrained Optimization
• The domain can be a sub-space D of R^N.
• We then have a constrained optimization problem:

  min (or max) f(x)
  s.t. x ∈ D

• D can in turn be defined by some functions:
- x_i > 0, i = 1, 2, ...
- g_i(x) > 0, i = 1, 2, ...
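One common bridge between the constrained and un-constrained settings (not covered on this slide) is the penalty method: the constraint is folded into the objective as an extra cost that grows as the constraint is violated. A minimal sketch, assuming a quadratic penalty and a made-up one-dimensional problem:

```python
# Quadratic-penalty sketch: minimize f(x) = (x - 2)^2 subject to x <= 1
# by minimizing f(x) + mu * max(0, x - 1)^2 for increasing penalty weight mu.
# The problem instance and penalty weights are illustrative.

def penalized_min(mu, steps=2000):
    x = 0.0
    lr = 0.4 / (1.0 + mu)          # step size small enough to stay stable
    for _ in range(steps):
        g = 2.0 * (x - 2.0)        # gradient of f
        if x > 1.0:                # gradient of the active penalty term
            g += 2.0 * mu * (x - 1.0)
        x -= lr * g
    return x

for mu in (1.0, 10.0, 100.0):
    print(mu, penalized_min(mu))   # x drifts toward the constrained optimum x = 1
```

For this instance the penalized minimizer is (2 + mu)/(1 + mu), which approaches the true constrained optimum x = 1 as mu grows.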
Linear programming
• If both f(x) and the g_i(x) are linear functions, we have a linear optimization problem, and this is usually called linear programming (LP).
• For LP, very efficient algorithms already exist, and meta-heuristics are not needed.
Non-linear programming
• If f(x) or any g_i(x) is non-linear, we have a non-linear optimization problem, and this is often called non-linear programming (NLP).
• Many methods have been proposed to solve this class of problems.
• However, conventional methods usually find only local optimal solutions. Meta-heuristic methods are useful for finding global solutions.
Local optimal and global optimal
• For a minimization problem:
- A solution x* is local optimal if f(x*) < f(x) for all x in the ε-neighborhood of x*, where ε > 0 is a real number, the radius of the neighborhood.
- A solution x* is global optimal if f(x*) < f(x) for all x in the search space (the problem domain).
• Meta-heuristics are useful for obtaining global optimal solutions efficiently.
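The distinction can be seen on a small multimodal function. In the sketch below, gradient descent started from an arbitrary point settles into a nearby local minimum, while a coarse scan of the search space finds the much lower global one. The function, starting point, and step size are illustrative choices.

```python
# A 1-D function with a local and a global minimum:
#   f(x) = x^4 - 3x^2 + x
# Gradient descent from x0 = 2 slides into the local minimum near x ~ 1.13,
# while a coarse grid scan over [-2, 2] locates the global one near x ~ -1.3.

def f(x):
    return x**4 - 3*x**2 + x

def fprime(x):
    return 4*x**3 - 6*x + 1

x = 2.0
for _ in range(1000):
    x -= 0.01 * fprime(x)          # plain gradient descent
local_val = f(x)                   # value at the local minimum

grid = [-2.0 + 4.0 * k / 400 for k in range(401)]
global_val = min(f(g) for g in grid)

print(local_val, global_val)       # the scan finds a much lower value
```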
Example 1: Linear Programming
• Two materials are used for making two products.
• The prices of the products are 25 and 31 (in million yen), and those of the materials are 0.5 and 0.8 (in million yen).
• Suppose that we produce x1 units of product 1 and x2 units of product 2.
• We can get 25*x1 + 31*x2 million yen by selling the products.
• On the other hand, we must pay (7*x1 + 5*x2)*0.5 + (4*x1 + 8*x2)*0.8 million yen to buy the materials.

               Used in Product 1   Used in Product 2
  Material 1          7                   5
  Material 2          4                   8
Example 1: Linear Programming
• The problem can be formulated as follows:

  max f(x1, x2) = 18.3*x1 + 22.1*x2
  s.t. x1 > 0; x2 > 0;
       6.7*x1 + 8.9*x2 < B

• The first set of constraints means that both products should be produced to satisfy social needs; the second constraint is the budget limitation.
• This is a typical linear programming problem, and it can be solved efficiently using the well-known simplex algorithm.
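A fundamental property of LP (and the basis of the simplex algorithm) is that an optimum lies at a vertex of the feasible region. For this two-variable instance the vertices can simply be checked directly; the budget value B = 100 below is an assumed number, since B is left symbolic on the slide.

```python
# For a 2-variable LP, an optimal solution lies at a vertex of the feasible
# region. With the slide's data and an assumed budget B = 100, the vertices of
# { x1 >= 0, x2 >= 0, 6.7*x1 + 8.9*x2 <= B } can be evaluated exhaustively.

B = 100.0                                        # hypothetical budget

def profit(x1, x2):
    return 18.3 * x1 + 22.1 * x2

# vertices: the origin and the two axis intercepts of the budget line
vertices = [(0.0, 0.0), (B / 6.7, 0.0), (0.0, B / 8.9)]
best = max(vertices, key=lambda v: profit(*v))
print(best, profit(*best))   # producing only product 1 maximizes profit here
```

Product 1 wins because it yields more profit per unit of budget (18.3/6.7 vs. 22.1/8.9).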
Example 2: Non-linear programming
• Given N observations (x_1, f(x_1)), (x_2, f(x_2)), ..., (x_N, f(x_N)) of an unknown function f(x).
• Find a polynomial g(x) = a0 + a1*x + a2*x^2 such that

  min E(a0, a1, a2) = Σ_{i=1}^{N} [f(x_i) − g(x_i)]^2 + λ R(g(x))

• Note that in this problem g(x) is also a function of a0, a1, and a2.
• The first term is the approximation error, and the second term is a regularization factor that can make the solution better (e.g., smoother).
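If the approximation error is a sum of squares and the regularizer R is taken to be the squared size of the coefficients (a ridge penalty, which is only one possible choice of R), the minimization has a closed-form solution via the normal equations. A sketch with made-up data:

```python
# Least-squares fit of g(x) = a0 + a1*x + a2*x^2 to samples of f, with a
# ridge penalty lam * (a0^2 + a1^2 + a2^2) standing in for the regularizer R.
# Solves the normal equations (X^T X + lam*I) a = X^T y.
import numpy as np

def fit_quadratic(xs, ys, lam=0.0):
    X = np.vander(xs, 3, increasing=True)      # columns: 1, x, x^2
    A = X.T @ X + lam * np.eye(3)
    return np.linalg.solve(A, X.T @ ys)

xs = np.linspace(-1, 1, 20)
ys = 2.0 - 1.0 * xs + 0.5 * xs**2              # samples of a known quadratic
a = fit_quadratic(xs, ys)                      # lam = 0 recovers the coefficients
print(a)                                       # close to [2.0, -1.0, 0.5]
```

With lam > 0 the recovered coefficients shrink slightly toward zero, trading a little approximation error for a smoother, better-conditioned solution.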
Combinatorial optimization problems
• If f(x) or the g_i(x) cannot be given analytically (in closed form), we have a combinatorial problem.
• For example, if x takes n discrete values (e.g., integers), and if there are K variables, the number of all possible solutions will be n^K.
• It is difficult to check all possible solutions in order to find the best one(s).
• In such cases, meta-heuristics can provide efficient ways of obtaining good solutions using limited resources (e.g., time and memory space).
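The n^K growth is easy to see by enumeration. The sketch below exhaustively scores every candidate for small n and K; the objective function is a made-up example. Adding one more variable multiplies the search space by n again, which is why exhaustive search quickly becomes infeasible.

```python
# Exhaustive search over K variables that each take n discrete values:
# the search space has exactly n**K candidate solutions.
from itertools import product

n, K = 4, 5                                   # small enough to enumerate
values = range(n)

def score(candidate):                         # hypothetical objective to maximize
    return sum(v * v for v in candidate)

candidates = list(product(values, repeat=K))
best = max(candidates, key=score)
print(len(candidates), best)    # 1024 candidates; best is (3, 3, 3, 3, 3)
```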
Example 3: Traveling salesman problem (TSP)
• Given n users located in n different places (cities).
• The problem is to find a route so that the salesman can visit each user once (and only once), starting from and returning to his own place (i.e., finding a Hamiltonian cycle).

(Figure from Wikipedia)
Example 3: Traveling salesman problem (TSP)
• In TSP, we have a route map, which can be represented by a graph.
• Each node is a user, and the edge between each pair of nodes has a cost (distance or time).
• The evaluation function to be minimized is the total cost of the route.
• For TSP, the number of all possible solutions is n!; this is a well-known NP-hard combinatorial problem.

(Figure from Wikipedia)
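For a toy instance, the route orderings can still be enumerated directly. The sketch below fixes the start city, which leaves (n−1)! orderings to check, and uses a made-up symmetric distance matrix.

```python
# Brute-force TSP: fixing the start city 0, enumerate the (n-1)! orderings of
# the remaining cities and keep the cheapest Hamiltonian cycle.
from itertools import permutations

dist = [            # made-up symmetric distance matrix for 4 cities
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)

def tour_cost(order):
    # cost of the cycle 0 -> order[0] -> ... -> order[-1] -> 0
    cost, prev = 0, 0
    for city in order:
        cost += dist[prev][city]
        prev = city
    return cost + dist[prev][0]

best = min(permutations(range(1, n)), key=tour_cost)
print(best, tour_cost(best))   # cheapest cycle through all four cities
```

Already at n = 13 this approach would need to examine 12! ≈ 4.8 * 10^8 orderings, which is why heuristics take over for realistic sizes.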
NP-hard and NP-complete
• Problems that can be solved by a deterministic algorithm in polynomial time are called class P.
• NP is the class of decision problems that can be solved by a non-deterministic algorithm in polynomial time.
• A problem H is NP-hard if it is at least as hard as any problem in NP.
• An NP-hard decision problem that is itself in NP is NP-complete.
• NP-complete and NP-hard problems can be solved more efficiently (though usually only approximately) if we use meta-heuristics.
(Diagram: relationship among the classes P, NP, NP-complete, and NP-hard)
Example 4: The Knapsack problem
• The knapsack problem is another NP-hard problem, defined as follows:
- There are n objects;
- Each object has a weight and a value;
- The knapsack has a capacity;
- The user has a quota (minimum desired value).
• The problem is to find a sub-set of the objects that can be put into the knapsack and maximizes the total value.
Example 4: The Knapsack problem

  KNAPSACK (in  OS : set of objects; QUOTA : number; CAPACITY : number;
            out S : set of objects; FOUND : boolean)
  begin
    S := empty;
    total_value := 0; total_weight := 0; FOUND := false;
    pick an order L over the objects;
    loop
      choose an object O in L;
      add O to S;
      total_value := total_value + O.value;
      total_weight := total_weight + O.weight;
      if total_weight > CAPACITY then
        fail
      else if total_value >= QUOTA then
        FOUND := true; succeed;
      end
      delete all objects up to O from L;
    end
  end

This is a non-deterministic algorithm. Each time we run the program, we may get a different answer. By chance, we may get the best answer.
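A rough Python rendition of the procedure above replaces the non-deterministic "choose" with a randomly shuffled order, so different runs may return different subsets. One deliberate deviation: instead of failing when an object would overflow the knapsack, this version simply skips it. The object data, quota, and capacity are made-up example values.

```python
# Randomized sketch of the slide's non-deterministic KNAPSACK procedure.
import random

def knapsack_run(objects, quota, capacity, rng):
    order = list(objects)
    rng.shuffle(order)                 # "pick an order L over the objects"
    s, value, weight = [], 0, 0
    for obj in order:                  # "choose an object O in L; add O to S"
        if weight + obj["weight"] > capacity:
            continue                   # skip objects that would overflow
        s.append(obj)
        value += obj["value"]
        weight += obj["weight"]
        if value >= quota:
            return s, True             # quota reached: succeed
    return s, False                    # quota not reached on this run

objects = [                            # hypothetical objects
    {"weight": 3, "value": 4}, {"weight": 5, "value": 7},
    {"weight": 2, "value": 3}, {"weight": 4, "value": 5},
]
rng = random.Random(0)                 # fixed seed for reproducibility
subset, found = knapsack_run(objects, quota=10, capacity=9, rng=rng)
print(found, [o["value"] for o in subset])
```

Running with different seeds yields different feasible subsets, which illustrates why a lucky run of a non-deterministic algorithm can hit the best answer.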
Example 5: Learning problems
• Many optimization problems related to machine learning (learning from a given set of training data) are NP-hard or NP-complete.
• Examples include:
- Finding the smallest feature sub-set;
- Finding the most informative training data set;
- Finding the smallest decision tree;
- Finding the best clusters;
- Finding the best neural network;
- Interpreting a learned neural network.
Homework
• Find some other examples of optimization problems (at least two) on the Internet.
• State whether each problem is NP-hard, NP-complete, in NP, or in P.
• Provide a solution (not necessarily the best one) for each problem.
• Summarize your answers in a PDF file, and submit a printed copy before next week's class.