
Global Optimization

Project 3: Mode-Pursuing Sampling Method (MPS)

Miguel Díaz-Rodríguez

July 14, 2014

1 Objective

Thoroughly study the Mode-Pursuing Sampling Method.

2 Mode-Pursuing Sampling Method

Engineering design searches for the global optimum of a function with respect to the design variables. Global optimization algorithms such as simulated annealing (SA) or genetic algorithms (GA) can be used to find the optimum. However, engineering design problems often rely on computationally expensive objective functions, for instance, a design project involving a finite element model (FEM) solution. SA and GA spend a huge amount of time finding the global minimum for this kind of problem because these algorithms are based on intensive evaluation of the objective function. Metamodeling approaches have become popular for dealing with such problems. Sampling data from the objective function allows fitting a surrogate model, which is computationally less expensive than the original model; the optimization is then performed over the surrogate model. Thus, metamodeling approaches require an accurate surrogate model such that its global optimum matches the optimal solution of the original model. One such method is the Mode-Pursuing Sampling method (MPS), which is the algorithm discussed in this report.

MPS is an algorithm relying on discriminative sampling that provides an intelligent mechanism to use the information from past iterations in order to lead the search toward the global optimum. The method applies to continuous variables [7] and also to discrete variables [6]. The basics of the algorithm are summarized in Figure 1.

3 Conceptual Illustration of the MPS

To aid understanding of the MPS, each step of the algorithm is illustrated by solving the function presented in [5]. That is,

f(x) = 2x_1^3 - 32x_1 + 1    (1)

where the minimum is located at f(2.31) = -48.3.
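As a quick check of this example, a minimal Matlab sketch of the test function can be used to verify the minimum quoted above; the function handle name below is purely illustrative and is not part of the MPS code.

% Illustrative test function f(x) = 2*x1^3 - 32*x1 + 1 on [-5, 3]
f = @(x) 2*x.^3 - 32*x + 1;

xg = linspace(-5, 3, 10001);   % dense grid over the search interval
[fmin, idx] = min(f(xg));      % brute-force check of the minimum
fprintf('min f = %.3f at x = %.3f\n', fmin, xg(idx));   % approx. -48.3 at x = 2.31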

Figure 2 shows f(x) within the interval [-5, 3]. The initial random sample contains [(n + 1)(n + 2)/2 + 1 - n] = 3 points, n being the dimension of x. The function f(x) is calculated at each sample point; these are called expensive points. Figure 2 also plots the initial sample points (red dots). The initial samples allow fitting a linear spline interpolating function. Then, n0 = 10^4 uniformly distributed points are generated and evaluated on the spline function; these are called cheap points. Figure 3 shows the objective function and the sample points from the linear spline function. In order to pursue the mode of the function, \hat{f}(x) is sorted from its minimum value to its maximum value. The function values can be arranged in K contours {K_1, K_2, ..., K_n}, where the first


 

[Flowchart summary: Start -> initial random sampling of [(n+1)(n+2)/2+1-np] points -> mode-pursuing sampling of np points -> find [(n+1)(n+2)/2+1] points around the current mode and obtain the sub-region -> perform quadratic fitting in the sub-region -> if |1-R^2| < R, randomly generate [n/2] points, refit the function in the sub-region, and add the points to the point set -> if |1-R^2_new| < R and the maximum difference is below the threshold d, perform local optimization -> if the minimum lies in the sub-region, exit; otherwise obtain its real function value, add it to the point set, update the speed control factor r, and iterate.]

Figure 1: Flowchart of the Mode-Pursuing Sampling Method [7]


Figure 2: Objective function for the illustrative example, blue line f(x), red dots represent sampling points (expensive points).



Figure 3: Example case 1, blue line f(x), red dots represent cheap points (\hat{f}(x) interpolation function).

contour contains the lower values of the objective function and the contour K_n the maximum values. The function \hat{f}(x) is used to find the cumulative distribution function by computing the cumulative sum of g(x) = c_0 - \hat{f}(x), where c_0 = max(\hat{f}(x)), i.e. G(x) = cumsum(g(x)). The probability of selecting points near the function mode can be increased by modifying G(x) using G(x)^{1/b}.

Figure 4 shows the curves representing \hat{f}(x), g(x) (for the case N/K = 1, which means that only one contour E_1 is considered), and the cumulative distribution function G(x). The figure shows that, because the G curve is relatively flat between points 7000 and 10000, these points have lower chances of being selected for further sampling than the other sample points. However, points in that area always have probabilities larger than zero. In order to better control the sampling process, [4] introduced a speed control factor. Figure 4 also shows the speed-controlled G function. It can be noted that samples from 5000 to 10000 have lower chances of being selected for further sampling than the others. Thus, the probability of further sampling near the minimum value is increased by the speed-controlled G function. The MPS progresses by generating more sample points around the current minimum point, increasing the chances of finding a better minimum. In order to find the global minimum, a quadratic model is fitted to some of the expensive points lying in a sufficiently small neighbourhood around the current minimum. The quadratic model can be expressed as follows,

y = \beta_0 + \sum_{i=1}^{n} \beta_i x_i + \sum_{i=1}^{n} \beta_{ii} x_i^2 + \sum_{i<j} \sum_{j=1}^{n} \beta_{ij} x_i x_j    (2)

where \beta_i, \beta_{ii}, \beta_{ij} stand for the regression coefficients, x is the vector of design variables, and y is the response. The above equation is the standard equation for response surface methodology (RSM) [5]. The model's goodness of fit can be assessed by the R^2 coefficient. That is,

R^2 = \frac{\sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}    (3)

where \hat{y}_i are the fitted function values, y_i are the function values at the sample points, and \bar{y} is the


Figure 4: Example of MPS for function F1: sorted \hat{f}(x), g(x), cumulative distribution G(x), and speed-controlled G(x) plotted against the sample points.

mean of the y_i. In general, R^2 takes values in [0, 1]; the closer the R^2 value is to 1, the better the modeling accuracy. The quadratic model is fitted considering the [(n + 1)(n + 2)/2 + 1] expensive points of the sub-region, and the R^2 value is computed. If 1 - R^2 < R, where R is a user-defined threshold value, then n/2 additional expensive points are generated within the sub-region. After that, the model is fitted again using all points in the sub-region and a new R^2_{new} is computed. Finally, if 1 - R^2_{new} < R, local optimization is performed to find the optimum x*. The process ends when x* lies in the sub-region; if not, this point is added to the original set of expensive points, and the process continues to the next iteration. Figure 5 shows the quadratic model fitted around the neighborhood points for the first iteration, while Figure 6 shows the second and third iterations. Figure 6(c) presents a detailed view of the fitted model, showing that the quadratic model matches the objective function in the neighborhood of the solution (R^2 > 0.999).
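For the one-dimensional example above, the quadratic fit of Eq. (2) and the R^2 check of Eq. (3) can be sketched in a few lines of Matlab. This is only an illustrative sketch, not the report's implementation; the sample points xs are arbitrary values placed near the current mode.

% Minimal sketch (1-D case): least-squares fit of the quadratic model of
% Eq. (2) to a few expensive points and computation of R^2 as in Eq. (3).
xs = [1.8; 2.0; 2.3; 2.6];            % illustrative expensive points near the mode
ys = 2*xs.^3 - 32*xs + 1;             % their true (expensive) function values

X    = [ones(size(xs)), xs, xs.^2];   % regression matrix [1, x, x^2]
beta = X \ ys;                        % coefficients beta_0, beta_1, beta_11
yhat = X*beta;                        % fitted values

R2 = sum((yhat - mean(ys)).^2) / sum((ys - mean(ys)).^2);   % Eq. (3)

Because the quadratic can only match the cubic locally, R^2 stays close to 1 only when the fitted points lie in a small sub-region around the current mode, which is exactly why MPS restricts the fit to that sub-region.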

4 Implementation of the MPS in Matlab

The MPS was implemented in Matlab. A brief description of the method, explaining the details relevant to its implementation, is presented below.

Step 1: Generate m initial points x_1, x_2, ..., x_m randomly on S(f). This number of points is usually small. The objective function is computed at these m points; they are called expensive points (in terms of computational burden).

%STEP 1
% initial sampling



Figure 5: Example case 1, blue line f(x), red line represents the second-order polynomial, black dots represent the expensive sampling points.

y = sampling(nv,nInit,xlv,xuv);

% call the objective function and calculate the objective function's value
fy = objfun(y);

function [x] = sampling(nv,ns,lb,ub)
% Generate ns uniformly distributed points for nv variables within [lb, ub].
sample = 'y';
x = [];
while sample == 'y'
    y = rand(nv,ns);
    for i = 1:ns
        % scale each unit-hypercube sample to the design-variable bounds
        y(:,i) = (y(:,i)'.*(ub-lb)+lb)';
        x = [x, y(:,i)];
        [m,n] = size(x);
        if (n == ns)
            sample = 'n';
            break;
        end
    end
end

Step 2: Use the m function values f(x_1), f(x_2), ..., f(x_m) to fit a linear spline function.

\hat{f}(x) = \sum_{i=1}^{m} \alpha_i \, ||x - x_i||    (4)

such that \hat{f}(x_i) = f(x_i), i = 1, 2, ..., m, where ||·|| stands for the Euclidean norm.

%STEP 2
A = zeros(nupdate);
for i = 1:nupdate
    % f(x) = a1|x-x1| + a2|x-x2| + ... + an|x-xn|, so that
    % [a1, a2, ..., an]*A = [f(x1), f(x2), ..., f(xn)]
    A(i,:) = sum(abs(repmat(y(:,i),1,nupdate)-y),1);



Figure 6: Evolution of the fitted model in the application of MPS for the study case: (a) second iteration; (b) third iteration; (c) zoom close to the minimum (x = 2.319, f(x) = -48.267) for the third iteration.

end
coef = A\fy';

Step 3: Define g(x) = c_0 - \hat{f}(x), where c_0 is any constant such that c_0 >= \hat{f}(x) for all x in S(f). Since g(x) is nonnegative on S(f), it can be viewed as a PDF, up to a normalizing constant, whose modes are located at those x_i's where the function values are the lowest among the \hat{f}(x_i). Then, the sampling algorithm provided by [4] can be applied to generate a random sample x_{m+1}, x_{m+2}, ..., x_{2m} from S(f) according to g(x). These sample points have the tendency to concentrate about the current minimum x* of f(x).

% sampling n0 points
x = sampling(nv,n0,xlv,xuv);
fx = zeros(n0,1);
for i = 1:n0
    % evaluate the n0 cheap points on the linear spline metamodel
    fx(i) = sum(abs(repmat(x(:,i),1,nupdate)-y),1)*coef;
end


The function must be positive, \hat{f}(x) > 0, in order to build the g(x) function.

if (min(fx) < 0)
    fx = fx - min(fx);   % shift so that the function is positive
end

Then, the PDF g(x) is normalized.

[fx, id] = sort(fx);
x = x(:,id);
gx = max(fx) - fx;          % g(x) = c0 - f^(x), with c0 = max(f^)
gx = cumsum(gx)/sum(gx);    % normalized cumulative distribution G(x)
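The speed control factor G(x)^{1/b} described in Section 3 is not part of the listing above; a hedged sketch of how it could be applied to the normalized cumulative distribution is shown below, where the exponent b is a tuning parameter and the value 2 is only an example.

% Optional speed control (see Section 3): raising G(x) to the power 1/b
% increases the probability of selecting points near the function mode.
b  = 2;            % example value for the speed control exponent
gx = gx.^(1/b);    % speed-controlled G(x)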

A new set of points approaching the global minimum is then generated.

u = rand(nv,1);
nu = zeros(k,1);
for i = 1:(k-1)
    id = find(u < fx(i));
    nu(i) = length(id);   % nu is the number of points to be picked at each contour
    u(id) = [];
end

x1 = [];
for i = 1:k
    id = (i-1)*nk + ceil(nk*rand(nu(i),1));
    x1 = [x1 x(:,id)];
end
% Find the real function values at those selected sample points
y  = [x1 y];           % add selected samples x1 into y
fy = [objfun(x1) fy];  % call objective function and calculate the new objective function values

Step 4: Combine the sample points obtained in Step 3 with the initial points from Step 1 to form the set x_1, x_2, ..., x_{2m}, and repeat Steps 2 to 3 until a stopping criterion is satisfied.

if (singular == 'n')
    % calculate the (n+1) R-square
    [rsquare, b] = rsquarefunc(xx,fkk');
    fid = fopen('b.dat','w');   % save coefficient matrix
    fprintf(fid,'%6.4f\t', b);
    fclose(fid);
    if (rsquare >= 0.98)
        % randomly produce round(nv/2) samples in the space [min(ykk) max(ykk)]
        % to check the fitting of a second-order function
        y11 = sampling(nv,round(nv/2),min(ykk,[],2)',max(ykk,[],2)');
        f11 = objfun2(y11);
        ykk = [ykk y11];
        fkk = [fkk f11];
        y  = [y y11];    % in order to count the number of evaluations
        fy = [fy f11];   % in order to count the number of evaluations
        xx = coefmatrix(ykk);
        rsquare = rsquarefunc(xx,fkk');   % updated R-square
        fit = fittingfunc(ykk);
        % obj = objfun(ykk);
        % maxdiff = max(abs(obj-fit));
        maxdiff = max(abs(fkk-fit));
        if (rsquare >= 0.99 & (maxdiff <= diff))
            % calculate minimum point and minimum function value
            options = optimset('LargeScale','off');
            [x,fval] = fmincon('fittingfunc',ykk(:,1),[],[],[],[],xlv,xuv,[],options);
            f = objfun2(x);
            y = [x y];
            fy = [f fy];
            % check if the minimum point lies in the fitting area
            if (min(ykk,[],2) <= x & x <= max(ykk,[],2))
                dialog = 'n';
            end
        end
    end
end

5 Experimental Results

5.1 Unconstrained optimization

The performance of the MPS is evaluated by solving the unconstrained optimization problems presented in Table 1. The first eight test functions are taken from [2], test functions 9 and 11 from [5], and test function 10 from [7]. Because the MPS loses performance for n > 10 (n stands for the number of variables), functions 1-10 are optimized for 2 variables.

Table 1: Test objective functions, unconstrained optimization problems, unimodal and multimodal functions.

Name | Function | Limits
F1  | f_1 = \sum_{i=1}^{2} x_i^2 | -5.12 ≤ x_i ≤ 5.12
F2  | f_2 = 100(x_1^2 - x_2)^2 + (1 - x_1)^2 | -2.048 ≤ x_i ≤ 2.048
F3  | f_3 = \sum_{i=1}^{2} int(x_i) | -5.12 ≤ x_i ≤ 5.12
F4  | f_4 = \sum_{i=1}^{2} i x_i^4 + Gauss(0,1) | -1.28 ≤ x_i ≤ 1.28
F5  | f_5 = 0.002 + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} | -655.36 ≤ x_i ≤ 655.35
F6  | f_6 = 10V + \sum_{i=1}^{2} \left(-x_i \sin\sqrt{|x_i|}\right), V = 4189.829101 | -500 ≤ x_i ≤ 500
F7  | f_7 = 20A + \sum_{i=1}^{2} \left(x_i^2 - 10\cos(2\pi x_i)\right), A = 10 | -5.12 ≤ x_i ≤ 5.12
F8  | f_8 = 1 + \sum_{i=1}^{2} \frac{x_i^2}{4000} - \prod_{i=1}^{2} \cos\left(\frac{x_i}{\sqrt{i}}\right) | -500 ≤ x_i ≤ 500
F9  | f_9 = [1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)] \cdot [30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)] | -2 ≤ x_i ≤ 2
F10 | f_{10} = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1x_2 - 4x_2^2 + 4x_2^4 | -2 ≤ x_i ≤ 2
F11 | f_{11} = 2x_1^3 - 32x_1 + 1 | -5 ≤ x_i ≤ 3

Each function is optimized 30 times. The algorithm stops when a global minimum is found or when 200 iterations have been reached. The global minimum is taken to be a point that lies in the fitting interval with an R^2 of 0.999 and with a maximal difference between the function values of less than 0.01. The performance is measured as the percentage of runs in which the algorithm finds the optimal solution. The solution is considered optimal when the difference between the MPS solution and the actual optimal function value is less than 0.01 (normalized with respect to the boundaries of the search space). Figure 7 shows the results for the unconstrained test functions. Overall, the



Figure 7: Performance index of the MPS when solving the unconstrained test functions F1-F11. Results correspond to a total of 30 runs.

MPS found the optimal solution, but it failed for F3, which is a function that contains many flat surfaces. In half of the cases the MPS shows a performance above 40%, but most of the time it fails when searching for the global minimum. The performance of the algorithm could be improved by increasing the number of iterations, since over time the algorithm is guaranteed to converge to the global minimum [7]. Also, the number of contours could be increased to study whether this improves the performance or not. Figures 8 and 9 show the sample points generated by the MPS method for the test functions F1-F10. The red dot (*) locates the minimum value provided by the MPS. Overall, the MPS generates points close to the optimal point.

5.2 Constrained optimization

5.2.1 Case Study 1:

In order to evaluate the performance of the MPS on constrained optimization problems, two more cases are presented. The first problem is taken from [3], and can be written as follows:

min  -x_1 - x_2

subject to

x_2 ≤ 2x_1^4 - 8x_1^3 + 8x_1^2 + 2
x_2 ≤ 4x_1^4 - 32x_1^3 + 88x_1^2 - 96x_1 + 36
0 ≤ x_1 ≤ 3,   0 ≤ x_2 ≤ 5        (5)

The above problem was solved using the MPS algorithm. A total of 30 runs were performed. The MPS was able to find the global minimum with a 96% success rate: x_1 = 2.3295, x_2 = 3.1785, and f(x*) = -5.508. In Figure 10 the blue dots represent the sampling points generated by the MPS, the red line represents the first constraint function, and the black line the second constraint function. The figure also shows how the sampling points are generated and how those points concentrate toward the optimum. The global minimum lies at the constraint boundaries. The number of function evaluations performed by the MPS was 17, which is less than the 21 evaluations required by the fmincon function of Matlab.
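As a cross-check of that comparison, problem (5) can also be posed directly to Matlab's fmincon. The sketch below is only a verification aid, not part of the MPS code; the starting point x0 is arbitrary, and fmincon may need a reasonable starting point to reach the reported global optimum.

% Verification sketch: solve problem (5) with fmincon (not part of the MPS code)
obj  = @(x) -x(1) - x(2);                                              % objective of problem (5)
nonl = @(x) deal([x(2) - (2*x(1)^4 -  8*x(1)^3 +  8*x(1)^2 + 2); ...   % c(x) <= 0
                  x(2) - (4*x(1)^4 - 32*x(1)^3 + 88*x(1)^2 - 96*x(1) + 36)], []);
x0 = [2; 3];                                                           % arbitrary starting point
[xs, fs] = fmincon(obj, x0, [], [], [], [], [0; 0], [3; 5], nonl);
% reported optimum: xs approx. [2.3295; 3.1785], fs approx. -5.508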



Figure 8: Sampling points generated when optimizing functions F1-F8: (a) F1; (b) F2; (c) F3; (d) F4; (e) F5; (f) F6; (g) F7; and (h) F8.



Figure 9: Sampling points generated when optimizing functions F9-F10: (a) F9; (b) F10.


Figure 10: Constrained optimization test problem 1: blue dots stand for objective function evaluations, red line for constraint function 1, and black line for constraint function 2.

5.2.2 Case Study 2:

The second constrained optimization problem, taken from [1], consists of the design of a two-member frame subjected to an out-of-plane load P, as shown in Figure 11. Besides L = 100 inches, there are three design variables: the frame width (d), height (h), and wall thickness (t), with the following ranges of interest: 2.5 ≤ d ≤ 10, 2.5 ≤ h ≤ 10, and 0.1 ≤ t ≤ 1.0. The objective is to minimize the volume of the frame subject to stress constraints and size limitations.

The optimization problem can be written as:


 


Figure 11: Constrained optimization test problem 2: A two-member frame.

min  V = 2L(2dt + 2ht - 4t^2)

subject to

(\sigma_1^2 + 3\tau^2)^{1/2} ≤ 40000
(\sigma_2^2 + 3\tau^2)^{1/2} ≤ 40000
2.5 ≤ d ≤ 10,   2.5 ≤ h ≤ 10,   0.1 ≤ t ≤ 1.0        (6)

In the above equations, \sigma_1 and \sigma_2 are the bending stresses at point (1) (also point (3)) and at point (2), respectively, and \tau is the torsional stress of each member. They are defined by the following equations:

\sigma_1 = M_1 h / (2I)    (7)
\sigma_2 = M_2 h / (2I)    (8)
\tau = T / (2At)    (9)

where,

M_1 = 2EI(-3U_1 + U_2 L) / L^2    (10)
M_2 = 2EI(-3U_1 + 2U_2 L) / L^2    (11)
T = -GJU_3 / L    (12)
I = (1/12) [ dh^3 - (d - 2t)(h - 2t)^3 ]    (13)
J = 2t (d - t)^2 (h - t)^2 / (d + h - 2t)    (14)
A = (d - t)(h - t)    (15)

Given the constants E = 3.0E7, G = 1.154E7, and the load P = -10000, the displacements U_1 (vertical displacement at point (2)), U_2 (rotation about line (3)-(2)), and U_3 (rotation about line (1)-(2)) are calculated using the finite element method and are given by:


\frac{EI}{L^3}
\begin{bmatrix}
24 & -6L & 6L \\
-6L & 4L^2 + \frac{GJ}{EI}L^2 & 0 \\
6L & 0 & 4L^2 + \frac{GJ}{EI}L^2
\end{bmatrix}
\begin{Bmatrix} U_1 \\ U_2 \\ U_3 \end{Bmatrix}
=
\begin{Bmatrix} P \\ 0 \\ 0 \end{Bmatrix}    (16)
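Under the formulas above, a minimal Matlab sketch of evaluating the volume and the two stress constraints for a candidate design (d, h, t) could look as follows. The function name frame_response is only illustrative, and how the stress constraints are folded into the MPS sampling (for example through a penalty on the objective) is not detailed in the report, so the sketch only evaluates them.

function [V, g] = frame_response(d, h, t)
% Illustrative sketch: volume and stress constraints of the two-member frame, Eqs. (6)-(16).
E = 3.0e7;  G = 1.154e7;  P = -10000;  L = 100;

I = (1/12)*(d*h^3 - (d-2*t)*(h-2*t)^3);             % Eq. (13)
J = 2*t*(d-t)^2*(h-t)^2/(d+h-2*t);                  % Eq. (14)
A = (d-t)*(h-t);                                    % Eq. (15)

% displacements from the stiffness relation of Eq. (16)
K = (E*I/L^3)*[ 24,   -6*L,                    6*L;
               -6*L,   4*L^2 + G*J*L^2/(E*I),  0;
                6*L,   0,                      4*L^2 + G*J*L^2/(E*I)];
U = K \ [P; 0; 0];

M1 = 2*E*I*(-3*U(1) +   U(2)*L)/L^2;                % Eq. (10)
M2 = 2*E*I*(-3*U(1) + 2*U(2)*L)/L^2;                % Eq. (11)
T  = -G*J*U(3)/L;                                   % Eq. (12)

sigma1 = M1*h/(2*I);                                % Eq. (7)
sigma2 = M2*h/(2*I);                                % Eq. (8)
tau    = T/(2*A*t);                                 % Eq. (9)

V = 2*L*(2*d*t + 2*h*t - 4*t^2);                    % volume of the frame, Eq. (6)
g = [sqrt(sigma1^2 + 3*tau^2) - 40000;              % stress constraints written as g <= 0
     sqrt(sigma2^2 + 3*tau^2) - 40000];
end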

The optimal solution for this problem is located at d* = 7.798, h* = 10.00, and t* = 0.1, with V = 703.916. The problem was solved using the MPS. A total of 30 runs were conducted. The optimum obtained in each run was practically the same as the analytical optimum. The MPS performed 19 evaluations of the objective function to achieve the global minimum. This number of evaluations is less than that obtained using the fmincon function of Matlab (38).

6 Conclusions

In this report the Mode-Pursuing Sampling optimization method was studied. The performance of the method was evaluated by solving unconstrained and constrained optimization problems. All the simulations were performed in Matlab, and a description of the implementation was presented. The implementation of the MPS requires the tuning of only one parameter, the difference coefficient, which is recommended to be set to 0.01. For most of the test functions, the MPS was able to identify the global optimum. The MPS method can deal with problems that have computationally expensive objective functions, such as a FEM solution. This was shown when comparing the number of function evaluations needed to solve the constrained problems using MPS and the fmincon function of Matlab. Since the number of points generated when evaluating the surrogate model is high (ten thousand points), the MPS could suffer from a lack of computer memory. Thus, as the authors of the method have indicated, MPS is well suited for problems with fewer than 10 design variables. Further research could address the implementation of the MPS in the design of a parallel manipulator with maximum workspace. On the one hand, finding the workspace leads to a computationally expensive objective function that additionally has constraints. On the other hand, the number of design parameters in this kind of problem is less than 10.

References

[1] Arora, J. S. (1989). Introduction to Optimum Design. McGraw-Hill Higher Education, New York, NY, USA.

[2] Digalakis, J. and Margaritis, K. (2000). An experimental study of benchmarking functions for genetic algorithms. Systems, Man, and Cybernetics, 2000 IEEE International Conference on, 5:3810-3815.

[3] Floudas, C. A. and Pardalos, P. M. (1990). A Collection of Test Problems for Constrained Global Optimization Algorithms. Springer-Verlag New York, Inc., New York, NY, USA.

[4] Fu, J. C. and Wang, L. (2002). A random-discretization based Monte Carlo sampling method and its applications. Methodology and Computing in Applied Probability, 4(1):5-25.

[5] Gary, W., Dong, Z., and Aitchison, P. (2001). Adaptive response surface method - a global optimization scheme for approximation-based design problems. Engineering Optimization, 33(6):707-733.

[6] Sharif, B., Wang, G. G., and ElMekkawy, T. Y. (2008). Mode pursuing sampling method for discrete variable optimization on expensive black-box functions. Journal of Mechanical Design, 130(2):021402-1-021402-11.

[7] Wang, L., Shan, S., and Wang, G. G. (2004). Mode-pursuing sampling method for global optimization on expensive black-box functions. Engineering Optimization, 36(4):419-438.
