9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, 4-6 September 2002, Atlanta, Georgia

AIAA 2002-5557

Copyright © 2002 by the author(s). Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.

An SQP adapted Simple Decomposition for Engineering Design

Mehdi Lachiheb and Hichem Smaoui

Unité de Recherche de Mécanique Appliquée

École Polytechnique de Tunisie

e-mail: [email protected]

[email protected]

Abstract: The paper presents a large-scale nonlinear programming algorithm that integrates Sacher’s simple decomposition into the iterative context of the sequential quadratic programming method. Furthermore, analysis of the evolution of the optimum set of extreme points over the sequence of quadratic programming problems led to the development of a procedure for initiating the decomposition with a whole set of extreme points. This set is determined at the start of each new iteration, based on the results of the preceding iteration, bypassing the solution of many master problems. The algorithm is tested on example structural optimization problems. Results indicate a typically small number of vectors in the set of extreme points.

1 Introduction

The sequential quadratic programming method (SQP) [1-6] was developed by Biggs, Han and Powell for solving nonlinear optimization problems. The solution of a general nonlinear programming problem

(P):
\[
\begin{aligned}
\min\;\; & f(x) \\
& g_i(x) \le 0, \quad i \in I \\
& h_i(x) = 0, \quad i \in L \\
& x \in \mathbb{R}^n
\end{aligned}
\]

where I = {1, 2, ..., m} and L = {1, 2, ..., l}, is carried out by iteratively solving a sequence of quadratic programming (QP) problems of the form

(PQ_k):
\[
\begin{aligned}
\min\;\; & Q(d) = \tfrac{1}{2}\, d^{t} B_k\, d + d^{t} \nabla f(x^k) \\
& \nabla g_i(x^k)^{t} d + g_i(x^k) \le 0, \quad i \in I \\
& \nabla h_i(x^k)^{t} d + h_i(x^k) = 0, \quad i \in L \\
& d \in \mathbb{R}^n
\end{aligned}
\]

Clearly, the quadratic problems (PQ_k) are approximations of the original problem (P). The sequence of quadratic problems usually converges within a small number of iterations that tends to be independent of the size of the problem. This tendency is similarly exhibited by the sequential approximation methods largely used in structural optimization. Whereas special purpose sequential approximation methods rely on specific high quality approximation functions, an advantage of the SQP method is its generality with respect to the form of the problem functions. The nonlinearity of the constraints, which is apparently missed by the linearization, is actually well accounted for in problem PQ_k via the quadratic term of the objective function, in which B_k approximates the Hessian of the Lagrangian function.
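To fix ideas, the outer SQP loop can be sketched as follows. This is only a schematic illustration in Python: the routine qp_solver stands for any method that solves (PQ_k), here the decomposition method introduced below, and the line search (or merit function) and quasi-Newton update of B_k that a practical SQP code requires [1-6] are indicated only by comments.

```python
import numpy as np

def sqp(x0, grad_f, g, jac_g, h, jac_h, qp_solver, tol=1e-5, max_iter=50):
    """Schematic SQP loop: at the current point x^k, build and solve the
    quadratic subproblem (PQ_k) for a step d, and stop when ||d|| < tol.
    qp_solver(B, grad, G, gvals, H, hvals) must return the minimizer d of
        0.5 d^T B d + d^T grad
        s.t. G d + gvals <= 0,  H d + hvals = 0."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                 # initial approximation of the Lagrangian Hessian
    for _ in range(max_iter):
        d = qp_solver(B, grad_f(x), jac_g(x), g(x), jac_h(x), h(x))
        if np.linalg.norm(d) < tol:    # stopping test used in the examples of Section 4
            break
        x = x + d                      # a line search / merit function is omitted here
        # B would normally be updated by a quasi-Newton (BFGS-type) formula.
    return x
```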


Being interested in tackling large-scale problems, we use Sacher’s simple decomposition [7,8] for solving the quadratic programming problems, as suggested in [9]. Sacher’s decomposition consists in transforming the original quadratic programming problem PQ_k, whose variables form the space vector, into a problem whose variables are the coefficients of the convex combinations expressing the space vector in terms of the extreme points of the feasible set. Solving a QP problem is then achieved via the iterative solution of two problems: a master problem and a subproblem. The latter has the advantage of being a linear programming problem, and the former has a single constraint and far fewer variables than problem (P), namely fewer than n + 1 − m′, where m′ is the rank of the Jacobian of the active constraints, including equalities.

The SQP algorithm combined with the simple decomposition, denoted SQPD [9] in the following, has been applied to a significant number of analytical test problems, constructed essentially to exhibit specific features such as ill-conditioning of the objective function or a large set of extreme points (SEP). Results indicate good convergence of the sequence of quadratic problems and remarkable precision in the solution obtained by the decomposition method.

Moreover, examination of the evolution of the optimum set of extreme points (SEP) from one SQP iteration to the next led to the development of a procedure that aims at reducing the computational effort [10] devoted to the generation of intermediate extreme points. The underlying idea consists in initiating the decomposition process with a whole SEP instead of a single extreme point. The initial SEP is determined from the results of the preceding iteration of the SQP sequence without solving a series of master problems and subproblems. The algorithm is tested on example structural optimization problems. Results indicate a typically small number of vectors in the set of extreme points.

2 Sacher’s simple decomposition

Sacher’s simple decomposition applies to quadratic programming problems of the form:

(PQ):
\[
\begin{aligned}
\min\;\; & \tfrac{1}{2}\, x^{t} B x + c^{t} x \\
& A_1 x \ge b_1 \\
& A_2 x = b_2 \\
& x \ge 0
\end{aligned}
\]

where x = (x_1, x_2, ..., x_n) ∈ IR^n is the vector of variables, B is an n × n positive semi-definite matrix, A_1 and A_2 are respectively m_1 × n and m_2 × n matrices, and c, b_1 and b_2 are vectors of dimensions n, m_1 and m_2, respectively.

Let S = {x ∈ IR^n : A_1x ≥ b_1, A_2x = b_2 and x ≥ 0} be the feasible set for problem (PQ). S is a convex polytope; therefore there exist p extreme points x_1, x_2, x_3, ..., x_p (p ≥ 1) and q extreme rays d_1, d_2, d_3, ..., d_q (q ≥ 0) such that

\[
\forall\, x \in S,\ \exists\, u_1, \ldots, u_p,\ v_1, \ldots, v_q \in \mathbb{R}_+ \ \text{ such that }\
\sum_{i=1}^{p} u_i = 1 \ \text{ and }\
x = \sum_{i=1}^{p} u_i x_i + \sum_{j=1}^{q} v_j d_j .
\]

In matrix notation,
\[
x = Uu + Vv = Ww ,
\]
where W = (U, V) and w = (u, v)^t.

Substituting Ww for x in problem (PQ) gives rise to an equivalent problem (MP) defined by

(MP):
\[
\begin{aligned}
\min\;\; & \tfrac{1}{2}\, w^{t} Q w + s^{t} w \\
& \sum_{i=1}^{p} u_i = 1 \\
& w \ge 0
\end{aligned}
\]

where Q = W^t B W is a positive semi-definite matrix and s = W^t c is a (p + q)-vector.
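For illustration, the reduced data of (MP) can be assembled directly from the current extreme points and extreme rays. The small sketch below (Python/NumPy, with array names of our own choosing) forms W, Q = W^t B W and s = W^t c.

```python
import numpy as np

def master_data(B, c, U, V=None):
    """Assemble the data of the master problem (MP): W = (U, V),
    Q = W^T B W and s = W^T c, with extreme points as columns of U
    and extreme rays (if any) as columns of V."""
    W = U if V is None or V.size == 0 else np.hstack([U, V])
    Q = W.T @ B @ W      # (p+q) x (p+q), positive semi-definite whenever B is
    s = W.T @ c          # (p+q)-vector
    return W, Q, s
```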

2.1 Algorithm

Sacher’s simple decomposition algorithm can be summarized in the following steps [7]; a schematic implementation is sketched after the list.

• Step 1: Let U and V be two matrices made up columnwise of extreme points and extreme rays, respectively. U has at least one column, whereas V may be empty.

• Step 2: Solve the master problem (MP). If it is unbounded, problem (PQ) is also unbounded. Otherwise, let (ū, v̄) denote the solution of the master problem and let x̄ = Uū + Vv̄.

• Step 3: Solve the subproblem

(SP):
\[
\begin{aligned}
\min\;\; & h^{t} x \\
& A_1 x \ge b_1 \\
& A_2 x = b_2 \\
& x \ge 0
\end{aligned}
\]

where h = BUū + BVv̄ + c = Bx̄ + c. If the solution of (SP) is bounded, then it must coincide with an extreme point, which will be denoted x^k. Otherwise, let d^k be a feasible descent direction (h^t d^k < 0).

• Step 4: If (SP) is bounded and has a solution x^k such that h^t x̄ = h^t x^k, then x̄ is the solution of problem (PQ). Otherwise go to Step 5.

• Step 5: If there exists i such that ū_i = 0 (resp. v̄_i = 0), then eliminate the extreme point x_i (resp. the extreme ray d_i). If subproblem (SP) is bounded, replace U by (U, x^k); otherwise replace V by (V, d^k). Go to Step 1.
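The following sketch illustrates these steps in Python under simplifying assumptions of our own: the feasible set is taken to be bounded (so no extreme rays arise), the master problem is handed to a generic constrained solver (scipy.optimize.minimize with SLSQP) instead of the barrier method of Section 2.2, and the subproblem (SP) is solved with scipy.optimize.linprog. It is meant as an illustration of the flow of the algorithm, not as the implementation used in the paper.

```python
import numpy as np
from scipy.optimize import linprog, minimize

def simple_decomposition(B, c, A1, b1, A2, b2, x0, tol=1e-8, max_iter=100):
    """Illustrative loop for Sacher's simple decomposition applied to
        min 0.5 x^T B x + c^T x   s.t.  A1 x >= b1, A2 x = b2, x >= 0,
    assuming a bounded feasible set (hence no extreme rays) and a feasible
    extreme point x0 to start from."""
    U = np.asarray(x0, dtype=float).reshape(-1, 1)   # extreme points as columns
    for _ in range(max_iter):
        # Step 2: master problem over the convex weights u (sum u = 1, u >= 0),
        # solved here by a generic solver instead of the barrier method of 2.2.
        Q, s = U.T @ B @ U, U.T @ c
        p = U.shape[1]
        res = minimize(lambda u: 0.5 * u @ Q @ u + s @ u,
                       np.full(p, 1.0 / p),
                       jac=lambda u: Q @ u + s,
                       bounds=[(0.0, None)] * p,
                       constraints=[{'type': 'eq', 'fun': lambda u: u.sum() - 1.0}],
                       method='SLSQP')
        u = res.x
        x_bar = U @ u
        # Step 3: linear subproblem (SP) with cost h = B x_bar + c.
        h = B @ x_bar + c
        lp = linprog(h, A_ub=-A1, b_ub=-b1, A_eq=A2, b_eq=b2,
                     bounds=(0, None), method='highs')
        x_new = lp.x                                  # a new extreme point
        # Step 4: optimality test  h^t x_bar = h^t x_new  (up to a tolerance).
        if h @ x_bar <= h @ x_new + tol:
            return x_bar
        # Step 5: drop extreme points with zero weight, append the new one.
        U = np.hstack([U[:, u > tol], x_new.reshape(-1, 1)])
    return x_bar
```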

2.2 Solution of the master problem

The structure of the master problem makes it suitable for solution by a penalty method. When the feasible set of the original quadratic problem is bounded, the vector w in problem (MP) is made up solely of convex combination coefficients u_i of extreme points, satisfying ∑_{i=1}^{p} u_i = 1. The barrier function used in [7] is
\[
K(x, r) = -r \sum_{i=1}^{n_k} \log x_i ,
\]

where n_k is the current number of extreme points and extreme rays. The above function is not usable in general if the feasible set is unbounded. However, it is applicable under the assumption of positive definiteness of matrix B. In case the objective function of (MP) is not strictly convex, one can choose another penalty function K(x, r) defined by

\[
K(x, r) = -r \sum_{i=1}^{n_k} H(x_i)
\]
where
\[
H(x_i) =
\begin{cases}
\log x_i & \text{if } x_i \le 1 \\[4pt]
1 - \dfrac{1}{x_i} & \text{if } x_i \ge 1
\end{cases}
\]

which is continuous and differentiable over S. An advantage of this choice of penalty function is that the barrier function is strictly convex even when the objective function of (MP) is convex but not strictly so. This ensures uniqueness of the optimum for any value of r.
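As an illustration, the penalty term H and the barrier K defined above translate directly into code. The sketch below (Python/NumPy, with names of our own choosing) also records the property used in the text: the two branches of H meet at x_i = 1 with equal value and equal slope, so H is continuous and differentiable.

```python
import numpy as np

def H(t):
    """Penalty term of Section 2.2: log(t) for t <= 1 and 1 - 1/t for t >= 1.
    At t = 1 both branches give the value 0 and the derivative 1, so H is
    continuous and differentiable for t > 0 (interior points are assumed)."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= 1.0, np.log(t), 1.0 - 1.0 / t)

def K(w, r):
    """Barrier function K(w, r) = -r * sum_i H(w_i) appended to the (MP)
    objective; it is strictly convex in w, and r is driven towards zero."""
    return -r * np.sum(H(w))
```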

3 Approximation of the optimum SEP

In the following, the feasible set will be assumed to be bounded. Feasible solutions can, therefore, be expressed in terms of extreme points only.

The SQPD algorithm using the standard simple decomposition has been subjected to testing on various example problems. Examination of the variations of the extreme points through the SQP iterations has shown that, in some problems, the number of extreme points getting in and out of the SEP is very large. Considering that the generation of each extreme point requires the solution of a large LP problem in addition to that of the master problem, one clearly sees a potential for improvement in the overall computational effort if the number of extreme point generations could be reduced. On the other hand, it has been noted that, in most cases and especially at the tail of the SQP sequence, to each point in the optimum SEP of problem PQ_k is associated a point in the optimum SEP of problem PQ_{k+1} defined by the same columns of the coefficient matrix. In an attempt to construct an approximation of the (k+1)st optimum SEP, the following approach is considered. Let {x^i, i = 1, ..., n_k} be the optimum SEP for the kth iteration. For each point x^j we seek a corresponding extreme point of the feasible set S_{k+1}, characterized as the closest one to x^j. The new extreme point, denoted y^j, is sought as the solution of the problem:

(LP_kj):
\[
\begin{aligned}
\min\;\; & \sum_{i \in L} x_i \\
& x \in S_{k+1}
\end{aligned}
\]
where L = {i : |x^j_i| ≤ ε} and ε is a small nonnegative real number.

In some cases of ill-conditioning or degeneracy, the points y^j, j = 1, ..., n_k, are not necessarily affinely independent, which may cause the number of points in the SEP to exceed the limit n + 1 − m′ in subsequent steps of the decomposition procedure. Therefore, we select the largest affinely independent subset of these extreme points to form the initial group of extreme points for problem PQ_{k+1}.
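A possible realization of this warm start is sketched below in Python: each LP_kj is solved with scipy.optimize.linprog over the new feasible set, written in the standard form of Section 2 (data A1, b1, A2, b2), and the affinely independent subset is extracted by a simple greedy rank test. The routine names and the rank-based filter are our own illustration, not necessarily the procedure implemented in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def nearest_extreme_point(x_old, A1, b1, A2, b2, eps=1e-8):
    """Solve (LP_kj): minimize the sum of the components of x that were
    (near) zero in the old extreme point x_old, over the new feasible set
    S_{k+1} = {x : A1 x >= b1, A2 x = b2, x >= 0}."""
    cost = (np.abs(x_old) <= eps).astype(float)   # indicator of the index set L
    res = linprog(cost, A_ub=-A1, b_ub=-b1, A_eq=A2, b_eq=b2,
                  bounds=(0, None), method='highs')
    return res.x

def affinely_independent_subset(points):
    """Greedily keep points whose homogeneous vectors (x, 1) stay linearly
    independent; the retained points are then affinely independent."""
    kept = []
    for x in points:
        candidate = np.append(x, 1.0)
        trial = np.column_stack(kept + [candidate])
        if np.linalg.matrix_rank(trial) == trial.shape[1]:
            kept.append(candidate)
    return [v[:-1] for v in kept]

def initial_sep(old_sep, A1, b1, A2, b2, eps=1e-8):
    """Warm-start SEP for iteration k+1 built from the optimum SEP of
    iteration k, as described in Section 3."""
    candidates = [nearest_extreme_point(x, A1, b1, A2, b2, eps) for x in old_sep]
    return affinely_independent_subset(candidates)
```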

4 Numerical examples

4.1 Powell’s problem

Powell’s problem [4] is an example with a small number of variables exhibiting pronounced nonlinearity. Table 1 presents the sequence of optimum SEP corresponding to a run of the SQP algorithm starting from the point x^0 = (0, −2, 2, 0, −1), using the unmodified version of the simple decomposition. It can be seen that the maximum number of extreme points used at a given step is 4, that is, less than n + 1 = 6. The basic columns stabilize from the third iteration for extreme point x3, from the fourth iteration for x1 and from the sixth for x4. It can be noted that the latter leaves the optimum SEP at iteration 4 and reenters it at the sixth iteration. The optimum solution obtained is x* = (−0.699034, −0.869963, 2.789922, 0.6968791, −0.69657065) and the objective value is 0.4388502. On the other hand, it should be noted that convergence of the SQP sequence is achieved within 8 iterations with a tolerance of 10^−5 on the norm of the direction d, i.e. the same number of iterations as reported in [4].

4.2 Ten bar truss design problem

In this example the optimum design problem for the ten bar truss structure shown in Figure 1 is considered. The detailed problem statement is given in [11]. The truss is to be designed for minimum self weight subject to stress constraints and minimum gage restraints on the cross sectional areas, which constitute the design variables of the problem. The allowable stress is 25,000 psi in tension and in compression.

The problem is solved by the SQPD algorithm using the unmodified simple decomposition. The sequence converges within 6 iterations with a tolerance of 10^−5 on ‖d‖. The optimal solution obtained is x* = (7.9379, 0.1, 8.0621, 3.9379, 0.1, 0.1, 5.7447, 5.5690, 5.5690, 0.1) in in² and the optimum volume is 15931.8 in³. Table 2 shows the sequence of optimum SEP. It is interesting to note that, except for the first iteration, the optimum SEP reduces to a singleton. Moreover, the unique extreme point corresponds to a stable set of basic columns, with respect to both the original design variables and the slack variables.

It should be emphasized here that the SEP shown in Table 2 are the optimum ones. In other words, at intermediate stages of the decomposition process, the SEP may vary in size as extreme points get in and out. Furthermore, one clearly sees that by applying the initial SEP approach proposed in Section 3 one would identify the right extreme point at the onset and save the effort of generating all intermediate extreme points. Moreover, even when the optimum extreme point is recognized at the first iteration of the simple decomposition, the modified decomposition presents a computational advantage, since the new extreme point is generated by solving problem LP_kj, which requires significantly fewer pivots than problem (SP), which generates the new extreme point in the standard decomposition.

4.3 Forty-seven-bar transmission tower

In this example a larger truss structure is considered and design variable linking is applied so that the number of design variables is reduced to 27. The detailed problem statement is given in [11].

The truss is to be designed for minimum self weight subject to stress constraints, given an allowable stress limit of 35 kips, and a 0.1 in² minimum gage restraint on the cross sectional areas that constitute the design variables of the problem. Starting from an initial design with all cross sections at 10 in² and using the SQPD algorithm, the solution is obtained within 10 SQP iterations. The optimal solution is x* = (0.931220, 0.768537, 0.140893, 0.10, 0.232367, 0.485688, 0.760479, 0.278421, 0.320305, 0.514005, 0.185333, 0.516679, 0.533977, 0.492247, 0.401566, 0.10, 1.066225, 0.121519, 0.10, 1.180685, 0.111531, 0.10, 1.274096, 0.10, 0.10, 1.340856, 0.10) and the optimum volume is 2113.929 in³. As observed in the previous example, the optimum SEP is a singleton except for the first iteration. This is in agreement with the statement that the number of retained extreme points is governed by the number n + 1 − m′. Indeed, in both problems the number of active constraints, including side constraints, is equal to the number of independent design variables, that is, n + 1 − m′ = 1.

5 Conclusion

The SQPD algorithm developed in the present work combines the SQP method for nonlinear programming with Sacher’s simple decomposition in order to solve large-scale nonlinear optimization problems. The method is characterized by remarkable accuracy and robustness. Interestingly, when applied to typical structural optimization test problems, the number of vectors forming the set of extreme points reduces to unity. On the other hand, a modified decomposition strategy that takes advantage of the iterative context leads to significant improvement in computational effort.

The algorithm, particularly in its modified version, is being tested on more complex example structural problems in order to assess the effect of the modified decomposition strategy on larger scale problems and to compare the efficiency of the SQPD method with existing special purpose sequential approximation techniques.

References

[1] Fiacco, A.V. and McCormick, G.P., "Nonlinear Programming: Sequential Unconstrained Minimization Techniques", John Wiley and Sons, New York, 1968, Sec. 2.4.

[2] J.V. Burke and S.-P. Han, "A Robust Sequential Quadratic Programming Method", Mathematical Programming, Vol. 43, 1989, pp. 277-303.

[3] David F. Shanno and Kang Hoh Phua, "Numerical Experience with Sequential Quadratic Programming Algorithms for Equality Constrained Nonlinear Programming", ACM Transactions on Mathematical Software, Vol. 15, No. 1, March 1989, pp. 49-63.

[4] M.J.D. Powell, "Algorithms for Nonlinear Constraints that Use Lagrangian Functions", Mathematical Programming, Vol. 14, No. 2, 1978.

[5] M.J.D. Powell, "The Convergence of Variable Metric Methods for Nonlinear Constrained Optimization Calculations", Proceedings of the Special Interest Group on Mathematical Programming Symposium, University of Wisconsin-Madison, July 1977.

[6] M.J.D. Powell, "A Fast Algorithm for Nonlinear Constrained Optimization Calculations", Report No. DAMTP 77/NA 382, University of Cambridge, England, 1977.

[7] M. Ben Daya, "A Hybrid Decomposition Approach for Convex Quadratic Programming", King Fahd University of Petroleum and Minerals, code SE/LINPROG/138, 1994.

[8] G.B. Dantzig, "Linear Programming and Extensions", Princeton University Press, 1963.

[9] M. Lachiheb, "Extension de la décomposition hybride pour la programmation non linéaire", Mémoire de D.E.A., Mathématiques appliquées, ENIT, Tunis, 1997.

[10] Gil Kalai, "Linear Programming, the Simplex Algorithm and Simple Polytopes", Institute of Mathematics, Hebrew University of Jerusalem, Mathematical Programming, Vol. 79, 1997, pp. 217-233.

[11] Uri Kirsch, "Optimum Structural Design", McGraw-Hill Book Company, 1981.


Iter.  ‖d‖       Optimum SEP
1      0.8074    x1 = ( .00000,  49.416,  24.166,  39.100,  0.0000)
                 x2 = ( 1.E+5,   49.416,  50024.,  20039.,  1.E+5 )
                 x3 = ( .00000,  49.416,  50024.,  20039.,  1.E+5 )
                 x4 = ( 1.E+5,   49.416,  24.166,  39.100,  0.0000)
2      0.3264    x1 = ( 0.0000,  49.570,  27.685,  25.445,  0.0000)
                 x2 = ( 1.E+5,   49.570,  44132.,  49049.,  1.E+5 )
                 x3 = ( 0.0000,  49.570,  44132.,  49049.,  1.E+5 )
                 x4 = ( 1.E+5,   49.570,  27.699,  25.449,  0.0000)
3      0.2187    x1 = ( .00000,  59.557,  35.386,  12.031,  0.0000)
                 x2 = ( 309.49,  .00000,  23028.,  64808.,  1.E+5 )
                 x3 = ( 309.49,  .00000,  55.283,  47.509,  .00000)
4      0.1700    x1 = ( .00000,  68.526,  42.095,  .00000,  .74450)
                 x2 = ( 184.783, .00000,  13505.,  76366.,  1.E+5 )
                 x3 = ( 184.78,  .00000,  46.717,  45.336,  .00000)
5      0.0679    x1 = ( .00000,  75.862,  49.420,  .00000,  15.535)
                 x2 = ( 146.59,  .00000,  5334.4,  90345.,  1.E+5 )
                 x3 = ( 146.59,  .00000,  44.959,  41.332,  .00000)
6      0.0044    x1 = ( .00000,  82.122,  53.821,  .00000,  25.021)
                 x2 = ( .00000,  82.122,  .00000,  16968.,  16892.)
                 x3 = ( 127.80,  .00000,  44.336,  38.406,  0.0000)
                 x4 = ( 50.599,  49.609,  50.115,  .00000,  .00000)
7      0.0002    x1 = ( .00000,  82.283,  53.782,  .00000,  24.958)
                 x2 = ( .00000,  82.283,  .00000,  90231.,  90155.)
                 x3 = ( 127.43,  .00000,  44.194,  38.554,  .00000)
                 x4 = ( 50.112,  49.927,  50.021,  .00000,  .00000)
8      0.00004   x1 = ( 0.0000,  82.282,  53.770,  .00000,  24.933)
                 x2 = ( .00000,  82.282,  27.242,  1.E+5,   99975.)
                 x3 = ( 127.44,  .00000,  44.183,  38.573,  .00000)
                 x4 = ( 50.049,  49.967,  50.009,  .00000,  .00000)

Table 1. Sequence of optimum SEP and ‖d‖ for Powell’s problem.

Iter.         1                    2          3          4          5          6
           x1         x2         x1         x1         x1         x1         x1
        4.76881    4.35332    6.39218    7.57217    7.82922    7.83786    7.83787
        .000000    .000000    .000000    .000000    .000000    .000000    .000000
        8.09764    8.89494    7.88101    7.96076    7.96213    7.96213    7.96213
        3.52469    3.39200    3.79158    3.83744    3.83787    3.83787    3.83787
        .000000    .000000    .000000    .000000    .000000    .000000    .000000
        .000000    .000000    .000000    .000000    .000000    .000000    .000000
        4.39436    5.29781    5.60752    5.64397    5.64472    5.64472    5.74472
        4.18233    3.25080    4.66185    5.36208    5.46714    5.46899    5.46899
        5.19629    4.99725    5.43361    5.46885    5.46899    5.46899    5.46899
        .000000    .000000    .000000    .000000    .000000    .000000    .000000
‖d‖        2.090                  2.008      1.210     0.0257     0.0086    0.00001

Table 2. Sequence of optimum SEP and ‖d‖ for the ten bar truss problem.