matlan_matrixcalculus_optimization.pptx

OPTIMIZATION

Upload: aryoel06
Post on 03-Dec-2015

Matrix Calculus

Outline
- The Derivatives of Vector Functions
- The Chain Rule for Vector Functions

1 The Derivatives of Vector Functions

1.1 Derivative of a Vector with Respect to a Vector
1.2 Derivative of a Scalar with Respect to a Vector (the case where y is a scalar)
1.3 Derivative of a Vector with Respect to a Scalar

The derivative of a scalar y with respect to a vector variable x is also called the gradient of y with respect to x, denoted ∇y.

Example 1

Given y1 = x1^2 - x2 and y2 = x3^2 + 3*x2 (as in the Matlab code below), compute the Jacobian of y = (y1, y2) with respect to x = (x1, x2, x3).

In Matlab:

>> syms x1 x2 x3 real;
>> y1 = x1^2 - x2;
>> y2 = x3^2 + 3*x2;
>> J = jacobian([y1; y2], [x1 x2 x3])
J =
[ 2*x1, -1,    0]
[    0,  3, 2*x3]

Note: Matlab defines the derivatives as the transposes of those given in this lecture.

>> J'
ans =
[ 2*x1,    0]
[   -1,    3]
[    0, 2*x3]

Some useful vector derivative formulas
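The same Jacobian can be cross-checked outside Matlab; here is a sketch using Python's sympy (an illustration added to this transcript, not part of the original slides):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
y1 = x1**2 - x2
y2 = x3**2 + 3*x2

# Jacobian of [y1; y2] with respect to [x1, x2, x3] (Matlab's convention)
J = sp.Matrix([y1, y2]).jacobian([x1, x2, x3])
# The lecture's convention is the transpose, J.T
```
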

Homework

Important Property of the Quadratic Form xᵀCx

∂(xᵀCx)/∂x = xᵀ(C + Cᵀ)

Proof: expand xᵀCx = Σᵢ Σⱼ cᵢⱼ xᵢ xⱼ and differentiate with respect to each xₖ:
∂(xᵀCx)/∂xₖ = Σⱼ cₖⱼ xⱼ + Σᵢ cᵢₖ xᵢ, which is the k-th entry of xᵀ(C + Cᵀ).

If C is symmetric, ∂(xᵀCx)/∂x = 2xᵀC.
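The identity can be verified numerically by comparing the formula against finite differences; a small Python/numpy sketch (not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.normal(size=(3, 3))      # a generic (non-symmetric) matrix
x = rng.normal(size=3)

def q(v):
    return v @ C @ v             # the quadratic form v^T C v

# Numerical gradient by central differences
h = 1e-6
num_grad = np.array([(q(x + h*e) - q(x - h*e)) / (2*h) for e in np.eye(3)])

analytic = (C + C.T) @ x         # the claimed derivative (as a column vector)
assert np.allclose(num_grad, analytic, atol=1e-5)
```
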

2 The Chain Rule for Vector Functions

Let z be a function of y, which is in turn a function of x. In the convention of this lecture we can write

∂z/∂x = (∂y/∂x)(∂z/∂y),

and each entry of this matrix may be expanded by the scalar chain rule.

The Chain Rule for Vector Functions (Cont.)

On transposing both sides, we finally obtain

(∂z/∂x)ᵀ = (∂z/∂y)ᵀ (∂y/∂x)ᵀ.

This is the chain rule for vectors. Unlike the conventional chain rule of calculus, the chain of matrices builds toward the left.

Example 2

x and y are as in Example 1, and z is a function of y defined by z1 = y1^2 - 2*y2, z2 = y2^2 - y1, z3 = y1^2 + y2^2, z4 = 2*y1 + y2 (as in the Matlab code below).

In Matlab:

>> z1 = y1^2 - 2*y2;
>> z2 = y2^2 - y1;
>> z3 = y1^2 + y2^2;
>> z4 = 2*y1 + y2;
>> Jzx = jacobian([z1; z2; z3; z4], [x1 x2 x3])
Jzx =
[ 4*(x1^2-x2)*x1,       -2*x1^2+2*x2-6,            -4*x3]
[          -2*x1,       6*x3^2+18*x2+1, 4*(x3^2+3*x2)*x3]
[ 4*(x1^2-x2)*x1, -2*x1^2+20*x2+6*x3^2, 4*(x3^2+3*x2)*x3]
[           4*x1,                    1,             2*x3]
>> Jzx'
ans =
[ 4*(x1^2-x2)*x1,            -2*x1,       4*(x1^2-x2)*x1, 4*x1]
[ -2*x1^2+2*x2-6,   6*x3^2+18*x2+1, -2*x1^2+20*x2+6*x3^2,    1]
[          -4*x3, 4*(x3^2+3*x2)*x3,     4*(x3^2+3*x2)*x3, 2*x3]
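The chain rule can be checked symbolically: the Jacobian of z with respect to x equals the product of the two intermediate Jacobians. A Python/sympy sketch (not part of the slides):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
y1 = x1**2 - x2
y2 = x3**2 + 3*x2
y = sp.Matrix([y1, y2])

u1, u2 = sp.symbols('u1 u2')   # stand-ins for y1, y2
z = sp.Matrix([u1**2 - 2*u2, u2**2 - u1, u1**2 + u2**2, 2*u1 + u2])

Jyx = y.jacobian([x1, x2, x3])                              # dy/dx
Jzy = z.jacobian([u1, u2]).subs({u1: y1, u2: y2})           # dz/dy at y(x)
Jzx = z.subs({u1: y1, u2: y2}).jacobian([x1, x2, x3])       # dz/dx directly

# In Matlab's convention the chain of matrices builds toward the left
assert sp.simplify(Jzx - Jzy * Jyx) == sp.zeros(4, 3)
```
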

Outline
- Unconstrained Optimization
  - Functions of One Variable
    - General Ideas of Optimization
    - First and Second Order Conditions
    - Local vs. Global Extremum
  - Functions of Several Variables
    - First and Second Order Conditions
    - Local vs. Global Extremum
- Constrained Optimization
  - Kuhn-Tucker Conditions
  - Sensitivity Analysis
  - Second Order Conditions

Unconstrained Optimization

An unconstrained optimization problem is one where you only have to be concerned with the objective function you are trying to optimize: none of the variables in the objective function are constrained. (An objective function is the function you are trying to optimize.)

General Ideas of Optimization

There are two ways of examining optimization:
- Maximization (example: maximize profit): looking for the highest point on the function.
- Minimization (example: minimize cost): looking for the lowest point on the function.
Maximizing f(x) is equivalent to minimizing -f(x).

Graphical Representation of a Maximum

[Figure: graph of y = f(x) = -x^2 + 8x, which peaks at x = 4 with f(4) = 16.]

Questions Regarding the Maximum

- What is the sign of f'(x) when x < x*? (x* denotes the point where the function is at a maximum.)
- What is the sign of f'(x) when x > x*?
- What is f'(x) when x = x*?

Definition: A point x* on a function is said to be a critical point if f'(x*) = 0. This is the first order condition for x* to be a maximum/minimum.

Second Order Conditions

If x* is a critical point of f(x), can we decide whether it is a max, a min, or neither? Yes: examine the second derivative of f(x) at x*, f''(x*).
- x* is a maximum of f(x) if f''(x*) < 0;
- x* is a minimum of f(x) if f''(x*) > 0;
- x* can be a maximum, a minimum, or neither if f''(x*) = 0.

An Example of f''(x*) = 0

Suppose y = f(x) = x^3; then f'(x) = 3x^2 and f''(x) = 6x. This implies that x* = 0 and f''(x* = 0) = 0. Here x* = 0 is a saddle point: neither a maximum nor a minimum.

Example of Using First and Second Order Conditions

Suppose you have the function f(x) = x^3 - 6x^2 + 9x. The first order condition to find the critical points is

f'(x) = 3x^2 - 12x + 9 = 0.

This implies that the critical points are at x = 1 and x = 3.

Example of Using First and Second Order Conditions (Cont.)

The next step is to determine whether the critical points are maxima or minima, using the second order condition:

f''(x) = 6x - 12 = 6(x - 2)

Testing x = 1: f''(1) = 6(1 - 2) = -6 < 0, so at x = 1 we have a maximum.
Testing x = 3: f''(3) = 6(3 - 2) = 6 > 0, so at x = 3 we have a minimum.
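Both conditions can be checked symbolically; a Python/sympy sketch (an addition to this transcript, not from the slides):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - 6*x**2 + 9*x

crit = sorted(sp.solve(sp.diff(f, x), x))   # roots of f'(x) = 3x^2 - 12x + 9
assert crit == [1, 3]

f2 = sp.diff(f, x, 2)                       # f''(x) = 6x - 12
assert f2.subs(x, 1) == -6                  # negative: x = 1 is a local max
assert f2.subs(x, 3) == 6                   # positive: x = 3 is a local min
```
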

Are these the ultimate maximum and minimum of the function f(x)?

Local vs. Global Maxima/Minima

A local maximum is a point x* such that f(x*) ≥ f(x) for all x in some open interval containing x*, and a local minimum is a point x* such that f(x*) ≤ f(x) for all x in some open interval containing x*.

A global maximum is a point x* such that f(x*) ≥ f(x) for all x in the domain of f, and a global minimum is a point x* such that f(x*) ≤ f(x) for all x in the domain of f.

For the previous example, f(x) → ∞ as x → ∞ and f(x) → -∞ as x → -∞. Neither critical point is a global max or min of f(x).

Local vs. Global Maxima/Minima (Cont.)

When f''(x) ≥ 0 for all x, i.e., f(x) is a convex function, the local minimum x* is the global minimum of f(x).

When f''(x) ≤ 0 for all x, i.e., f(x) is a concave function, the local maximum x* is the global maximum of f(x).

Conditions for a Minimum or a Maximum of a Function of Several Variables

Correspondingly, for a function f(x) of several independent variables x:
1. Calculate the gradient ∇f(x) and set it to zero.
2. Solve the equation set to get a solution vector x*.
3. Calculate the matrix of second partial derivatives (the Hessian).
4. Evaluate it at x*.
5. Inspect the Hessian matrix at the point x*.

Hessian Matrix of f(x)

The Hessian is the n×n matrix whose (i, j) entry is the second partial derivative ∂²f/∂xᵢ∂xⱼ.

Conditions for a Minimum or a Maximum of a Function of Several Variables (Cont.)

Let f(x) be a C² function on Rⁿ, and suppose that x* is a critical point of f(x), i.e., ∇f(x*) = 0.

- If the Hessian at x* is positive definite, then x* is a local minimum of f(x).
- If the Hessian at x* is negative definite, then x* is a local maximum of f(x).
- If the Hessian at x* is indefinite, then x* is neither a local maximum nor a local minimum of f(x).
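Definiteness can be tested numerically from the Hessian's eigenvalues; a small Python sketch (an illustration, not from the slides):

```python
import numpy as np

def classify_critical_point(H):
    """Classify a critical point from the eigenvalues of its (symmetric) Hessian."""
    eig = np.linalg.eigvalsh(H)
    if np.all(eig > 0):
        return "local minimum"       # positive definite
    if np.all(eig < 0):
        return "local maximum"       # negative definite
    if np.any(eig > 0) and np.any(eig < 0):
        return "saddle point"        # indefinite
    return "inconclusive"            # some zero eigenvalues

assert classify_critical_point(np.array([[2.0, 0.0], [0.0, 3.0]])) == "local minimum"
assert classify_critical_point(np.array([[-1.0, 0.0], [0.0, -4.0]])) == "local maximum"
assert classify_critical_point(np.array([[0.0, 9.0], [9.0, 0.0]])) == "saddle point"
```
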

Example

Find the local maxima and minima of f(x, y). First, compute the first order partial derivatives (i.e., the gradient of f(x, y)) and set them to zero.

Example (Cont.)

We now compute the Hessian of f(x, y). The first order leading principal minor is 6x, and the second order leading principal minor (the determinant) is -36xy - 81.

At (0, 0), these two minors are 0 and -81, respectively. Since the second order leading principal minor is negative, (0, 0) is a saddle point of f(x, y), i.e., neither a max nor a min.

At (3, -3), these two minors are 18 and 243, so the Hessian is positive definite and (3, -3) is a local min of f(x, y).

Is (3, -3) a global min?

Global Maxima and Minima of a Function of Several Variables

Let f(x) be a C² function on Rⁿ. Then:

When f(x) is a concave function, i.e., the Hessian is negative semidefinite for all x, and ∇f(x*) = 0, then x* is a global max of f(x).

When f(x) is a convex function, i.e., the Hessian is positive semidefinite for all x, and ∇f(x*) = 0, then x* is a global min of f(x).
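The slides show the Hessian example's function only graphically. One function consistent with the stated minors (6x and -36xy - 81) and the critical points (0,0) and (3,-3) is f(x, y) = x³ - y³ + 9xy; this reconstruction is an assumption of the sketch below, not a quotation from the slides:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 - y**3 + 9*x*y   # assumed reconstruction of the slide's example

grad = [sp.diff(f, v) for v in (x, y)]
sols = sp.solve(grad, [x, y], dict=True)
real_crit = {(s[x], s[y]) for s in sols if s[x].is_real and s[y].is_real}
assert (0, 0) in real_crit and (3, -3) in real_crit

H = sp.hessian(f, (x, y))
assert H[0, 0] == 6*x and sp.expand(H.det()) == -36*x*y - 81
assert H.subs({x: 3, y: -3}).is_positive_definite   # (3, -3) is a local min
```
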

Example (Discriminating Monopolist)

A monopolist producing a single output has two types of customers. If it produces q1 units for type 1, these customers are willing to pay a price of 50 - 5q1 per unit. If it produces q2 units for type 2, these customers are willing to pay a price of 100 - 10q2 per unit. The monopolist's cost of manufacturing q units of output is 90 + 20q. To maximize profit, how much should the monopolist produce for each market?

The profit is:

π(q1, q2) = (50 - 5q1)q1 + (100 - 10q2)q2 - (90 + 20(q1 + q2))
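Setting the two partial derivatives of the profit to zero gives the optimum; a Python/sympy sketch (added here, not in the original slides):

```python
import sympy as sp

q1, q2 = sp.symbols('q1 q2', real=True)
profit = (50 - 5*q1)*q1 + (100 - 10*q2)*q2 - (90 + 20*(q1 + q2))

# First order conditions: 50 - 10*q1 - 20 = 0 and 100 - 20*q2 - 20 = 0
sol = sp.solve([sp.diff(profit, q1), sp.diff(profit, q2)], [q1, q2])
assert sol == {q1: 3, q2: 4}    # 3 units for type 1, 4 for type 2

# Hessian diag(-10, -20) is negative definite, so this is the global max
assert sp.hessian(profit, (q1, q2)).is_negative_definite
assert profit.subs(sol) == 115  # maximal profit
```
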

Constrained Optimization

Examples:
- Individuals maximizing utility are subject to a budget constraint.
- Firms maximizing output are subject to a cost constraint.

The function we want to maximize/minimize is called the objective function; the restriction is called the constraint.

Constrained Optimization (General Form)

A general mixed-constrained multi-dimensional maximization problem is

max f(x)  subject to  gᵢ(x) ≤ bᵢ (i = 1, …, k)  and  hⱼ(x) = cⱼ (j = 1, …, m).

Constrained Optimization (Lagrangian Form)

The Lagrangian approach associates a Lagrange multiplier λᵢ with the i-th inequality constraint and μⱼ with the j-th equality constraint. We then form the Lagrangian

L(x, λ, μ) = f(x) + Σᵢ λᵢ (bᵢ - gᵢ(x)) + Σⱼ μⱼ (cⱼ - hⱼ(x)).

Constrained Optimization (Kuhn-Tucker Conditions)

If x* is a local maximum of f on the constraint set defined by the k inequalities and m equalities, then there exist multipliers λ₁*, …, λₖ*, μ₁*, …, μₘ* satisfying:

1. ∂L/∂x = 0 at (x*, λ*, μ*): this first set of KT conditions generalizes the unconstrained critical-point condition.
2. hⱼ(x*) = cⱼ for all j: x* must satisfy the equality constraints.
3. λᵢ* ≥ 0 and λᵢ*(bᵢ - gᵢ(x*)) = 0 for all i (complementary slackness). That is to say, either the i-th inequality constraint is binding (gᵢ(x*) = bᵢ) or λᵢ* = 0.

This can be interpreted as follows: additional units of the resource bᵢ only have value if the available units are used fully in the optimal solution. If the constraint is not binding, it makes no difference to the optimal solution, and λᵢ* = 0.

Finally, note that increasing bᵢ enlarges the feasible region and therefore cannot decrease the optimal objective value; hence λᵢ* ≥ 0 for all i.

Example

[Worked example shown on the slides; the equations were given graphically.]

Example (Cont.)

Write (1) without minus signs.

Sensitivity Analysis

What happens to the optimal solution value if the right-hand side of constraint i is changed by a small amount, say Δbᵢ (i = 1, …, k) or Δcᵢ (i = 1, …, m)? It changes by approximately λᵢ* Δbᵢ or μᵢ* Δcᵢ: λᵢ* is the shadow price of the i-th inequality constraint, and μᵢ* is the shadow price of the i-th equality constraint.

Sensitivity Analysis (Example)

In the previous example, if we change the first constraint to x² + y² = 3.9, then we predict that the new optimal value would be 2 + (1/4)(-0.1) = 1.975. If we compute the problem with this new constraint, then x - y² = …

If, instead, we change the second constraint from x ≥ 0 to x ≥ 0.1, we do not change the solution or the optimal value, since that constraint is not binding and its multiplier is zero.

Utility Maximization Example

Utility Max Example (Cont.)

Numerical Methods for Optimization

Recall that with optimization we are seeking points where f'(x) = 0. For one-dimensional unconstrained optimization problems, we solve the first order condition f'(x) = 0 and then check the second order condition: if f''(x*) is negative, then x* is a maximum of f; if f''(x*) is positive, then x* is a minimum.

One-Dimensional Unconstrained Optimization (Example)

Find the maximum of f(x) = 2 sin(x) - x²/10, so that f'(x) = 2 cos(x) - x/5 (the function defined in the Matlab code below).

This reduces to solving the root problem f'(x) = 0, and the second order condition is satisfied at the root.

One-Dimensional Unconstrained Optimization (Example)

We can solve f'(x) = 0 by bisection using the initial interval [1, 2], by Newton's method with initial point 1.2, or by the secant method with initial points 1.2 and 2, as presented in Topic 3.
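The bisection idea is simple enough to write out directly; a minimal pure-Python sketch for this root problem (added to this transcript, not from the slides):

```python
import math

fprime = lambda x: 2*math.cos(x) - x/5   # derivative of f(x) = 2 sin(x) - x^2/10

def bisect(g, lo, hi, tol=1e-10):
    """Bisection for a root of g; g(lo) and g(hi) must have opposite signs."""
    assert g(lo) * g(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

xstar = bisect(fprime, 1.0, 2.0)
assert abs(xstar - 1.4276) < 1e-3        # agrees with Matlab's fzero below
assert -2*math.sin(xstar) - 1/5 < 0      # f''(x*) < 0: a maximum
```
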

We can also solve it in Matlab:

>> f = @(x) 2*cos(x) - 1/5*x;
>> fzero(f, [1, 2])
ans = 1.4276

Objectives: use the Optimization Toolbox in Matlab to
- solve unconstrained optimization with multiple variables;
- solve linear programming problems;
- solve quadratic programming problems (for example: an optimal portfolio);
- solve nonlinear optimization with constraints.

Mostly we will focus on minimization in this topic; max f(x) is equivalent to min -f(x).

Linear Programming / Quadratic Programming / Nonlinear Programming

If f(x) and the constraints are linear, we have linear programming.

If f(x) is quadratic and the constraints are linear, we have quadratic programming.

If f(x) is neither linear nor quadratic, and/or the constraints are nonlinear, we have nonlinear programming.

Recall the optimality conditions for the multiple-variable unconstrained minimization problem min f(x1, x2, …, xn).

Optimality condition: x* is a local minimum if ∇f(x*) = 0 and the Hessian of f at x* is positive definite.

Example: what values of x make ∇f(x) = 0?

Recall the optimality conditions for an unconstrained minimization problem min f(x1, …, xn): first, the first order condition, i.e., the gradient of f is zero; second, the Hessian matrix of f(x) at the critical point is positive definite.

For example, min f(x) = exp(x1 + x2 - 1) + exp(x1 - x2 - 1) + exp(-x1 - 1) (the objective used in the Matlab session below). We have to solve for the critical point (x1*, x2*) numerically.

Unconstrained Optimization with Multiple Variables in Matlab

Step 1: write an M-file objfun.m and save it under the Matlab work path:

function f = objfun(x)
f = exp(x(1)+x(2)-1) + exp(x(1)-x(2)-1) + exp(-x(1)-1);

Step 2: type >> optimtool in the command window to open the Optimization Toolbox.

Plot the 3D graph of this example:

>> X = -6:0.1:2;
>> Y = -4:0.1:4;
>> [XX, YY] = meshgrid(X, Y);
>> Z = exp(XX+YY-1) + exp(XX-YY-1) + exp(-XX-1);
>> surf(XX, YY, Z)

Explanation: [X, Y] = meshgrid(x, y) transforms the domain specified by vectors x and y into arrays X and Y, which can be used to evaluate functions of two variables and draw three-dimensional mesh/surface plots. The rows of the output array X are copies of the vector x; the columns of the output array Y are copies of the vector y.

Unconstrained Optimization with Multiple Variables in Matlab (Cont.)

We use the function fminunc to solve the unconstrained optimization problem with objective function objfun.

For this class we choose the medium-scale algorithm. "Medium scale" is not a standard term; it is used only to distinguish these algorithms from the large-scale algorithms, which are designed to handle large problems efficiently. For unconstrained minimization, the medium-scale algorithm is the BFGS quasi-Newton method; for constrained minimization, sequential quadratic programming is used. Large-scale algorithms use trust-region methods.

Gradient calculations: gradients are computed by a finite-difference method unless they are supplied by the user. Analytical expressions for the gradient of the objective can be incorporated through gradient functions.

To minimize this function with the gradient provided, modify the M-file objfun.m so the gradient is the second output argument:

function [f, g, H] = objfun(x)
f = exp(x(1)+x(2)-1) + exp(x(1)-x(2)-1) + exp(-x(1)-1);
if nargout > 1  % gradient required
    g = [exp(x(1)+x(2)-1) + exp(x(1)-x(2)-1) - exp(-x(1)-1);
         exp(x(1)+x(2)-1) - exp(x(1)-x(2)-1)];
    if nargout > 2  % Hessian required
        H = [$$, $$; $$, $$];
    end
end

nargout returns the number of output arguments that the calling function requests; see the Matlab help on checking the number of input/output arguments.
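The same problem can be sketched in Python with scipy.optimize.minimize, supplying the analytic gradient; here SciPy's BFGS stands in for fminunc's quasi-Newton method (an illustration, not part of the slides):

```python
import numpy as np
from scipy.optimize import minimize

def objfun(x):
    return np.exp(x[0]+x[1]-1) + np.exp(x[0]-x[1]-1) + np.exp(-x[0]-1)

def grad(x):
    # Analytic gradient, mirroring the Matlab objfun above
    return np.array([
        np.exp(x[0]+x[1]-1) + np.exp(x[0]-x[1]-1) - np.exp(-x[0]-1),
        np.exp(x[0]+x[1]-1) - np.exp(x[0]-x[1]-1),
    ])

res = minimize(objfun, x0=np.zeros(2), jac=grad, method='BFGS')
assert res.success
assert np.allclose(grad(res.x), 0, atol=1e-4)     # first order condition
# Analytically, the minimizer is x1 = -ln(2)/2, x2 = 0
assert abs(res.x[0] + np.log(2)/2) < 1e-3 and abs(res.x[1]) < 1e-3
```
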

Termination tolerances:

TolX is a lower bound on the size of a step, i.e., the norm of (xᵢ - xᵢ₊₁). If the solver attempts to take a step smaller than TolX, the iterations end. TolX is sometimes used as a relative bound, meaning iterations end when |xᵢ - xᵢ₊₁| < TolX·(1 + |xᵢ|), or a similar relative measure.

TolFun is a lower bound on the change in the value of the objective function during a step. If |f(xᵢ) - f(xᵢ₊₁)| < TolFun, the iterations end. TolFun is also sometimes used as a relative bound.

Linear Programming in Matlab

Step 1: type >> optimtool in the command window to open the Optimization Toolbox.
Step 2: define the matrices A, Aeq and the vectors f, b, lb, ub.

Example:

Linear Programming in Matlab (Example)

File -> Export to Workspace exports the results, including lambda (the multipliers), etc.

Simplex Method

Three facts about linear programs:
(1) If there is exactly one optimal point, it must be at a feasible vertex (a vertex is a point where constraints intersect). If there are multiple optimal points, at least two must be at adjacent vertices.
(2) There is a finite number of feasible vertices. So we could find an optimal point by evaluating the objective at every feasible vertex, but this is not efficient: the number of vertices grows exponentially with the number of constraints and variables.
(3) If the objective function evaluated at a feasible vertex is lower (higher) than or equal to its value at all adjacent feasible vertices, then that vertex is an optimal point for the minimization (maximization) problem.

Simplex method: traverse feasible vertices until no adjacent feasible vertex improves the objective function. The method starts with a basic feasible solution, then moves through a sequence of other basic feasible solutions that successively improve the value of the objective function. The simplex method is widely used today.

Quadratic Programming in Matlab

Step 1: type >> optimtool in the command window to open the Optimization Toolbox.
Step 2: define the matrices H, A and the vectors f, b.

We use quadprog to solve quadratic programming problems. Recall that a quadratic program has a quadratic objective function and linear constraints; Matlab's standard form minimizes (1/2)x'Hx + f'x subject to Ax ≤ b.

Nonlinear Programming in Matlab

Step 1: type >> optimtool to open the Optimization Toolbox. This example does not have linear constraints.

Nonlinear Programming in Matlab (Example)

Sequential Quadratic Programming, the Algorithm Used in fmincon (Basic Idea)

The basic idea is analogous to Newton's method for unconstrained optimization. In unconstrained optimization only the objective function must be approximated; in a nonlinear program, both the objective and the constraints must be modeled. A sequential quadratic programming method uses a quadratic model of the objective and a linear model of the constraints, i.e., it solves a quadratic program at each iteration.

Sequential quadratic programming (SQP): at each step, a local model of the optimization problem is constructed and solved, yielding a step toward the solution of the original problem:

f(xₖ + p) ≈ f(xₖ) + ∇f(xₖ)ᵀp + (1/2) pᵀ Hₖ p
Gᵢ(xₖ + p) ≈ Gᵢ(xₖ) + ∇Gᵢ(xₖ)ᵀp

so the constraint Gᵢ(x) = 0 is replaced by Gᵢ(xₖ) + ∇Gᵢ(xₖ)ᵀp = 0. Here Hₖ is a positive definite approximation of the Hessian matrix of the Lagrangian function; Hₖ can be updated by any of the quasi-Newton methods, for example the BFGS method.

The step-length parameter αₖ is determined by an appropriate line-search procedure. (Remember: Newton's method may not converge to a solution if we move too far between estimates; this can be remedied by limiting how far we move, and αₖ determines how far.)

Copyright 2006 John Wiley & Sons, Inc. Supplement 13.

Lecture Outline
- Model Formulation
- Graphical Solution Method
- Linear Programming Model Solution
- Solving Linear Programming Problems with Excel
- Sensitivity Analysis

Linear Programming (LP)

A model consisting of linear relationships representing a firm's objective and resource constraints. LP is a mathematical modeling technique used to determine a level of operational activity in order to achieve an objective, subject to restrictions called constraints.

Types of LP
[Three slides of examples shown graphically.]

LP Model Formulation

- Decision variables: mathematical symbols representing levels of activity of an operation.
- Objective function: a linear relationship reflecting the objective of an operation. The most frequent objective of business firms is to maximize profit; the most frequent objective of individual operational units (such as a production or packaging department) is to minimize cost.
- Constraint: a linear relationship representing a restriction on decision making.

LP Model Formulation (Cont.)

Max/min z = c1x1 + c2x2 + ... + cnxn

subject to:
a11x1 + a12x2 + ... + a1nxn (≤, =, ≥) b1
a21x1 + a22x2 + ... + a2nxn (≤, =, ≥) b2
  ⋮
am1x1 + am2x2 + ... + amnxn (≤, =, ≥) bm

where
xj = decision variables
bi = constraint levels
cj = objective function coefficients
aij = constraint coefficients

LP Model: Example

RESOURCE REQUIREMENTS
Product | Labor (hr/unit) | Clay (lb/unit) | Revenue ($/unit)
Bowl    | 1               | 4              | 40
Mug     | 2               | 3              | 50

There are 40 hours of labor and 120 pounds of clay available each day

Decision variables:
x1 = number of bowls to produce
x2 = number of mugs to produce

LP Formulation: Example

Maximize Z = $40x1 + $50x2
subject to
x1 + 2x2 ≤ 40 hr (labor constraint)
4x1 + 3x2 ≤ 120 lb (clay constraint)
x1, x2 ≥ 0

The solution is x1 = 24 bowls, x2 = 8 mugs; revenue = $1,360.

Graphical Solution Method

1. Plot the model constraints on a set of coordinates in a plane.
2. Identify the feasible solution space on the graph, where all constraints are satisfied simultaneously.
3. Plot the objective function to find the point on the boundary of this space that maximizes (or minimizes) its value.

Graphical Solution: Example

[Figure: the lines 4x1 + 3x2 = 120 (clay) and x1 + 2x2 = 40 (labor); the feasible region is the area common to both constraints.]

Computing Optimal Values

Solve the two binding constraints simultaneously:
x1 + 2x2 = 40
4x1 + 3x2 = 120

Multiply the first equation by 4 and subtract the second:
4x1 + 8x2 = 160
-(4x1 + 3x2 = 120)
5x2 = 40, so x2 = 8

Then x1 + 2(8) = 40, so x1 = 24, and Z = $40(24) + $50(8) = $1,360.

Extreme Corner Points

A: x1 = 0 bowls, x2 = 20 mugs, Z = $1,000
B: x1 = 24 bowls, x2 = 8 mugs, Z = $1,360
C: x1 = 30 bowls, x2 = 0 mugs, Z = $1,200

Objective Function

With a different objective, Z = 70x1 + 20x2, the optimal point moves to x1 = 30 bowls, x2 = 0 mugs, Z = $2,100.

Minimization Problem

CHEMICAL CONTRIBUTION
Brand     | Nitrogen (lb/bag) | Phosphate (lb/bag)
Gro-plus  | 2                 | 4
Crop-fast | 4                 | 3

Minimize Z = $6x1 + $3x2
subject to
2x1 + 4x2 ≥ 16 lb of nitrogen
4x1 + 3x2 ≥ 24 lb of phosphate
x1, x2 ≥ 0

Graphical Solution

x1 = 0 bags of Gro-plus, x2 = 8 bags of Crop-fast, Z = $24.
[Figure: feasible region with corner points A, B, C and objective Z = 6x1 + 3x2.]

Simplex Method

A mathematical procedure for solving linear programming problems according to a set of steps.
- Slack variables are added to ≤ constraints to represent unused resources:
  x1 + 2x2 + s1 = 40 hours of labor
  4x1 + 3x2 + s2 = 120 lb of clay
- Surplus variables are subtracted from ≥ constraints to represent excess above a resource requirement. For example, 2x1 + 4x2 ≥ 16 is transformed into 2x1 + 4x2 - s1 = 16.
- Slack/surplus variables have a 0 coefficient in the objective function:
  Z = $40x1 + $50x2 + 0s1 + 0s2
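The bowl/mug LP can be cross-checked with SciPy's linprog; since linprog minimizes, the revenue coefficients are negated (a Python sketch added to this transcript):

```python
from scipy.optimize import linprog

# max 40*x1 + 50*x2  ->  min -40*x1 - 50*x2
res = linprog(c=[-40, -50],
              A_ub=[[1, 2],     # labor: x1 + 2*x2 <= 40
                    [4, 3]],    # clay:  4*x1 + 3*x2 <= 120
              b_ub=[40, 120],
              bounds=[(0, None), (0, None)])

assert res.success
x1, x2 = res.x
assert abs(x1 - 24) < 1e-6 and abs(x2 - 8) < 1e-6   # 24 bowls, 8 mugs
assert abs(-res.fun - 1360) < 1e-6                  # revenue $1,360
```
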

Solution Points with Slack Variables
[Figure on the slides.]

Solution Points with Surplus Variables
[Figure on the slides.]

Solving LP Problems with Excel

Click on Tools to invoke Solver.
- Objective function; decision variables: bowls (x1) = B10, mugs (x2) = B11.
- Cell formulas: =C6*B10+D6*B11, =C7*B10+D7*B11, =E6-F6, =E7-F7.

Solving LP Problems with Excel (Cont.)

After all parameters and constraints have been input, click on Solve.
- Constraints: C6*B10+D6*B11 ≤ 40; C7*B10+D7*B11 ≤ 120.
- Click on Add to insert constraints.

Sensitivity Analysis
- Sensitivity Range for Labor Hours [figure on the slides]
- Sensitivity Range for Bowls [figure on the slides]

Copyright 2006 John Wiley & Sons, Inc. All rights reserved. Reproduction or translation of this work beyond that permitted in Section 117 of the 1976 United States Copyright Act without the express permission of the copyright owner is unlawful. Requests for further information should be addressed to the Permissions Department, John Wiley & Sons, Inc. The purchaser may make back-up copies for his/her own use only and not for distribution or resale. The Publisher assumes no responsibility for errors, omissions, or damages caused by the use of these programs or from the use of the information herein.

Using Solver for Non-Linear Programming (NLP)

NLP with Solver
- Requires Microsoft Excel.
- Requires Premium Solver, which is located on your student disk.
- Requires a spreadsheet model that needs to be optimized. We'll use our EOQ model as an example.

The Model

(Note the nonlinear objective!)

D = annual demand
C = box purchase cost
S = order cost
I = inventory carrying cost
Q = quantity ordered
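The slides present the EOQ model only as a spreadsheet. As a hedged sketch, the classical EOQ total cost is TC(Q) = DC + (D/Q)S + (Q/2)IC, minimized at Q* = sqrt(2DS/(IC)); the parameter values below are illustrative, not taken from the slides:

```python
import math

# Illustrative parameters (not from the slides)
D, C, S, I = 1000.0, 4.0, 25.0, 0.2

def total_cost(Q):
    """Annual cost: purchases + ordering + inventory carrying."""
    return D*C + (D/Q)*S + (Q/2)*I*C

q_star = math.sqrt(2*D*S / (I*C))   # closed-form EOQ
# Q* should beat nearby order quantities
assert total_cost(q_star) <= total_cost(q_star * 0.9)
assert total_cost(q_star) <= total_cost(q_star * 1.1)
assert abs(q_star - 250.0) < 1e-9   # sqrt(2*1000*25/0.8) = 250
```
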

The Model

D = annual demand, C = box purchase cost, S = order cost, I = inventory carrying cost, Q = quantity ordered. Make sure the cell formulas are correct, then set the Solver parameters.

Recall: green cells are the unknowns, the blue cell contains the objective function, and red cells contain the constraints.

Set Solver parameters: green cells are the unknowns (delete the formula); the blue cell contains the objective function; red cells contain the constraints, but the only constraint here is non-negativity, which is handled in the Solver dialogue.

NLP with Solver
- Select Tools > Add-Ins and make sure the Solver Add-In is checked (click the check box if it isn't). Click OK. If the Solver Add-In is not showing at all, plan on working in the lab, Zone 1.
- Select the Tools > Solver menu item. If the Standard Solver window appears, click the Premium button.
- In the Premium Solver window, set the solution method to Standard GRG Nonlinear.
- Click the Options button; when the Solver Options window appears, you can include a non-negativity constraint by checking Assume Non-Negative. Click OK.

NLP with Solver
- Set the objective function cell and the unknown cell, and select Max or Min.
- Try Waner 13.2, Example 1, p. 783, and focus on using Premium Solver NLP to get the same answer:
  1. Create the spreadsheet.
  2. Enter the objective function formula.
  3. Set up Solver: "Set Cell" points to the objective function cell; "By Changing Variable Cells" points to the unknown cell; constraints have the used value to the left of the comparison and the available value to the right.
  4. Check for Solver gotchas: you are using Premium Solver; Min is selected if needed; Standard GRG Nonlinear is selected; under Options, Assume Non-Negative is selected; and in the spreadsheet you have tried multiple starting points.

The answer is 100, as shown in the text.

[Embedded chart data omitted: Graph 1 plots f(x) = sin x over x; the remaining charts plot Constraints 1 and 2 in the (x1, x2) plane.]

Table: Gas Production Problem

Resource                   | Regular | Premium | Resource Availability
Raw gas (m3/tonne)         | 7       | 11      | 77
Production time (hr/tonne) | 10      | 8       | 120
Storage (tonne)            | 9       | 6       |
Profit (/tonne)            | 150     | 175     |

Solver sheet (initial): Regular = 0, Premium = 0; raw gas used 0 of 77; time used 0 of 120; profit 0.

Solver sheet (solved): Regular = 9.00, Premium = 1.27; raw gas used 77.00 of 77; time used 100.18 of 120; storage used 9.00 of 9 (Regular) and 1.27 of 6 (Premium); unit profits 150 and 175; profit 1350 + 222.73 = 1572.73.
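The gas production sheet corresponds to a small LP that can be verified with SciPy's linprog (a Python sketch added to this transcript):

```python
from scipy.optimize import linprog

# max 150*x1 + 175*x2 subject to raw-gas, time and storage limits
res = linprog(c=[-150, -175],
              A_ub=[[7, 11],    # raw gas:  7*x1 + 11*x2 <= 77
                    [10, 8]],   # time:    10*x1 +  8*x2 <= 120
              b_ub=[77, 120],
              bounds=[(0, 9), (0, 6)])   # storage: x1 <= 9, x2 <= 6

assert res.success
assert abs(res.x[0] - 9.0) < 1e-6            # 9 tonnes of Regular
assert abs(res.x[1] - 14/11) < 1e-6          # about 1.27 tonnes of Premium
assert abs(-res.fun - 1572.727272) < 1e-3    # profit ~ 1572.73
```
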

[Embedded chart data omitted: Graph 2 and two further charts plot Constraints 1-4 in the (x1, x2) plane for the gas production problem.]