REACTION PATH OPTIMIZATION - Rudra Prasad Sahu
TOPICS TO BE COVERED
1) Optimization and its applications
2) Different types of extremes
3) Unconstrained minimization
4) Convergence criteria
5) One-dimensional line search
INTRODUCTION
Optimization is the process of obtaining the ‘best’, if it is possible to measure and change what is ‘good’ or ‘bad’. In practice, one wishes for the ‘most’ or ‘maximum’ (e.g., salary) or the ‘least’ or ‘minimum’ (e.g., expenses).
Optimization practice is, thus, the collection of techniques, methods, procedures, and algorithms that can be used to find the optima.
APPLICATIONS OF OPTIMIZATION
1) Modeling, characterization, and design of devices, circuits, and systems
2) Design of tools, instruments, and equipment
3) Design of structures and buildings
4) Approximation theory, curve fitting, solution of systems of equations
5) Forecasting, production scheduling, quality control
6) Neural networks and adaptive systems
7) Inventory control, accounting, budgeting
DIFFERENT TYPES OF EXTREMES IN OBJECTIVE FUNCTION CURVE
[Figure: a one-dimensional objective function curve with labeled points A-F]
Local minima: A, C, F
Local maxima: B, E
Global minimum: C
Global maximum: E
Inflection point: D
With functions of two variables, a new type of critical point appears: the saddle point.
The required conditions for it are:
1) The first partial derivatives vanish: ∂f/∂x = 0 and ∂f/∂y = 0
2) The second-derivative test is negative: (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)² < 0
For example, the function f(x, y) = x² − y² has a saddle point at the origin.
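As a quick check of these conditions, here is a minimal sketch of the second-derivative test applied to f(x, y) = x² − y² at the origin; the helper name classify_critical_point is an illustrative choice, not from the original slides.

```python
# For f(x, y) = x**2 - y**2: f_x = 2x and f_y = -2y vanish at (0, 0);
# f_xx = 2, f_yy = -2, f_xy = 0, so f_xx*f_yy - f_xy**2 = -4 < 0,
# which marks a saddle point.
def classify_critical_point(fxx, fyy, fxy):
    """Second-derivative test for a critical point of f(x, y)."""
    d = fxx * fyy - fxy ** 2
    if d < 0:
        return "saddle point"
    if d > 0:
        return "minimum" if fxx > 0 else "maximum"
    return "inconclusive"

print(classify_critical_point(fxx=2.0, fyy=-2.0, fxy=0.0))  # -> saddle point
```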
CONVERGENCE CRITERIA
An iterative search is terminated when further iterations no longer change the solution appreciably. Commonly used criteria are:
1) The change in the design point is small: ‖X_{t+1} − X_t‖ ≤ ε
2) The change in function values is small: |f(X_{t+1}) − f(X_t)| ≤ ε
3) The derivative (gradient) is close to zero: ‖∇f(X_{t+1})‖ ≤ ε
where ε is a small tolerance.
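A minimal sketch of such a termination test in Python; the helper name converged, the argument names, and the default tolerance are assumptions made for illustration.

```python
import numpy as np

def converged(x_new, x_old, f_new, f_old, g_new, eps=1e-6):
    """Return True if any one of the three standard criteria is satisfied."""
    step_small = np.linalg.norm(np.asarray(x_new) - np.asarray(x_old)) <= eps
    value_small = abs(f_new - f_old) <= eps
    grad_small = np.linalg.norm(np.asarray(g_new)) <= eps
    return step_small or value_small or grad_small

# Example: the gradient is nearly zero, so the test reports convergence.
print(converged([1.0, 2.0], [0.9, 2.1], 3.0, 3.5, [1e-8, 0.0]))  # -> True
```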
GOLDEN SECTION METHOD
Assumption:
1) f(x) is unimodal
ALGORITHM
1) Bracket the minimum in an interval [a, b] and evaluate f at the two interior points x1 = b − τ(b − a) and x2 = a + τ(b − a), where τ = (√5 − 1)/2 ≈ 0.618.
2) If f(x1) < f(x2), the minimum lies in [a, x2]; otherwise it lies in [x1, b]. Discard the other sub-interval.
3) Reuse the surviving interior point, so that only one new function evaluation is needed per iteration.
4) Repeat until the interval length falls below a tolerance; take the midpoint as the minimum.
A sketch of these steps is given below.
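The following is a minimal Python sketch of the golden-section search, assuming f is unimodal on the bracketing interval [a, b]; the function name golden_section and the default tolerance are illustrative choices, not from the original slides.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Locate the minimum of a unimodal f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2  # ratio tau ~ 0.618
    x1 = b - invphi * (b - a)
    x2 = a + invphi * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 < f2:              # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - invphi * (b - a)
            f1 = f(x1)
        else:                    # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + invphi * (b - a)
            f2 = f(x2)
    return (a + b) / 2

# Example: the minimum of (x - 2)^2 on [0, 5] is at x = 2.
print(golden_section(lambda x: (x - 2) ** 2, 0.0, 5.0))
```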
NEWTON-RAPHSON METHOD
It considers a linear approximation to the first derivative of the function using the Taylor series expansion. This expression is then equated to zero to obtain the next iterate. If the current point at iteration t is x_t, the point in the next iteration is given by
x_{t+1} = x_t − f′(x_t)/f″(x_t)
The iteration process is assumed to have converged when the derivative is close to zero:
|f′(x_{t+1})| ≤ ε
where ε is a small quantity.
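A minimal Python sketch of this iteration, assuming the first and second derivatives are available as functions; the names newton_minimize, fprime, and fsecond are illustrative.

```python
def newton_minimize(fprime, fsecond, x0, eps=1e-8, max_iter=100):
    """Minimize f by Newton-Raphson: x_{t+1} = x_t - f'(x_t)/f''(x_t),
    stopping when |f'(x)| <= eps."""
    x = x0
    for _ in range(max_iter):
        g = fprime(x)
        if abs(g) <= eps:        # convergence: derivative close to zero
            break
        x = x - g / fsecond(x)   # Newton update
    return x

# Example: f(x) = x**2 - 4x + 5, so f'(x) = 2x - 4 and f''(x) = 2;
# the minimum is at x = 2.
print(newton_minimize(lambda x: 2 * x - 4, lambda x: 2.0, x0=0.0))
```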
STEEPEST DESCENT (CAUCHY'S) METHOD
This method is generally used to optimize a multivariable design problem. The search direction used in this method is the negative of the gradient at the current point X_t, i.e., s_t = −∇f(X_t).
Since this direction provides the maximum descent in function value, the method is called the steepest descent method. At every iteration, the derivative is computed at the current point and a unidirectional search is performed along the negative of this gradient direction to find the minimum point along that direction. That minimum point becomes the current point and the search continues from there. The procedure continues until a point with a small enough gradient vector is found.
The steps followed in the present method are listed sequentially below:
1) Choose a starting point X_0, a tolerance ε, and set t = 0.
2) Compute the gradient ∇f(X_t); if ‖∇f(X_t)‖ ≤ ε, stop.
3) Set the search direction s_t = −∇f(X_t).
4) Perform a unidirectional search to find the step length λ_t that minimizes f(X_t + λ s_t).
5) Set X_{t+1} = X_t + λ_t s_t, increment t, and return to step 2.
A sketch of these steps is given after this list.
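Below is a minimal Python sketch of these steps. The unidirectional search is replaced here by a simple backtracking rule rather than an exact line minimization, and the names steepest_descent, f, and grad are illustrative choices.

```python
import numpy as np

def steepest_descent(f, grad, x0, eps=1e-6, max_iter=500):
    """Cauchy's steepest descent: search along s = -grad f(x) each iteration,
    choosing the step length by simple backtracking."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:   # gradient small enough: converged
            break
        s = -g                         # steepest-descent direction
        lam = 1.0
        # Backtrack: halve lam until the function value decreases sufficiently.
        while f(x + lam * s) > f(x) - 1e-4 * lam * np.dot(g, g):
            lam *= 0.5
            if lam < 1e-12:
                break
        x = x + lam * s
    return x

# Example: f(x, y) = (x - 1)^2 + 10 (y + 2)^2 has its minimum at (1, -2).
f = lambda v: (v[0] - 1) ** 2 + 10 * (v[1] + 2) ** 2
grad = lambda v: np.array([2 * (v[0] - 1), 20 * (v[1] + 2)])
print(steepest_descent(f, grad, [0.0, 0.0]))
```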
OTHER LINE SEARCH ALGORITHMS (UNCONSTRAINED)
1) One-dimensional line search: Powell's quadratic interpolation algorithm
2) First-order line search descent methods: conjugate gradient methods
3) Second-order line search descent methods: modified Newton's method, quasi-Newton methods