a region enclosed by multiple constraint boundaries
Feasible Region
once the model is formulated
an optimization algorithm can be used to find the solution
optimization is the process of minimizing or maximizing a function subject to constraints on its variables
optimization is also called "mathematical programming"
linear problems
vertex conversion
simplex method
nonlinear problems
unconstrained optimization
one-dimensional
golden section search method(GSM)
Fibonacci search method (FSM)
Quadratic interpolation method
Cubic interpolation method
Use information of derivatives
Steepest descent method
Newton method
Quasi-Newton Method
Conjugate Gradient method
Do not use derivative information
Powell method
constrained optimization
solve directly
Feasible direction method
solve indirectly
Penalty function method
SQP method
Graphical method
1. determine the domains of the variables
2. identify the feasible region of the solutions
3. plot a few contours of the objective function to find the descent direction of the function
4. find the optimal solution of the problem (see the sketch below)
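A minimal sketch of the four steps in code, using a made-up two-variable problem (the objective, constraints, and plotting ranges below are illustrative assumptions, not from the notes):

```python
# Hypothetical problem: minimize f(x1, x2) = x1 + 2*x2
# subject to x1 + x2 >= 2, x1 <= 3, x2 <= 3, x1 >= 0, x2 >= 0.
import numpy as np
import matplotlib.pyplot as plt

# Step 1: domains of the variables
x1, x2 = np.meshgrid(np.linspace(0, 4, 400), np.linspace(0, 4, 400))

# Step 2: feasible region = points satisfying every constraint
feasible = (x1 + x2 >= 2) & (x1 <= 3) & (x2 <= 3)

# Step 3: a few contours of the objective show the descent direction
f = x1 + 2 * x2
plt.contour(x1, x2, f, levels=10)
plt.imshow(feasible.astype(int), extent=(0, 4, 0, 4), origin="lower",
           alpha=0.3, cmap="Greys")

# Step 4: the optimum is where the lowest contour still touches the feasible region
plt.xlabel("x1"); plt.ylabel("x2"); plt.title("Graphical method sketch")
plt.show()
```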
iteration methods
descent methods
search direction: S0
determine the optimal step length a that minimizes the function value
obtain the new iteration point X1 = X0 + a·S0 (written out generally below)
a good algorithm is expected to have good convergence and a fast speed
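One iteration k of such a descent method, written out from the three items above (generalizing X1 = X0 + a·S0):

```latex
X_{k+1} = X_k + \alpha_k S_k, \qquad
\alpha_k = \arg\min_{\alpha \ge 0} f(X_k + \alpha S_k)
```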
rate of convergence
linear rate
β=1
superlinear rate
1<β<2
quadratic rate
β=2
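These rates refer to the usual order-of-convergence definition (X* is the minimizer, C a constant, with C < 1 in the linear case):

```latex
\lim_{k \to \infty}
\frac{\lVert X_{k+1} - X^{*} \rVert}{\lVert X_k - X^{*} \rVert^{\beta}} = C
```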
termination criteria
based on point distance
based on function value difference
based on gradient
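Written as inequalities with user-chosen tolerances, the three criteria are:

```latex
\lVert X_{k+1} - X_k \rVert \le \varepsilon_1, \qquad
\lvert f(X_{k+1}) - f(X_k) \rvert \le \varepsilon_2, \qquad
\lVert \nabla f(X_{k+1}) \rVert \le \varepsilon_3
```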
Fundamental of Optimization
partial derivative
gradient
the direction in which the function value increases most rapidly
the negative gradient direction is the direction of most rapid decrease
a vector composed of all the first-order partial derivatives at this point
describes the local variation of the function values at one point
directional derivative
pay attention to the transpose direction
the gradient is a column vector
the direction vector is also a column vector
transpose of the gradient × direction vector = directional derivative
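In symbols, with both the gradient and the direction vector d treated as column vectors:

```latex
\nabla f(X_0) =
\left[ \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right]^{\mathrm T},
\qquad
\left.\frac{\partial f}{\partial d}\right|_{X_0} = \nabla f(X_0)^{\mathrm T} d
```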
Taylor Expansion
one variable
multiple variables
quadratic approximation
second order: Hessian matrix
significant in optimization
first order
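For the multi-variable case, the second-order Taylor expansion around X_k, with H the Hessian matrix, is:

```latex
f(X) \approx f(X_k) + \nabla f(X_k)^{\mathrm T}(X - X_k)
+ \tfrac{1}{2}\,(X - X_k)^{\mathrm T} H(X_k)\,(X - X_k)
```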
Positive Definite Quadratic Function(PDQF)
type of matrix
positive definite
positive semi-definite
negative definite
negative semi-definite
indefinite
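A small sketch (the function name and test matrices are illustrative, not from the notes) that distinguishes the five types by the signs of the eigenvalues of a symmetric matrix:

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)
    if np.all(w > tol):
        return "positive definite"
    if np.all(w >= -tol):
        return "positive semi-definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w <= tol):
        return "negative semi-definite"
    return "indefinite"

print(classify(np.array([[2.0, 0.0], [0.0, 3.0]])))   # positive definite
print(classify(np.array([[1.0, 0.0], [0.0, -1.0]])))  # indefinite
```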
characteristics
1. the contours of a PDQF are a set of concentric ellipses; the center of this set is the minimum point of the PDQF
2. for general non-quadratic functions, the contours near the minimum point are approximately ellipses; the contours become irregular as they move away from the minimum point.
Convexity
convex set
if the straight line segment connecting any two points in S lies entirely inside S
convex function
for any two points, the line segment connecting them lies above (or on) the graph of the function
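Both definitions in symbols, for any two points x1, x2 and any λ ∈ [0, 1]:

```latex
\text{convex set } S:\quad x_1, x_2 \in S \;\Rightarrow\; \lambda x_1 + (1-\lambda)\,x_2 \in S
\qquad
\text{convex function } f:\quad f\!\big(\lambda x_1 + (1-\lambda)x_2\big) \le \lambda f(x_1) + (1-\lambda) f(x_2)
```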
Minimizer
global minimizer
local minimizer
strong minimizer
Descent direction
judged by the angle with the gradient: S is a descent direction if ∇f(X)ᵀ·S < 0
Extremum conditions
single variable
multiple variables
first order
second order
remember their expressions (written out below)
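The standard expressions for a local minimum (first-order necessary and second-order sufficient conditions):

```latex
\text{one variable:}\quad f'(x^{*}) = 0, \qquad f''(x^{*}) > 0
\qquad\quad
\text{multiple variables:}\quad \nabla f(X^{*}) = 0, \qquad H(X^{*}) \ \text{positive definite}
```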
Linear search methods
one-dimensional search method
STEP1: an initial interval including the minimum point needs to be determined
unimodal function
basic steps to determine the initial interval
STEP2: then the length of the interval is reduced after every iteration until the minimum point is found
insert two interpolation points inside the current known interval [a, b], then compare the function values at the two points.
when the interval is smaller than the given convergence accuracy, a point inside the interval can be approximately taken as the minimum point along this search direction.
ratio of interval reduction
Determining Initial Interval
some basic steps
GSM
ratio is 0.618
the principle for determining the two points
and the principle for determining convergence
the algorithm of the GSM
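A minimal golden-section search sketch; the test function, interval, and tolerance are illustrative assumptions:

```python
def golden_section(f, a, b, eps=1e-5):
    """Shrink [a, b] by the ratio 0.618 until it is shorter than eps."""
    rho = 0.618
    x1 = b - rho * (b - a)          # left interior point
    x2 = a + rho * (b - a)          # right interior point
    f1, f2 = f(x1), f(x2)
    while b - a > eps:
        if f1 < f2:                  # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - rho * (b - a)
            f1 = f(x1)
        else:                        # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + rho * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)

print(golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # about 2.0
```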
FSM
Fn increases: 1, 1, 2, 3, 5, 8, 13, 21, ...
Ln (the interval length) decreases
how to determine n?
each step uses a different reduction ratio
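Under the standard Fibonacci-search analysis, n is usually taken as the smallest index whose Fibonacci number covers the required interval reduction (ε is the desired final interval length), and the step-k reduction ratio is:

```latex
F_n \ge \frac{b - a}{\varepsilon}, \qquad
\text{ratio at step } k = \frac{F_{n-k}}{F_{n-k+1}}
```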
Powell's quadratic interpolation method(QIM)
similar to the method of undetermined coefficients
the minimum point Xp
determined by two conditions
the opening direction of the parabola
convergence criterion of QIM
|X2 − Xp| ≤ ε
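The minimum point of the parabola through three points (x1, f1), (x2, f2), (x3, f3), together with this convergence test, is commonly written as:

```latex
x_p = \frac{1}{2}\,
\frac{(x_2^{2}-x_3^{2})\,f_1 + (x_3^{2}-x_1^{2})\,f_2 + (x_1^{2}-x_2^{2})\,f_3}
     {(x_2-x_3)\,f_1 + (x_3-x_1)\,f_2 + (x_1-x_2)\,f_3},
\qquad \lvert x_2 - x_p \rvert \le \varepsilon
```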
Davidon's cubic interpolation method (CIM)
Chapter 4: Unconstrained Optimization Methods
Steepest Descent Method
searching direction
negative gradient direction at iteration point
Sk = −gk = −∇f(Xk) (the negative gradient)
the search direction
X_{k+1} = X_k + a·S_k
df/da=0
a is the step length
Convergence judgement
drawback
needs many iterations to converge
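A compact steepest-descent sketch; the quadratic test function, the crude grid line search for a, and the tolerances are illustrative assumptions:

```python
import numpy as np

def steepest_descent(f, grad, x0, eps=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:        # gradient-based termination
            break
        s = -g                              # search direction: negative gradient
        alphas = np.linspace(1e-4, 1.0, 200)
        a = min(alphas, key=lambda t: f(x + t * s))  # crude 1-D search (df/da ≈ 0)
        x = x + a * s                       # X_{k+1} = X_k + a * S_k
    return x

f = lambda x: x[0] ** 2 + 4 * x[1] ** 2
grad = lambda x: np.array([2 * x[0], 8 * x[1]])
print(steepest_descent(f, grad, [2.0, 1.0]))  # moves toward [0, 0]
```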
Newton Method
utilize the negative gradient and the Hessian matrix of the function
Taylor Expansion of second order
Newton's Direction
Sk
the step length α is 1
drawback
if the Hessian is not positive definite, the method is not applicable
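Minimizing the second-order Taylor model gives the Newton direction and the unit step mentioned above:

```latex
S_k = -\,H(X_k)^{-1}\,\nabla f(X_k), \qquad X_{k+1} = X_k + S_k \quad (\alpha = 1)
```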
Quasi-Newton Method
use a positive definite matrix to approximate the second-order partial derivatives (Hessian) of f
two ways to obtain the correction matrix
DFP
after iterations, H approaches the inverse of the Hessian matrix
BFGS
better
after iterations, B approaches the Hessian matrix
B is the inverse of DFP's H (B approximates the Hessian, while H approximates its inverse)
superlinear convergence
1<β<2
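With s_k = X_{k+1} − X_k and y_k = g_{k+1} − g_k, the two standard rank-two updates are:

```latex
\text{DFP } (H_k \approx \text{inverse Hessian}):\quad
H_{k+1} = H_k + \frac{s_k s_k^{\mathrm T}}{s_k^{\mathrm T} y_k}
        - \frac{H_k y_k y_k^{\mathrm T} H_k}{y_k^{\mathrm T} H_k y_k}
\qquad
\text{BFGS } (B_k \approx \text{Hessian}):\quad
B_{k+1} = B_k + \frac{y_k y_k^{\mathrm T}}{y_k^{\mathrm T} s_k}
        - \frac{B_k s_k s_k^{\mathrm T} B_k}{s_k^{\mathrm T} B_k s_k}
```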
Conjugate Gradient Method
PRP
FR
two different formulas for computing β (given below)
main method: utilize the information of the first-order and second-order derivatives of the objective function
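The new conjugate direction and the two β formulas named above, with g_k = ∇f(X_k):

```latex
S_{k+1} = -g_{k+1} + \beta_k S_k, \qquad
\beta_k^{\mathrm{FR}} = \frac{g_{k+1}^{\mathrm T} g_{k+1}}{g_k^{\mathrm T} g_k}, \qquad
\beta_k^{\mathrm{PRP}} = \frac{g_{k+1}^{\mathrm T}(g_{k+1} - g_k)}{g_k^{\mathrm T} g_k}
```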
Linear Programming
constraints
constraint equations
ought to be equalities
if there exist inequalities, introduce slack variables
to convert the inequalities into equalities (example below)
nonnegativity constraints
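For example (a hypothetical inequality), a slack variable x3 ≥ 0 turns an inequality into an equality:

```latex
x_1 + 2x_2 \le 8 \;\;\Longrightarrow\;\; x_1 + 2x_2 + x_3 = 8, \quad x_3 \ge 0
```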
solutions
Basic solution
only satisfies the constraint equations
Basic feasible solution
satisfies both constraint equations and the non-negative constraints for variables
optimal solution
the feasible solution that achieves the optimal objective value
n variables; m equations (n > m)
set n − m variables to zero
non-basic variables
m variables
Basic variables
Basic Feasible Solution
all the constraint equations are inequalities
introduce slack variables
basic variables
these variables have coefficient 0 in the objective function; the other coefficients are those in the given objective expression
constraint equations are equalities
introduce artificial variables
the objective function is the sum of all the artificial variables
when the value of this objective function is zero, a basic feasible solution has been found
the basic feasible solution is the part without the artificial variables
the pivot column is the column with the smallest judgement number
the pivot row is the row with the smallest quotient b/a (a is the entry in pivot column k, a > 0)
the judgement number of a basic variable is always zero
and the judgement number of a non-basic variable is computed by a formula (one common form is given below)
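One common form of the judgement number (reduced cost) σ_j, assuming the minimization convention in which the pivot column has the most negative σ_j; c_B collects the objective coefficients of the basic variables and B is the basis matrix:

```latex
\sigma_j = c_j - \mathbf{c}_B^{\mathrm T} B^{-1} \mathbf{a}_j
\qquad (\sigma_j = 0 \ \text{for every basic variable } x_j)
```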
Constrained Optimization Methods
direct methods
the iteration points are always confined to the feasible region, with all the constraints taken into account at every step.
Feasible Direction Method
indirect methods
constraints are put into the objective function so that constrained problems can be converted to unconstrained problems, or nonlinear problems are converted to relatively simple quadratic programming subproblems.
Penalty Function Method
SQP (Sequential Quadratic Programming) Method
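One common exterior penalty form, assuming the constraints are written as g_i(X) ≤ 0 and h_j(X) = 0 and the penalty factor r is driven to infinity (the exact form varies between textbooks):

```latex
\varphi(X, r) = f(X)
+ r \sum_i \big[\max\{0,\; g_i(X)\}\big]^{2}
+ r \sum_j \big[h_j(X)\big]^{2}
```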
effective constraints
ECs for equality-constrained problems
Kuhn-Tucker Condition
ineffective constraints
depending on whether the investigated point is on the constraint boundary or not
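For a problem min f(X) subject to g_i(X) ≤ 0 and h_j(X) = 0, the Kuhn-Tucker conditions at a candidate point X* read (λ_i, μ_j are multipliers; only the effective, i.e. active, inequality constraints can have λ_i > 0):

```latex
\nabla f(X^{*}) + \sum_i \lambda_i \nabla g_i(X^{*}) + \sum_j \mu_j \nabla h_j(X^{*}) = 0, \qquad
\lambda_i \ge 0,\quad \lambda_i\, g_i(X^{*}) = 0,\quad g_i(X^{*}) \le 0,\quad h_j(X^{*}) = 0
```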