Solving Optimization Problems using the Matlab Optimization
Toolbox - a Tutorial
TU-Ilmenau, Fakultät für Mathematik und Naturwissenschaften
Dr. Abebe Geletu
December 13, 2007
Contents
1 Introduction to Mathematical Programming 2
1.1 A General Mathematical Programming Problem 2
1.1.1 Some Classes of Optimization Problems 2
1.1.2 Functions of the Matlab Optimization Toolbox 5
2 Linear Programming Problems 6
2.1 Linear Programming with MATLAB 6
2.2 The Interior Point Method for LP 8
2.3 Using linprog to Solve LPs 11
2.3.1 Formal Problems 11
2.3.2 Approximation of Discrete Data by a Curve 13
3 Quadratic Programming Problems 15
3.1 Algorithms Implemented under quadprog.m 16
3.1.1 Active-Set Method 17
3.1.2 The Interior Reflective Method 19
3.2 Using quadprog to Solve QP Problems 24
3.2.1 Theoretical Problems 24
3.2.2 Production Model - Profit Maximization 26
4 Unconstrained Nonlinear Programming 30
4.1 Theory, Optimality Conditions 30
4.1.1 Problems, Assumptions, Definitions 30
4.2 Optimality Conditions for Smooth Unconstrained Problems 31
4.3 Matlab Functions for Unconstrained Optimization 32
4.4 General Descent Methods for Differentiable Optimization Problems 32
4.5 The Quasi-Newton Algorithm - Idea 33
4.5.1 Determination of Search Directions 33
4.5.2 Line Search Strategies - Determination of the Step Length αk 34
4.6 Trust Region Methods - Idea 35
4.6.1 Solution of the Trust-Region Sub-problem 36
4.6.2 The Trust Sub-problem Considered in the Matlab Optimization Toolbox 38
4.6.3 Calling and Using fminunc.m to Solve Unconstrained Problems 39
4.7 Derivative-Free Optimization - Direct (Simplex) Search Methods 45
Chapter 1
Introduction to Mathematical
Programming
1.1 A general Mathematical Programming Problem
f(x) −→ min (max)
subject to
x ∈ M.
(O)
The function f : Rn → R is called the objective function and the
set M ⊂ Rn is the feasible set of (O).
Based on the description of the function f and the feasible set M, the problem (O) can be classified
as a linear, quadratic, non-linear, semi-infinite, semi-definite, multiple-objective, discrete optimization
problem, etc.1
1.1.1 Some Classes of Optimization Problems
Linear Programming
If the objective function f and the defining functions of M are
linear, then (O) will be a linear
optimization problem.
General form of a linear programming problem:
c⊤ x −→ min (max)
s.t.
Ax = a
Bx ≤ b
lb ≤ x ≤ ub;
(LO)
i.e. f(x) = c⊤ x and M = {x ∈ Rn | Ax = a,Bx ≤ b,lb ≤ x ≤ ub}.
Practical problems that fall under linear programming include linear discrete Chebyshev
approximation problems, transportation problems, network flow problems, etc.
1The terminology mathematical programming is currently contested, and many demand that problems of the form
(O) always be called mathematical optimization problems. Here, we use both terms interchangeably.
Quadratic Programming
(1/2)x⊤Qx + q⊤x −→ min
s.t.
Ax = a
Bx ≤ b
u ≤ x ≤ v
(QP)
Here the objective function f(x) = (1/2)x⊤Qx + q⊤x is a quadratic function, while the feasible set
M = {x ∈ Rn | Ax = a, Bx ≤ b, u ≤ x ≤ v} is defined using linear functions.
One of the well-known practical models of quadratic optimization problems is the least squares
approximation problem, which has applications in almost all fields of science.
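As a quick illustration (the data below are made up for the sketch, not taken from the tutorial), the least squares problem min ‖Cx − d‖² can be put into the (QP) form: expanding gives x⊤C⊤Cx − 2d⊤Cx + d⊤d, so Q = 2C⊤C and q = −2C⊤d, while the constant d⊤d does not affect the minimizer.

```matlab
% Least squares min ||C*x - d||^2 posed as the QP (1/2) x'*Q*x + q'*x.
% C and d are illustrative data, not taken from the tutorial.
C = [1 1; 1 2; 1 3];
d = [1; 2; 2];
Q = 2*(C'*C);          % Hessian of the quadratic objective
q = -2*(C'*d);         % linear term; the constant d'*d is dropped
x = quadprog(Q, q);    % unconstrained QP; same minimizer as C\d
```
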
Non-linear Programming Problem
The general form of a non-linear optimization problem is
f (x) −→ min (max)
subject to
equality constraints: gi (x) = 0, i ∈ {1, 2, . . . ,m}
inequality constraints: gj (x) ≤ 0, j ∈ {m + 1,m + 2, . . . ,m + p}
box constraints: uk ≤ xk ≤ vk, k = 1, 2, . . . ,n;
(NLP)
where we assume that all the functions are smooth, i.e. the functions
f, gl : U −→ R l = 1, 2, . . . ,m + p
are sufficiently many times differentiable on the open subset U
of Rn. The feasible set of (NLP) is
given by
M = {x ∈ Rn | gi(x) = 0, i = 1, 2, . . . ,m; gj (x) ≤ 0,j = m + 1,m
+ 2, . . . ,m + p} .
We also write the (NLP) in vectorial notation as
f (x) → min (max)
h (x) = 0
g (x) ≤ 0
u ≤ x ≤ v.
Problems of the form (NLP) arise frequently in the numerical
solution of control problems, non-linear
approximation, engineering design, finance and economics,
signal processing, etc.
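In the toolbox, problems of the form (NLP) are handled by fmincon, which is treated in detail later in the tutorial. As a hedged sketch of the mapping from (NLP) to a fmincon call (the objective and the constraint below are illustrative assumptions, not an example from the tutorial):

```matlab
% Sketch: a small instance of (NLP) solved with fmincon.
% f(x) and g(x) are assumed for illustration only.
fobj  = @(x) (x(1)-1)^2 + (x(2)-2)^2;        % objective f(x)
nlcon = @(x) deal(x(1)^2 + x(2)^2 - 4, []);  % returns [g(x); h(x)]: g <= 0, no equalities
% fmincon(f, x0, A, b, Aeq, beq, lb, ub, nonlcon): [] for absent linear constraints
x = fmincon(fobj, [0;0], [], [], [], [], [-5;-5], [5;5], nlcon);
```
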
Sou Ying
Highlight
4
Semi-infinite Programming
f(x) → min
s.t.
G(x,y) ≤ 0,∀ y ∈ Y ;
hi(x) = 0, i = 1, . . . ,p;
gj (x) ≤ 0,j = 1, . . . ,q;
x ∈ Rn;
Y ⊂ Rm.
(SIP)
Here, f, hi, gj : Rn → R, i ∈ {1, . . . , p}, j ∈ {1, . . . , q} are smooth functions; G : Rn × Rm → R is such
that, for each fixed y ∈ Y, G(·, y) : Rn → R is smooth and, for each fixed x ∈ Rn, G(x, ·) : Rm → R is
smooth; furthermore, Y is a compact subset of Rm. Sometimes, the set Y can also be given as
Y = {y ∈ Rm | uk(y) = 0, k = 1, . . . , s1; vl(y) ≤ 0, l = 1, . . . , s2}
with smooth functions uk, vl : Rm → R, k ∈ {1, . . . , s1}, l ∈ {1, . . . , s2}.
The problem (SIP) is called semi-infinite, since it is an optimization problem with a finite number of
variables (i.e. x ∈ Rn) and an infinite number of constraints (i.e. G(x, y) ≤ 0 for all y ∈ Y).
One of the well-known practical models of (SIP) is the continuous Chebyshev approximation problem.
This approximation problem can be used for the approximation of functions by polynomials, in filter
design for digital signal processing, and in spline approximation of robot trajectories.
Multiple-Objective Optimization
A multiple-objective optimization problem has the general form
min (f1(x), f2(x), . . . , fm(x))
s.t.
x ∈ M;
(MO)
where the functions fk : Rn → R, k = 1, . . . , m are smooth and the feasible set M is defined in terms of
linear or non-linear functions. This problem is also alternatively called a multiple-criteria,
vector optimization, goal attainment or multi-decision analysis problem. It is an optimization
problem with more than one objective function (each such objective is a criterion). In this sense,
(LO), (QP), (NLP) and (SIP) are single-objective (single-criterion) optimization problems. If there are only two
objective functions in (MO), then (MO) is commonly called a bi-criteria optimization problem.
Furthermore, if each of the functions f1, . . . , fm is linear and M is defined using linear functions,
then (MO) is a linear multiple-criteria optimization problem; otherwise, it is non-linear.
For instance, in a financial application we may need to maximize revenue and minimize risk at the
same time, constrained upon the amount of our investment. Several engineering design problems can
also be modeled as (MO). Practical problems like autonomous vehicle control, optimal truss design,
antenna array design, etc. are just a few examples of (MO).
In real life we may have several objectives to achieve. But, unfortunately, we usually cannot satisfy all
our objectives optimally at the same time, so we have to find a compromise solution among all
our objectives. Such is the nature of multiple-objective optimization. Thus, the minimization (or
maximization) of several objective functions cannot be done in the usual sense. Hence, one speaks
of so-called efficient points as solutions of the problem. Using special constructions involving the
objectives, the problem (MO) can be reduced to a problem with a single objective function.
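One such construction is the weighted-sum method: choose weights wk ≥ 0 and minimize the single objective Σk wk fk(x). A minimal sketch (the two objectives and the weights are illustrative assumptions; different weight choices trace out different efficient points):

```matlab
% Weighted-sum scalarization of a bi-criteria problem.
f1 = @(x) (x(1)-1)^2 + x(2)^2;        % first objective (assumed)
f2 = @(x) x(1)^2 + (x(2)-2)^2;        % second objective (assumed)
w  = [0.5 0.5];                       % nonnegative trade-off weights
fw = @(x) w(1)*f1(x) + w(2)*f2(x);    % single scalar objective
xw = fminunc(fw, [0;0]);              % solve with a standard solver
```
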
1.1.2 Functions of the Matlab Optimization Toolbox
Linear and quadratic minimization problems:
linprog - Linear programming.
quadprog - Quadratic programming.
Nonlinear zero finding (equation solving):
fzero - Scalar nonlinear zero finding.
fsolve - Nonlinear system of equations solve (function solve).
Linear least squares (of matrix problems):
lsqlin - Linear least squares with linear constraints.
lsqnonneg - Linear least squares with nonnegativity constraints.
Nonlinear minimization of functions:
fminbnd - Scalar bounded nonlinear function minimization.
fmincon - Multidimensional constrained nonlinear minimization.
fminsearch - Multidimensional unconstrained nonlinear minimization, by the Nelder-Mead direct search method.
fminunc - Multidimensional unconstrained nonlinear minimization.
fseminf - Multidimensional constrained minimization, semi-infinite constraints.
Nonlinear least squares (of functions):
lsqcurvefit - Nonlinear curve fitting via least squares (with bounds).
lsqnonlin - Nonlinear least squares with upper and lower bounds.
Nonlinear minimization of multi-objective functions:
fgoalattain - Multidimensional goal attainment optimization.
fminimax - Multidimensional minimax optimization.
Chapter 2
Linear Programming Problems
2.1 Linear programming with MATLAB
For the linear programming problem
c⊤ x −→ min
s.t.
Ax ≤ a
Bx = b
lb ≤ x ≤ ub;
(LP)
MATLAB: The program linprog.m is used for the minimization
of problems of the form (LP).
Once you have defined the matrices A and B and the vectors c, a, b, lb and ub, you can call linprog.m
to solve the problem. The general form of a call to linprog.m is:
[x,fval,exitflag,output,lambda]=linprog(c,A,a,B,b,lb,ub,x0,options)
Input arguments:
c        coefficient vector of the objective
A        matrix of inequality constraints
a        right-hand side of the inequality constraints
B        matrix of equality constraints
b        right-hand side of the equality constraints
lb, [ ]  lb ≤ x: lower bounds for x; [ ] if there are no lower bounds
ub, [ ]  x ≤ ub: upper bounds for x; [ ] if there are no upper bounds
x0       start vector for the algorithm, if known, else [ ]
options  options are set using the optimset function; they determine, among other things, which algorithm to use
Output arguments:
x        optimal solution
fval     optimal value of the objective function
exitflag tells whether the algorithm converged or not; exitflag > 0 means convergence
output   a struct containing the number of iterations, the algorithm used and the number of PCG iterations (when LargeScale is 'on')
lambda   a struct containing the Lagrange multipliers corresponding to the constraints
Setting Options
The input argument options is a structure, which contains
several parameters that you can use with
a given Matlab optimization routine.
For instance, to see the type of parameters you can use with the linprog.m routine, use
>>optimset(’linprog’)
Then Matlab displays the fields of the structure options. Accordingly, before calling linprog.m you
can set your preferred parameters in the options for linprog.m using the optimset command as:
>>options=optimset(’ParameterName1’,value1,’ParameterName2’,value2,...)
where ’ParameterName1’, ’ParameterName2’, ... are the parameter names displayed when you use
optimset(’linprog’), and value1, value2, ... are their corresponding values.
The following are parameters and their corresponding values
which are frequently used with linprog.m:
Parameter      Possible values
’LargeScale’   ’on’, ’off’
’Simplex’      ’on’, ’off’
’Display’      ’iter’, ’final’, ’off’
’MaxIter’      maximum number of iterations
’TolFun’       termination tolerance for the objective function
’TolX’         termination tolerance for the iterates
’Diagnostics’  ’on’ or ’off’ (when ’on’, prints diagnostic information about the objective function)
Algorithms under linprog
There are three types of algorithms implemented in linprog.m:
• a simplex algorithm;
• an active-set algorithm;
• a primal-dual interior point method.
The simplex and active-set algorithms are usually used to solve medium-scale linear programming
problems. If either of these algorithms fails to solve a linear programming problem, then the problem
at hand is a large-scale problem. Moreover, a linear programming problem with several thousands of
variables along with sparse matrices is considered to be a large-scale problem. However, if the coefficient
matrices of your problem have a dense matrix structure, then linprog.m assumes that your problem
is of medium scale.
By default, the parameter ’LargeScale’ is ’on’. When ’LargeScale’ is ’on’, linprog.m
uses the primal-dual interior point algorithm. However, if you want to set it ’off’ so that you can solve
a medium-scale problem, then use
>>options=optimset(’LargeScale’,’off’)
In this case linprog.m uses either the simplex algorithm or the
active-set algorithm. (Nevertheless,
recall that the simplex algorithm is itself an active-set strategy).
If you are specifically interested in using the active-set algorithm, then you need to set both the
parameters ’LargeScale’ and ’Simplex’, respectively, to ’off’:
>>options=optimset(’LargeScale’,’off’,’Simplex’,’off’)
Note: Sometimes, even if we have specified ’LargeScale’ to be ’off’, when a linear programming problem
cannot be solved with a medium-scale algorithm, linprog.m automatically switches to the large-scale
algorithm (the interior point method).
2.2 The Interior Point Method for LP
Assuming that the simplex method is already known, this section gives a brief discussion of the
primal-dual interior point method for (LP).
Let A ∈ Rm×n,a ∈ Rm,B ∈ Rp×n,b ∈ Rp. Then, for the linear
programming problem
c⊤ x −→ min
s.t.
Ax ≤ a
Bx = b
lb ≤ x ≤ ub;
(LP)
if we set x̃ = x− lb we get
c⊤x̃− c⊤ lb −→ min
s.t.
Ax̃ ≤ a−A(lb)
Bx̃ = b−B(lb)
0 ≤ x̃ ≤ ub− lb;
(LP)
Now, by adding slack variables y ∈ Rm and s ∈ Rn (see below),
we can write (LP) as
c⊤x̃−c⊤ lb −→ min
s.t.
Ax̃ +y = a−A(lb)
Bx̃ = b−B(lb)
x̃ +s = ub− lb
x̃ ≥ 0, y ≥ 0, s ≥ 0.
(LP)
Thus, using a single matrix for the constraints, we have
c⊤x̃ − c⊤lb −→ min
s.t.
[ A    Im    Om×n ] [ x̃ ]   [ a − A(lb) ]
[ B    Op×m  Op×n ] [ y  ] = [ b − B(lb) ]
[ In   On×m  In   ] [ s  ]   [ ub − lb   ]
x̃ ≥ 0, y ≥ 0, s ≥ 0.
Since a constant in the objective does not create any difficulty, we assume w.l.o.g. that we have a problem
of the form
c⊤ x −→ min
s.t.
Ax = a
x ≥ 0.
(LP’)
In fact, when you call linprog.m with the original problem (LP),
this transformation will be done by
Matlab internally. The aim here is to briefly explain the
algorithm used, when you set the LargeScale
parameter to ’on’ in the options of linprog.
Now the dual of (LP’) is the problem
a⊤w −→ max
s.t.
A⊤w ≤ c
w ∈ Rm.
(LPD)
Using a slack variable s ∈ Rn we have
a⊤w −→ max
s.t.
A⊤w + s = c
w ∈ Rm, s ≥ 0.
(LPD)
The problems (LP’) and (LPD) are called a primal-dual pair.
Optimality Condition
It is well known that a vector (x∗, w∗, s∗) is a solution of the primal-dual pair if and only if it satisfies the
Karush-Kuhn-Tucker (KKT) optimality conditions. The KKT conditions here can be written as
A⊤w + s = c
Ax = a
xisi = 0, i = 1, . . . , n (complementarity conditions)
(x, s) ≥ 0.
This system can be written as
F(x, w, s) = (A⊤w + s − c, Ax − a, XSe)⊤ = 0, (2.1)
(x, s) ≥ 0, (2.2)
where X = diag(x1, x2, . . . , xn), S = diag(s1, s2, . . . , sn) ∈ Rn×n and e = (1, 1, . . . , 1)⊤ ∈ Rn.
Primal-dual interior point methods generate iterates (xk, wk, sk) that satisfy the system (2.1) & (2.2)
so that (2.2) is satisfied strictly; i.e. xk > 0, sk > 0. That is, for each k, (xk, sk) lies in the interior
of the nonnegative orthant; hence the name interior point method. Interior point
methods use a variant of Newton’s method for the system (2.1) & (2.2).
Central Path
Let τ > 0 be a parameter. The central path is a curve C which is the set of all points (x(τ), w(τ), s(τ))
that satisfy the parametric system:
A⊤w + s = c,
Ax = a,
xisi = τ, i = 1, . . . , n,
(x, s) > 0.
This implies that C is the set of all points (x(τ), w(τ), s(τ)) that satisfy
F(x(τ), w(τ), s(τ)) = (0, 0, τe)⊤. (2.3)
Obviously, if we let τ ↓ 0, the system (2.3) approaches the system (2.1) & (2.2).
Hence, theoretically, primal-dual algorithms solve the system
J(x(τ), w(τ), s(τ)) (△x(τ), △w(τ), △s(τ))⊤ = (0, 0, −XSe + τe)⊤
to determine a search direction (△x(τ), △w(τ), △s(τ)), where J(x(τ), w(τ), s(τ)) is the Jacobian of
F(x(τ), w(τ), s(τ)). The new iterate will then be
(x+(τ), w+(τ), s+(τ)) = (x(τ), w(τ), s(τ)) + α(△x(τ), △w(τ), △s(τ)),
where α is a step length, usually α ∈ (0, 1], chosen in such a way that (x+(τ), w+(τ), s+(τ)) ∈ C.
However, practical primal-dual interior point methods use τ = σµ, where σ ∈ [0, 1] is a constant and
µ = x⊤s / n.
The term x⊤s is the duality gap between the primal and dual problems; thus, µ is a measure of
the (average) duality gap. Note that, in general, µ ≥ 0, and µ = 0 when x and s are primal and dual
optimal, respectively.
Thus the Newton step (△x(µ), △w(µ), △s(µ)) is determined by solving:
[ On   A⊤    In   ] [ △x(µ) ]   [ 0           ]
[ A    Om×m  Om×n ] [ △w(µ) ] = [ 0           ]
[ S    On×m  X    ] [ △s(µ) ]   [ −XSe + σµe  ]
The Newton step (△x(µ), △w(µ), △s(µ)) is also called a centering direction: it pushes the iterates
(x+(µ), w+(µ), s+(µ)) towards the central path C, along which the algorithm converges more rapidly.
The parameter σ is called the centering parameter. If σ = 0, then the search direction is known as
an affine-scaling direction.
Primal-Dual Interior Point Algorithm
Step 0: Start with (x0, w0, s0) such that (x0, s0) > 0; set k = 0.
Step k: Choose σk ∈ [0, 1], set µk = (xk)⊤sk/n and solve
[ On   A⊤    In   ] [ △xk ]   [ 0              ]
[ A    Om×m  Om×n ] [ △wk ] = [ 0              ]
[ Sk   On×m  Xk   ] [ △sk ]   [ −XkSke + σkµke ]
Set
(xk+1, wk+1, sk+1) ← (xk, wk, sk) + αk(△xk, △wk, △sk),
choosing αk ∈ (0, 1] so that (xk+1, sk+1) > 0.
If (convergence) STOP; else set k ← k + 1 and GO TO Step k.
The Matlab LargeScale option of linprog.m uses a predictor-corrector-like method of Mehrotra to
guarantee that (xk, sk) > 0 for each k, k = 1, 2, . . .
Predictor step: a search direction (the predictor step) dkp = (△xk, △wk, △sk) is obtained by solving
the non-parameterized system (2.1) & (2.2).
Corrector step: for a centering parameter σ, a correction step dkc is obtained from
dkc = [F′(xk, wk, sk)]−1 (F(xk + △xk, wk + △wk, sk + △sk) − σê),
where ê ∈ Rn+m+n is the vector whose last n components are equal to 1.
Iteration: for a step length α ∈ (0, 1],
(xk+1, wk+1, sk+1) = (xk, wk, sk) + α(dkp + dkc).
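To make the iteration above concrete, the following is a minimal sketch of a basic primal-dual path-following step for (LP’), with a fixed centering parameter and no Mehrotra correction; linprog’s actual large-scale implementation is considerably more refined. The function name and the starting point are my own choices.

```matlab
function [x,w,s] = pdipSketch(A,a,c)
% Basic primal-dual interior point iteration for min c'x s.t. Ax = a, x >= 0.
[m,n] = size(A);
x = ones(n,1); s = ones(n,1); w = zeros(m,1);   % strictly interior start
sigma = 0.1;                                    % fixed centering parameter
for k = 1:100
    mu = (x'*s)/n;                              % average duality gap
    if mu < 1e-9, break; end
    % Newton system with right-hand side (0, 0, -XSe + sigma*mu*e),
    % written with the residuals so infeasible starts are also handled:
    J   = [zeros(n)  A'          eye(n);
           A         zeros(m,m)  zeros(m,n);
           diag(s)   zeros(n,m)  diag(x)];
    rhs = [c - A'*w - s; a - A*x; -x.*s + sigma*mu];
    dz  = J\rhs;
    dx = dz(1:n); dw = dz(n+1:n+m); ds = dz(n+m+1:end);
    % damped step length keeping (x, s) strictly positive
    alpha = 0.9995*min([1; -x(dx<0)./dx(dx<0); -s(ds<0)./ds(ds<0)]);
    x = x + alpha*dx; w = w + alpha*dw; s = s + alpha*ds;
end
end
```
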
2.3 Using linprog to Solve LPs
2.3.1 Formal problems
1. Solve the following linear optimization problem using
linprog.m.
2x1+ 3x2 → max
s.t.
x1+ 2x2 ≤ 8
2x1+ x2 ≤ 10
x2 ≤ 3
x1, x2 ≥ 0
For this problem there are no equality constraints, so B=[] and b=[]. The constraints x1, x2 ≥ 0 are
box (lower bound) constraints, so lb=[0;0], and there are no upper bounds, ub=[]. Moreover,
>>c=[-2,-3]’; % linprog solves minimization problems
>>A=[1,2;2,1;0,1];
>>a=[8,10,3]’;
>>lb=[0,0]’;
>>options=optimset(’LargeScale’,’off’);
i) If you are interested only in the solution, then use
>>xsol=linprog(c,A,a,[],[],lb,[],[],options)
ii) To see whether the algorithm really converged or not, you need to access the exit flag through:
>>[xsol,fval,exitflag]=linprog(c,A,a,[],[],lb,[],[],options)
iii) If you need the Lagrange multipliers, you can access them using:
>>[xsol,fval,exitflag,output,LagMult]=linprog(c,A,a,[],[],lb,[],[],options)
iv) To display the iterates you can use:
>>xsol=linprog(c,A,a,[],[],lb,[],[],optimset(’Display’,’iter’))
2. Solve the following LP using linprog.m:
c⊤x −→ max
s.t.
Ax = a
Bx ≥ b
Dx ≤ d
lb ≤ x ≤ ub
where
(A|a) = [ 1 1  1 1 1 1 | 10
          5 0 −3 0 1 0 | 15 ],
(B|b) = [ 1 2 3 0 0 0 | 5
          0 1 2 3 0 0 | 7
          0 0 1 2 3 0 | 8
          0 0 0 1 2 3 | 8 ],
(D|d) = [ 3 0 0  0 −2 1 | 5
          0 4 0 −2  0 3 | 7 ],
lb = (−2, 0, −1, −1, −5, 1)⊤, ub = (7, 2, 2, 3, 4, 10)⊤, c = (1, −2, 3, −4, 5, −6)⊤.
When there are large matrices, it is convenient to write m-files. Thus one possible solution will
be the following:
function LpExa2
A=[1,1,1,1,1,1;5,0,-3,0,1,0]; a=[10,15]';
B1=[1,2,3,0,0,0;0,1,2,3,0,0;0,0,1,2,3,0;0,0,0,1,2,3];
b1=[5,7,8,8]; b1=b1(:);
D=[3,0,0,0,-2,1;0,4,0,-2,0,3]; d=[5,7]; d=d(:);
lb=[-2,0,-1,-1,-5,1]'; ub=[7,2,2,3,4,10]';
c=-[1,-2,3,-4,5,-6]; c=c(:);  % negated, since linprog minimizes but the problem is a maximization
B=[-B1;D]; b=[-b1;d];         % B1*x >= b1 rewritten as -B1*x <= -b1
% linprog expects the inequality pair first, then the equality pair
[xsol,fval,exitflag,output]=linprog(c,B,b,A,a,lb,ub)
fprintf('%s %s\n','Algorithm Used: ',output.algorithm);
disp('=================================');
disp('Press Enter to continue'); pause
options=optimset('linprog');
options=optimset(options,'LargeScale','off','Simplex','on','Display','iter');
[xsol,fval,exitflag,output]=linprog(c,B,b,A,a,lb,ub,[],options)
fprintf('%s %s\n','Algorithm Used: ',output.algorithm);
fprintf('%s','Reason for termination:')
if (exitflag > 0)
    fprintf('%s\n',' Convergence.');
else
    fprintf('%s\n',' No convergence.');
end
Observe that for the problem above the simplex algorithm does not work properly; hence,
linprog.m automatically uses the interior point method.
2.3.2 Approximation of discrete Data by a Curve
Solve the following discrete approximation problem and plot the
approximating curve.
Suppose the measurements of a real process over a 24-hour period are given by the following table
with 14 data values:
i 1 2 3 4 5 6 7 8 9 10 11 12 13 14
ti 0 3 7 8 9 10 12 14 16 18 19 20 21 23
ui 3 5 5 4 3 6 7 6 6 11 11 10 8 6
The values ti represent time and the ui are measurements. Assuming there is a mathematical relation
between the variables t and u, we would like to determine the coefficients a, b, c, d, e ∈ R of the
function
u(t) = at4 + bt3 + ct2 + dt + e
so that the value of the function u(ti) could best approximate
the discrete value ui at ti, i = 1, . . . , 14
in the Chebyshev sense. Hence, we need to solve the Chebyshev approximation problem
max_{i=1,...,14} | ui − (a ti^4 + b ti^3 + c ti^2 + d ti + e) | −→ min
s.t. a, b, c, d, e ∈ R.
(CA)
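The standard trick for handing (CA) to linprog is worth sketching here: introduce an extra variable δ bounding all residuals, minimize δ subject to −δ ≤ ui − u(ti) ≤ δ, and solve the resulting LP. The variable ordering z = [a; b; c; d; e; δ] below is my own choice, not the tutorial's.

```matlab
% Chebyshev (minimax) fit of u(t) = a*t^4 + b*t^3 + c*t^2 + d*t + e
% as an LP: minimize delta s.t. |u_i - u(t_i)| <= delta for all i.
t = [0 3 7 8 9 10 12 14 16 18 19 20 21 23]';
u = [3 5 5 4 3 6 7 6 6 11 11 10 8 6]';
V = [t.^4 t.^3 t.^2 t ones(size(t))];   % rows: [t_i^4 t_i^3 t_i^2 t_i 1]
f = [zeros(5,1); 1];                    % objective: delta alone
A = [ V -ones(14,1);                    %  V*p - u <= delta
     -V -ones(14,1)];                   % -(V*p - u) <= delta
b = [u; -u];
z = linprog(f, A, b);
p = z(1:5); delta = z(6);               % coefficients [a b c d e] and max error
```
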
In this assignment, I want you to locate two pieces of news detailin.docx
In this assignment, I want you to locate two pieces of news detailin.docx
whitneyleman54422
 
In this assignment worth 150 points, you will consider the present-d.docx
In this assignment worth 150 points, you will consider the present-d.docx
whitneyleman54422
 
In the readings thus far, the text identified many early American in.docx
In the readings thus far, the text identified many early American in.docx
whitneyleman54422
 
In the Roman Colony, leaders, or members of the court, were to be.docx
In the Roman Colony, leaders, or members of the court, were to be.docx
whitneyleman54422
 
In the provided scenario there are a few different crimes being .docx
In the provided scenario there are a few different crimes being .docx
whitneyleman54422
 
Stoichiometry Lab – The Chemistry Behind Carbonates reacting with .docx
Stoichiometry Lab – The Chemistry Behind Carbonates reacting with .docx
whitneyleman54422
 
Stock-Trak Portfolio Report Write-Up GuidelinesYou may want to.docx
Stock-Trak Portfolio Report Write-Up GuidelinesYou may want to.docx
whitneyleman54422
 
Stewart Guthrie, Faces in the Clouds Oxford UP, 1993.docx
Stewart Guthrie, Faces in the Clouds Oxford UP, 1993.docx
whitneyleman54422
 
Ad

Recently uploaded (20)

BINARY files CSV files JSON files with example.pptx
BINARY files CSV files JSON files with example.pptx
Ramakrishna Reddy Bijjam
 
Plate Tectonic Boundaries and Continental Drift Theory
Plate Tectonic Boundaries and Continental Drift Theory
Marie
 
june 10 2025 ppt for madden on art science is over.pptx
june 10 2025 ppt for madden on art science is over.pptx
roger malina
 
What are the benefits that dance brings?
What are the benefits that dance brings?
memi27
 
How to Manage Upselling of Subscriptions in Odoo 18
How to Manage Upselling of Subscriptions in Odoo 18
Celine George
 
Measuring, learning and applying multiplication facts.
Measuring, learning and applying multiplication facts.
cgilmore6
 
Assisting Individuals and Families to Promote and Maintain Health – Unit 7 | ...
Assisting Individuals and Families to Promote and Maintain Health – Unit 7 | ...
RAKESH SAJJAN
 
GEOGRAPHY-Study Material [ Class 10th] .pdf
GEOGRAPHY-Study Material [ Class 10th] .pdf
SHERAZ AHMAD LONE
 
Chalukyas of Gujrat, Solanki Dynasty NEP.pptx
Chalukyas of Gujrat, Solanki Dynasty NEP.pptx
Dr. Ravi Shankar Arya Mahila P. G. College, Banaras Hindu University, Varanasi, India.
 
Paper 109 | Archetypal Journeys in ‘Interstellar’: Exploring Universal Themes...
Paper 109 | Archetypal Journeys in ‘Interstellar’: Exploring Universal Themes...
Rajdeep Bavaliya
 
JHS SHS Back to School 2024-2025 .pptx
JHS SHS Back to School 2024-2025 .pptx
melvinapay78
 
Overview of Off Boarding in Odoo 18 Employees
Overview of Off Boarding in Odoo 18 Employees
Celine George
 
What is FIle and explanation of text files.pptx
What is FIle and explanation of text files.pptx
Ramakrishna Reddy Bijjam
 
Basic English for Communication - Dr Hj Euis Eti Rohaeti Mpd
Basic English for Communication - Dr Hj Euis Eti Rohaeti Mpd
Restu Bias Primandhika
 
Paper 107 | From Watchdog to Lapdog: Ishiguro’s Fiction and the Rise of “Godi...
Paper 107 | From Watchdog to Lapdog: Ishiguro’s Fiction and the Rise of “Godi...
Rajdeep Bavaliya
 
THERAPEUTIC COMMUNICATION included definition, characteristics, nurse patient...
THERAPEUTIC COMMUNICATION included definition, characteristics, nurse patient...
parmarjuli1412
 
PEST OF WHEAT SORGHUM BAJRA and MINOR MILLETS.pptx
PEST OF WHEAT SORGHUM BAJRA and MINOR MILLETS.pptx
Arshad Shaikh
 
ICT-8-Module-REVISED-K-10-CURRICULUM.pdf
ICT-8-Module-REVISED-K-10-CURRICULUM.pdf
penafloridaarlyn
 
LDMMIA GRAD Student Check-in Orientation Sampler
LDMMIA GRAD Student Check-in Orientation Sampler
LDM & Mia eStudios
 
The Man In The Back – Exceptional Delaware.pdf
The Man In The Back – Exceptional Delaware.pdf
dennisongomezk
 
BINARY files CSV files JSON files with example.pptx
BINARY files CSV files JSON files with example.pptx
Ramakrishna Reddy Bijjam
 
Plate Tectonic Boundaries and Continental Drift Theory
Plate Tectonic Boundaries and Continental Drift Theory
Marie
 
june 10 2025 ppt for madden on art science is over.pptx
june 10 2025 ppt for madden on art science is over.pptx
roger malina
 
What are the benefits that dance brings?
What are the benefits that dance brings?
memi27
 
How to Manage Upselling of Subscriptions in Odoo 18
How to Manage Upselling of Subscriptions in Odoo 18
Celine George
 
Measuring, learning and applying multiplication facts.
Measuring, learning and applying multiplication facts.
cgilmore6
 
Assisting Individuals and Families to Promote and Maintain Health – Unit 7 | ...
Assisting Individuals and Families to Promote and Maintain Health – Unit 7 | ...
RAKESH SAJJAN
 
GEOGRAPHY-Study Material [ Class 10th] .pdf
GEOGRAPHY-Study Material [ Class 10th] .pdf
SHERAZ AHMAD LONE
 
Paper 109 | Archetypal Journeys in ‘Interstellar’: Exploring Universal Themes...
Paper 109 | Archetypal Journeys in ‘Interstellar’: Exploring Universal Themes...
Rajdeep Bavaliya
 
JHS SHS Back to School 2024-2025 .pptx
JHS SHS Back to School 2024-2025 .pptx
melvinapay78
 
Overview of Off Boarding in Odoo 18 Employees
Overview of Off Boarding in Odoo 18 Employees
Celine George
 
What is FIle and explanation of text files.pptx
What is FIle and explanation of text files.pptx
Ramakrishna Reddy Bijjam
 
Basic English for Communication - Dr Hj Euis Eti Rohaeti Mpd
Basic English for Communication - Dr Hj Euis Eti Rohaeti Mpd
Restu Bias Primandhika
 
Paper 107 | From Watchdog to Lapdog: Ishiguro’s Fiction and the Rise of “Godi...
Paper 107 | From Watchdog to Lapdog: Ishiguro’s Fiction and the Rise of “Godi...
Rajdeep Bavaliya
 
THERAPEUTIC COMMUNICATION included definition, characteristics, nurse patient...
THERAPEUTIC COMMUNICATION included definition, characteristics, nurse patient...
parmarjuli1412
 
PEST OF WHEAT SORGHUM BAJRA and MINOR MILLETS.pptx
PEST OF WHEAT SORGHUM BAJRA and MINOR MILLETS.pptx
Arshad Shaikh
 
ICT-8-Module-REVISED-K-10-CURRICULUM.pdf
ICT-8-Module-REVISED-K-10-CURRICULUM.pdf
penafloridaarlyn
 
LDMMIA GRAD Student Check-in Orientation Sampler
LDMMIA GRAD Student Check-in Orientation Sampler
LDM & Mia eStudios
 
The Man In The Back – Exceptional Delaware.pdf
The Man In The Back – Exceptional Delaware.pdf
dennisongomezk
 
Ad

4.4 General descent methods - for differentiable Optimization Problems . . . 32
4.5 The Quasi-Newton Algorithm - idea . . . 33
4.5.1 Determination of Search Directions . . . 33
4.5.2 Line Search Strategies - determination of the step-length αk . . . 34
4.6 Trust Region Methods - idea . . . 35
4.6.1 Solution of the Trust-Region Sub-problem . . . 36
4.6.2 The Trust-Region Sub-problem Considered in the Matlab Optimization Toolbox . . . 38
4.6.3 Calling and Using fminunc.m to Solve Unconstrained Problems . . . 39
4.7 Derivative-free Optimization - direct (simplex) search methods . . . 45

Chapter 1

Introduction to Mathematical Programming

1.1 A general Mathematical Programming Problem

f(x) −→ min (max)
subject to x ∈ M.   (O)

The function f : Rn → R is called the objective function and the
set M ⊂ Rn is the feasible set of (O). Based on the description of the function f and the feasible set M, the problem (O) can be classified as a linear, quadratic, non-linear, semi-infinite, semi-definite, multiple-objective, discrete optimization problem, etc.¹

1.1.1 Some Classes of Optimization Problems

Linear Programming
If the objective function f and the defining functions of M are linear, then (O) is a linear optimization problem. General form of a linear programming problem:

c⊤x −→ min (max)
s.t. Ax = a
Bx ≤ b
lb ≤ x ≤ ub;   (LO)

i.e. f(x) = c⊤x and M = {x ∈ Rn | Ax = a, Bx ≤ b, lb ≤ x ≤ ub}. Among linear programming problems are such practical problems as linear discrete Chebychev approximation problems, transportation problems, network flow problems, etc.

¹ The terminology "mathematical programming" is currently contested, and many demand that problems of the form (O) always be called mathematical optimization problems. Here, we use both terms interchangeably.
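To make the form (LO) concrete, here is a toy instance solved with Python's scipy.optimize.linprog, as an illustrative analogue of the Matlab routines discussed later (the problem data below are invented for the example):

```python
import numpy as np
from scipy.optimize import linprog

# A toy instance of (LO): min c'x  s.t.  Ax = a,  lb <= x <= ub.
c = np.array([-1.0, -2.0])      # minimize -x1 - 2*x2, i.e. maximize x1 + 2*x2
A_eq = np.array([[1.0, 1.0]])   # x1 + x2 = 3
a = np.array([3.0])
bounds = [(0, 2), (0, 2)]       # box constraints 0 <= x <= 2

res = linprog(c, A_eq=A_eq, b_eq=a, bounds=bounds)
print(res.x, res.fun)           # optimal point x = (1, 2), objective value -5
```

Since x1 + x2 is fixed at 3 and x2 is capped at 2, the maximizer of x1 + 2*x2 must push x2 to its upper bound, giving (1, 2).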
Quadratic Programming

(1/2) x⊤Qx + q⊤x −→ min
s.t. Ax = a
Bx ≤ b
u ≤ x ≤ v   (QP)

Here the objective function f(x) = (1/2) x⊤Qx + q⊤x is a quadratic function, while the feasible set M = {x ∈ Rn | Ax = a, Bx ≤ b, u ≤ x ≤ v} is defined using linear functions. One of the well-known practical models of quadratic optimization problems is the least squares approximation problem, which has applications in almost all fields of science.

Non-linear Programming Problem
The general form of a non-linear optimization problem is

f(x) −→ min (max)
subject to
equality constraints: gi(x) = 0, i ∈ {1, 2, . . . , m}
inequality constraints: gj(x) ≤ 0, j ∈ {m + 1, m + 2, . . . , m + p}
box constraints: uk ≤ xk ≤ vk, k = 1, 2, . . . , n;   (NLP)

where we assume that all the functions are smooth, i.e. the functions
f, gl : U −→ R, l = 1, 2, . . . , m + p
are sufficiently often differentiable on the open subset U of Rn. The feasible set of (NLP) is given by
M = {x ∈ Rn | gi(x) = 0, i = 1, 2, . . . , m; gj(x) ≤ 0, j = m + 1, m + 2, . . . , m + p}.

We also write (NLP) in vectorial notation as

f(x) −→ min (max)
h(x) = 0
g(x) ≤ 0
u ≤ x ≤ v.

Problems of the form (NLP) arise frequently in the numerical solution of control problems, non-linear approximation, engineering design, finance and economics, signal processing, etc.
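A small instance of (NLP) can be sketched with Python's scipy.optimize.minimize, standing in for the Matlab toolbox routines introduced below (the objective and constraint here are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# A small (NLP): min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 = 1,  0 <= x <= 5.
f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
con = {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0}  # equality constraint h(x) = 0

res = minimize(f, x0=[0.5, 0.5], bounds=[(0, 5), (0, 5)], constraints=[con])
print(res.x)  # approximately (0, 1): the projection of (1, 2) onto the line x1 + x2 = 1
```

Projecting the unconstrained minimizer (1, 2) onto the feasible line gives (0, 1), which also respects the box constraints.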
Semi-infinite Programming

f(x) −→ min
s.t. G(x, y) ≤ 0, ∀ y ∈ Y;
hi(x) = 0, i = 1, . . . , p;
gj(x) ≤ 0, j = 1, . . . , q;
x ∈ Rn; Y ⊂ Rm.   (SIP)

Here, f, hi, gj : Rn → R, i ∈ {1, . . . , p}, j ∈ {1, . . . , q}, are smooth functions; G : Rn × Rm → R is such that, for each fixed y ∈ Y, G(·, y) : Rn → R is smooth and, for each fixed x ∈ Rn, G(x, ·) : Rm → R is smooth; furthermore, Y is a compact subset of Rm. Sometimes, the set Y can also be given as

Y = {y ∈ Rm | uk(y) = 0, k = 1, . . . , s1; vl(y) ≤ 0, l = 1, . . . , s2}

with smooth functions uk, vl : Rm → R, k ∈ {1, . . . , s1}, l ∈ {1, . . . , s2}.

The problem (SIP) is called semi-infinite since it is an optimization problem with a finite number of variables (i.e. x ∈ Rn) and an infinite number of constraints (i.e. G(x, y) ≤ 0 for all y ∈ Y). One of the well-known practical models of (SIP) is the continuous Chebychev approximation problem. This approximation problem can be used for the approximation of functions by polynomials, in filter design for digital signal processing, and in the spline approximation of robot trajectories.

Multiple-Objective Optimization
A multiple-objective optimization problem has the general form

min (f1(x), f2(x), . . . , fm(x))
s.t. x ∈ M;   (MO)

where the functions fk : Rn → R, k = 1, . . . , m, are smooth and the feasible set M is defined in terms of linear or non-linear functions. This problem is sometimes also called a multiple-criteria, vector optimization, goal attainment or multi-decision analysis problem. It is an optimization problem with more than one objective function (each such objective is a criterion). In this sense, (LO), (QP), (NLP) and (SIP) are single-objective (single-criterion) optimization problems. If there are only two objective functions in (MO), then (MO) is commonly called a bi-criteria optimization problem. Furthermore, if each of the functions f1, . . . , fm is linear and M is defined using linear functions, then (MO) is a linear multiple-criteria optimization problem; otherwise, it is non-linear. For instance, in a financial application we may need to maximize revenue and minimize risk at the same time, constrained upon the amount of our investment. Several engineering design problems can also be modeled as (MO); autonomous vehicle control, optimal truss design and antenna array design are just a few practical examples. In real life we may have several objectives to attain, but, unfortunately, we cannot satisfy all of them optimally at the same time, so we have to find a compromise solution among all our objectives. Such is the nature of multiple-objective optimization: the minimization (or maximization) of several objective functions cannot be done in the usual sense. Hence, one speaks of so-called efficient points as solutions of the problem. Using special constructions involving the objectives, the problem (MO) can be reduced to a problem with a single objective function.

1.1.2 Functions of the Matlab Optimization Toolbox

Linear and Quadratic Minimization problems.
linprog - Linear programming.
quadprog - Quadratic programming.

Nonlinear zero finding (equation solving).
fzero - Scalar nonlinear zero finding.
fsolve - Nonlinear system of equations solve (function solve).
Linear least squares (of matrix problems).
lsqlin - Linear least squares with linear constraints.
lsqnonneg - Linear least squares with nonnegativity constraints.

Nonlinear minimization of functions.
fminbnd - Scalar bounded nonlinear function minimization.
fmincon - Multidimensional constrained nonlinear minimization.
fminsearch - Multidimensional unconstrained nonlinear minimization, by the Nelder-Mead direct search method.
fminunc - Multidimensional unconstrained nonlinear minimization.
fseminf - Multidimensional constrained minimization, semi-infinite constraints.

Nonlinear least squares (of functions).
lsqcurvefit - Nonlinear curve fitting via least squares (with bounds).
lsqnonlin - Nonlinear least squares with upper and lower bounds.

Nonlinear minimization of multi-objective functions.
fgoalattain - Multidimensional goal attainment optimization.
fminimax - Multidimensional minimax optimization.

Chapter 2

Linear Programming Problems

2.1 Linear programming with MATLAB

For the linear programming problem
c⊤x −→ min
s.t. Ax ≤ a
Bx = b
lb ≤ x ≤ ub;   (LP)

MATLAB: The program linprog.m is used for the minimization of problems of the form (LP). Once you have defined the matrices A, B and the vectors c, a, b, lb and ub, you can call linprog.m to solve the problem. The general form of calling linprog.m is:

[x,fval,exitflag,output,lambda]=linprog(c,A,a,B,b,lb,ub,x0,options)

Input arguments:
c - coefficient vector of the objective
A - matrix of inequality constraints
a - right-hand side of the inequality constraints
B - matrix of equality constraints
b - right-hand side of the equality constraints
lb, [ ] - lb ≤ x: lower bounds for x; [ ] for no lower bounds
ub, [ ] - x ≤ ub: upper bounds for x; [ ] for no upper bounds
x0 - start vector for the algorithm, if known, else [ ]
options - options are set using the optimset function; they determine which algorithm to use, etc.

Output arguments:
x - optimal solution
fval - optimal value of the objective function
exitflag - tells whether the algorithm converged or not; exitflag > 0 means convergence
output - a struct for the number of iterations, the algorithm used and PCG iterations (when LargeScale='on')
lambda - a struct containing the Lagrange multipliers corresponding to the constraints

Setting Options
The input argument options is a structure which contains several parameters that you can use with a given Matlab optimization routine. For instance, to see the type of parameters you can use with the linprog.m routine, use

>>optimset(’linprog’)

Then Matlab displays the fields of the structure options. Accordingly, before calling linprog.m you
can set your preferred parameters in the options for linprog.m using the optimset command as:

>>options=optimset(’ParameterName1’,value1,’ParameterName2’,value2,...)

where ’ParameterName1’, ’ParameterName2’, ... are those displayed when you use optimset(’linprog’), and value1, value2, ... are their corresponding values. The following parameters and their corresponding values are frequently used with linprog.m:

Parameter      | Possible Values
’LargeScale’   | ’on’, ’off’
’Simplex’      | ’on’, ’off’
’Display’      | ’iter’, ’final’, ’off’
’MaxIter’      | maximum number of iterations
’TolFun’       | termination tolerance for the objective function
’TolX’         | termination tolerance for the iterates
’Diagnostics’  | ’on’ or ’off’ (when ’on’, prints diagnostic information about the objective function)

Algorithms under linprog
There are three types of algorithms implemented in linprog.m:
• a simplex algorithm;
• an active-set algorithm;
• a primal-dual interior point method.

The simplex and active-set algorithms are usually used to solve medium-scale linear programming problems. If either of these algorithms fails to solve a linear programming problem, then the problem at hand is a large-scale problem. Moreover, a linear
programming problem with several thousands of variables along with sparse matrices is considered to be a large-scale problem. However, if the coefficient matrices of your problem have a dense matrix structure, then linprog.m assumes that your problem is of medium scale. By default, the parameter ’LargeScale’ is always ’on’. When ’LargeScale’ is ’on’, linprog.m uses the primal-dual interior point algorithm. However, if you want to set it off so that you can solve a medium-scale problem, then use

>>options=optimset(’LargeScale’,’off’)

In this case linprog.m uses either the simplex algorithm or the active-set algorithm. (Nevertheless, recall that the simplex algorithm is itself an active-set strategy.) If you are specifically interested in using the active-set algorithm, then you need to set both the parameters ’LargeScale’ and ’Simplex’, respectively, to ’off’:

>>options=optimset(’LargeScale’,’off’,’Simplex’,’off’)

Note: Sometimes, even if we specify ’LargeScale’ to be ’off’, when a linear programming problem cannot be solved with a medium-scale algorithm, linprog.m automatically switches to the large-scale algorithm (interior point method).

2.2 The Interior Point Method for LP

Assuming that the simplex method is already known, this section gives a brief discussion of the primal-dual interior point method for (LP).
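The primal-dual iteration developed in this section can also be sketched numerically. The following Python/NumPy toy is an illustrative sketch of a basic path-following iteration on a made-up standard-form LP; it is not Matlab's Mehrotra predictor-corrector implementation:

```python
import numpy as np

# Toy standard-form LP: min x1 + 2*x2  s.t.  x1 + x2 = 2,  x >= 0  (optimum x = (2, 0)).
A = np.array([[1.0, 1.0]]); a = np.array([2.0]); c = np.array([1.0, 2.0])
n, m = 2, 1
x, w, s = np.ones(n), np.zeros(m), np.ones(n)   # strictly interior start: x, s > 0
sigma = 0.1                                      # centering parameter

for _ in range(60):
    mu = x @ s / n                               # average duality gap
    if mu < 1e-10:
        break
    # Newton system for the KKT map F(x, w, s), with complementarity target sigma*mu
    J = np.block([
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    rhs = -np.concatenate([A.T @ w + s - c, A @ x - a, x * s - sigma * mu])
    d = np.linalg.solve(J, rhs)
    dx, dw, ds = d[:n], d[n:n + m], d[n + m:]
    # step length keeping (x, s) strictly positive (fraction-to-boundary rule)
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
    x, w, s = x + alpha * dx, w + alpha * dw, s + alpha * ds

print(x)  # approaches the optimal vertex (2, 0)
```

The iterates stay strictly inside the nonnegative orthant while the duality gap measure mu is driven to zero, which is exactly the behavior described below for the central-path system.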
Let A ∈ Rm×n, a ∈ Rm, B ∈ Rp×n, b ∈ Rp. Then, for the linear programming problem

c⊤x −→ min
s.t. Ax ≤ a
Bx = b
lb ≤ x ≤ ub;   (LP)

if we set x̃ = x − lb, we get

c⊤x̃ − c⊤lb −→ min
s.t. Ax̃ ≤ a − A(lb)
Bx̃ = b − B(lb)
0 ≤ x̃ ≤ ub − lb.

Now, by adding slack variables y ∈ Rm and s ∈ Rn (see below), we can write (LP) as

c⊤x̃ − c⊤lb −→ min
s.t. Ax̃ + y = a − A(lb)
Bx̃ = b − B(lb)
x̃ + s = ub − lb
x̃ ≥ 0, y ≥ 0, s ≥ 0.

Thus, using a single matrix for the constraints, we have

c⊤x̃ − c⊤lb −→ min
s.t.
[ A    Im     Om×n ] [ x̃ ]   [ a − A(lb) ]
[ B    Op×m   Op×n ] [ y ] = [ b − B(lb) ]
[ In   On×m   In   ] [ s ]   [ ub − lb   ]
x̃ ≥ 0, y ≥ 0, s ≥ 0.

Since a constant in the objective does not create a difficulty, we assume w.l.o.g. that we have a problem
of the form

c⊤x −→ min
s.t. Ax = a
x ≥ 0.   (LP’)

In fact, when you call linprog.m with the original problem (LP), this transformation is done by Matlab internally. The aim here is to briefly explain the algorithm used when you set the LargeScale parameter to ’on’ in the options of linprog. Now the dual of (LP’) is the problem

a⊤w −→ max
s.t. A⊤w ≤ c
w ∈ Rm.   (LPD)

Using a slack variable s ∈ Rn we have

a⊤w −→ max
s.t. A⊤w + s = c
w ∈ Rm, s ≥ 0.   (LPD)

The problems (LP’) and (LPD) are called primal-dual pairs.

Optimality Condition
It is well known that a vector (x∗, w∗, s∗) is a solution of the primal-dual pair if and only if it satisfies the Karush-Kuhn-Tucker (KKT) optimality conditions. The KKT conditions here can be written as

A⊤w + s = c
Ax = a
xi si = 0, i = 1, . . . , n (complementarity conditions)
(x, s) ≥ 0.

This system can be written as

F(x, w, s) = [ A⊤w + s − c ; Ax − a ; XSe ] = 0,   (2.1)
(x, s) ≥ 0,   (2.2)

where X = diag(x1, x2, . . . , xn), S = diag(s1, s2, . . . , sn) ∈ Rn×n and e = (1, 1, . . . , 1)⊤ ∈ Rn. Primal-dual interior point methods generate iterates (xk, wk, sk) that satisfy the system (2.1) & (2.2) so that (2.2) is satisfied strictly; i.e. xk > 0, sk > 0. That is, for each k, (xk, sk) lies in the interior of the nonnegative orthant. Thus the naming of the method as
interior point method. Interior point methods use a variant of the Newton method for the system (2.1) & (2.2).

Central Path
Let τ > 0 be a parameter. The central path is a curve C which is the set of all points (x(τ), w(τ), s(τ)) that satisfy the parametric system

A⊤w + s = c,
Ax = a,
xi si = τ, i = 1, . . . , n,
(x, s) > 0.   (2.3)

This implies that C is the set of all points (x(τ), w(τ), s(τ)) that satisfy

F(x(τ), w(τ), s(τ)) = (0, 0, τe)⊤.

Obviously, if we let τ ↓ 0, the system (2.3) approaches the system (2.1) & (2.2). Hence, theoretically, primal-dual algorithms solve the system

J(x(τ), w(τ), s(τ)) [ △x(τ) ; △w(τ) ; △s(τ) ] = [ 0 ; 0 ; −XSe + τe ]

to determine a search direction (△x(τ), △w(τ), △s(τ)), where J(x(τ), w(τ), s(τ)) is the Jacobian of F(x(τ), w(τ), s(τ)). The new iterate will be

(x+(τ), w+(τ), s+(τ)) = (x(τ), w(τ), s(τ)) + α(△x(τ), △w(τ), △s(τ)),

where α is a step length, usually α ∈ (0, 1], chosen in such a way that (x+(τ), w+(τ), s+(τ)) ∈ C. However, practical primal-dual interior point methods use τ = σµ, where σ ∈ [0, 1] is a constant and

µ = x⊤s / n.

The term x⊤s is the duality gap between the primal and dual problems; thus, µ is the measure of the (average) duality gap. Note that, in general, µ ≥ 0, and µ = 0 when x and s are primal and dual optimal, respectively. Thus the Newton step (△x(µ), △w(µ), △s(µ)) is determined by solving

[ On   A⊤     In   ] [ △x(µ) ]   [ 0           ]
[ A    Om×m   Om×n ] [ △w(µ) ] = [ 0           ]
[ S    On×m   X    ] [ △s(µ) ]   [ −XSe + σµe  ]

The Newton step (△x(µ), △w(µ), △s(µ)) is also called a centering direction that pushes the iterates (x+(µ), w+(µ), s+(µ)) towards the central path C along which the
algorithm converges more rapidly. The parameter σ is called the centering parameter. If σ = 0, the search direction is known as an affine scaling direction.

Primal-Dual Interior Point Algorithm

Step 0: Start with (x0, w0, s0), where (x0, s0) > 0, and set k = 0.

Step k: Choose σk ∈ [0, 1], set µk = (xk)⊤sk/n and solve

[ On   A⊤     In   ] [ △xk ]   [ 0               ]
[ A    Om×m   Om×n ] [ △wk ] = [ 0               ]
[ Sk   On×m   Xk   ] [ △sk ]   [ −XkSke + σkµke  ]

Set
(xk+1, wk+1, sk+1) ← (xk, wk, sk) + αk(△xk, △wk, △sk),
choosing αk ∈ (0, 1] so that (xk+1, sk+1) > 0.
If (convergence) STOP; else set k ← k + 1 and GO TO Step k.

The Matlab LargeScale option of linprog.m uses a predictor-corrector-like method of Mehrotra to guarantee that (xk, sk) > 0 for each k, k = 1, 2, . . .

Predictor Step: A search direction (a predictor step) dkp = (△xk, △wk, △sk) is obtained by solving the non-parameterized system (2.1) & (2.2).

Corrector Step: For a centering parameter σ, a corrector step dkc is obtained from

dkc = [F′(xk, wk, sk)]−1 (F(xk + △xk, wk + △wk, sk + △sk) − σ ê),

where ê ∈ Rn+m+n is the vector whose last n components are equal to 1.

Iteration: For a step length α ∈ (0, 1],

(xk+1, wk+1, sk+1) = (xk, wk, sk) + α(dkp + dkc).

2.3 Using linprog to solve LP’s

2.3.1 Formal problems

1. Solve the following linear optimization problem using linprog.m.

2x1 + 3x2 → max
s.t. x1 + 2x2 ≤ 8
2x1 + x2 ≤ 10
x2 ≤ 3
x1, x2 ≥ 0

For this problem there are no equality constraints and box
constraints, i.e. B=[], b=[], lb=[] and ub=[]. Moreover,

>>c=[-2,-3]’; % linprog solves minimization problems
>>A=[1,2;2,1;0,1];
>>a=[8,10,3]’;
>>options=optimset(’LargeScale’,’off’);

i) If you are interested only in the solution, then use
>>xsol=linprog(c,A,a,[],[],[],[],[],options)
ii) To see whether the algorithm really converged or not, you need to access the exit flag through:
>>[xsol,fval,exitflag]=linprog(c,A,a,[],[],[],[],[],options)
iii) If you need the Lagrange multipliers, you can access them using:
>>[xsol,fval,flag,output,LagMult]=linprog(c,A,a,[],[],[],[],[],options)
iv) To display the iterates you can use:
>>xsol=linprog(c,A,a,[],[],[],[],[],optimset(’Display’,’iter’))

2. Solve the following LP using linprog.m

c⊤x −→ max
Ax = a
Bx ≥ b
Dx ≤ d
lb ≤ x ≤ ub

where

(A|a) = ( 1 1  1 1 1 1 | 10 )
        ( 5 0 −3 0 1 0 | 15 )

(B|b) = ( 1 2 3 0 0 0 | 5 )
        ( 0 1 2 3 0 0 | 7 )
        ( 0 0 1 2 3 0 | 8 )
        ( 0 0 0 1 2 3 | 8 )

(D|d) = ( 3 0 0 0 −2 1 | 5 )
        ( 0 4 0 −2 0 3 | 7 )

lb = (−2, 0, −1, −1, −5, 1)⊤, ub = (7, 2, 2, 3, 4, 10)⊤, c = (1, −2, 3, −4, 5, −6)⊤.

When there are large matrices, it is convenient to write m-files. Thus one possible solution is the following:

function LpExa2
A=[1,1,1,1,1,1;5,0,-3,0,1,0];
a=[10,15]’;
B1=[1,2,3,0,0,0; 0,1,2,3,0,0;...
0,0,1,2,3,0;0,0,0,1,2,3];
b1=[5,7,8,8]; b1=b1(:);
D=[3,0,0,0,-2,1;0,4,0,-2,0,3];
d=[5,7]; d=d(:);
lb=[-2,0,-1,-1,-5,1]’; ub=[7,2,2,3,4,10]’;
c=[1,-2,3,-4,5,-6]; c=c(:);
B=[-B1;D]; b=[-b1;d]; % Bx >= b is rewritten as -B1*x <= -b1
% linprog minimizes, so pass -c for the maximization problem
[xsol,fval,exitflag,output]=linprog(-c,A,a,B,b,lb,ub)
fprintf(’%s %s\n’, ’Algorithm Used: ’, output.algorithm);
disp(’=================================’);
disp(’Press Enter to continue’); pause
options=optimset(’linprog’);
options=optimset(options,’LargeScale’,’off’,’Simplex’,’on’,’Display’,’iter’);
[xsol,fval,exitflag,output]=linprog(-c,A,a,B,b,lb,ub,[],options)
fprintf(’%s %s\n’, ’Algorithm Used: ’, output.algorithm);
fprintf(’%s’,’Reason for termination: ’)
if (exitflag)
fprintf(’%s\n’,’ Convergence.’);
else
fprintf(’%s\n’,’ No convergence.’);
end

Observe that for the problem above the simplex algorithm does not work properly; hence, linprog.m automatically uses the interior point method.

2.3.2 Approximation of discrete Data by a Curve

Solve the following discrete approximation problem and plot the approximating curve.
Suppose the measurements of a real process over a 24-hour period are given by the following table with 14 data values:

i  | 1 2 3 4 5  6  7  8  9  10 11 12 13 14
ti | 0 3 7 8 9  10 12 14 16 18 19 20 21 23
ui | 3 5 5 4 3  6  7  6  6  11 11 10 8  6

The values ti represent time and the ui are measurements. Assuming there is a mathematical connection between the variables t and u, we would like to determine the coefficients a, b, c, d, e ∈ R of the function

u(t) = a t^4 + b t^3 + c t^2 + d t + e

so that the value of the function u(ti) best approximates the discrete value ui at ti, i = 1, . . . , 14, in the Chebychev sense. Hence, we need to solve the Chebychev approximation problem

max_{i=1,...,14} | ui − (a ti^4 + b ti^3 + c ti^2 + d ti + e) | → min
s.t. a, b, c, d, e ∈ R   (CA)
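Problem (CA) is not smooth, but it can be rewritten as a linear program by introducing the maximal deviation r as an extra variable: minimize r subject to −r ≤ ui − (a ti^4 + b ti^3 + c ti^2 + d ti + e) ≤ r for all i. As a sketch of this reformulation, here it is solved with Python's scipy.optimize.linprog (an illustrative analogue of the Matlab call, using the table's data):

```python
import numpy as np
from scipy.optimize import linprog

t = np.array([0, 3, 7, 8, 9, 10, 12, 14, 16, 18, 19, 20, 21, 23], dtype=float)
u = np.array([3, 5, 5, 4, 3, 6, 7, 6, 6, 11, 11, 10, 8, 6], dtype=float)

# Columns t^4, t^3, t^2, t, 1 for the polynomial u(t) = a*t^4 + b*t^3 + c*t^2 + d*t + e.
V = np.vander(t, 5)
ones = np.ones((len(t), 1))

# Variables z = (a, b, c, d, e, r); minimize r subject to |u_i - V_i z| <= r.
cost = np.zeros(6); cost[-1] = 1.0
A_ub = np.vstack([np.hstack([ V, -ones]),    #  V z - r <= u
                  np.hstack([-V, -ones])])   # -V z - r <= -u
b_ub = np.concatenate([u, -u])
bounds = [(None, None)] * 5 + [(0, None)]    # coefficients free, deviation r >= 0

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
coef, r = res.x[:5], res.x[5]
print(r, np.max(np.abs(u - V @ coef)))       # the max deviation equals the optimal r
```

At the optimum the largest absolute residual is exactly the minimized variable r, which is what makes this LP equivalent to the min-max problem (CA).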