The processes studied in this session are third-order systems with a double pole at $s=-0.5$ and a single pole at $s=-2$, $$P(s) = \dfrac{1}{(2s+1)^2(0.5s+1)},$$ with or without an input nonlinearity.
The code below must be run once every time the kernel is restarted.
using ControlSystems, OrdinaryDiffEq, Plots, Pkg, LinearAlgebra, Statistics
gr(show=false, size=(500,400)) # Set defaults for plotting
We now define the process model and some parameters for the simulation.
P = tf(1,[2.,1])^2*tf(1,[0.5,1]) # Process model
h = 0.1 # Sample time (only used for plots)
Ps = ss(P); # State-space representation of process model
s = Simulator(Ps)
x0 = [0.,0,0] # Initial state
Tf = 20 # Length of experiments (seconds)
t = 0:h:Tf # Time vector
tspan = (0.0,Tf)
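As a quick sanity check that the model was entered correctly, we can inspect its poles. Depending on the ControlSystems.jl version, this function is called `pole` (older releases, matching the API used here) or `poles` (newer releases):
pole(Ps) # expect a double pole at -0.5 and a single pole at -2.0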
Even with interactive tools, tuning a PID controller by hand can be difficult. A basic control course teaches us how to design a PID controller based on, e.g., pole placement. Here, we will instead formulate an optimization problem where we minimize a cost function $J(k_p,k_i,k_d)$ based on the step response of the closed-loop system $$J(k_p,k_i,k_d) = \dfrac{1}{T_f}\int_0^{T_f} |r(t)-y(t)| \, dt = \dfrac{1}{T_f}\int_0^{T_f} |e(t)| \, dt$$ Since the reference is a unit step, $e(t) = 1 - y(t)$, and in the code below the integral is approximated by the mean of $|1-y(t)|$ over the sampled time points.
# Pkg.add("NLopt") # Install NLopt if not already installed
using NLopt, ForwardDiff
p = [0.1, 0.1, 0.1] # Initial guess [kp, ki, kd]
Kpid(kp, ki, kd) = pid(kp=kp, ki=ki, kd=kd) # PID controller transfer function
Kpid(p) = Kpid(p...)                        # Convenience method: accept a parameter vector
function timedomain(p)
    C = Kpid(p[1], p[2], p[3])
    L = feedback(P*C) |> ss # Closed-loop system from reference to output
    step(L, t).y            # Step response sampled at the time points in t
end
function costfun(p)
    y = timedomain(p)
    mean(abs, 1 .- y) # ≈ integrated absolute error (IAE)
end
f_cfg = ForwardDiff.GradientConfig(costfun, p) # Preallocated work buffers for ForwardDiff
function f(p::Vector, grad::Vector=[])
    if length(grad) > 0 # NLopt passes an empty grad when no gradient is requested
        grad .= ForwardDiff.gradient(costfun, p, f_cfg)
    end
    costfun(p)
end
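NLopt calls the objective with the signature `f(x, grad)` and expects `grad` to be filled in place whenever it is non-empty. A quick, purely illustrative check that the wiring works:
g = zeros(3)
f(p, g) # returns the cost and fills g with ∂J/∂kp, ∂J/∂ki, ∂J/∂kd
g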
function runopt(p; f_tol = 1e-5, x_tol = 1e-3)
    opt = Opt(:LD_MMA, 3)          # Gradient-based MMA with 3 decision variables
    opt.lower_bounds = 1e-6ones(3) # Keep all gains strictly positive
    opt.xtol_rel = x_tol
    opt.ftol_rel = f_tol
    opt.min_objective = f
    NLopt.optimize(opt, p)[2]      # Return the optimized parameter vector
end
@info "Starting Optimization"
@time p = runopt(p, x_tol=1e-6)
y = timedomain(p)
plot(t,y')
9.082014 seconds (25.27 M allocations: 1.346 GiB, 4.38% gc time, 99.01% compilation time)
If we look at the sensitivity functions of the closed-loop system obtained with the optimized controller, we see that the optimization procedure produced quite an aggressive controller with poor robustness.
C = Kpid(p...)
gangoffourplot(P,C, exp10.(LinRange(-1,3,500)), legend=false)
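To put a number on this, we can read off the peak of the sensitivity magnitude over a frequency grid (an optional, illustrative computation; peaks well above roughly 1.5–2 typically indicate poor robustness margins):
S = 1/(1 + P*C) # Sensitivity function for the optimized controller
Ms_peak = maximum(vec(bode(S, exp10.(LinRange(-1, 3, 500)))[1]))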
To combat this, we can add bounds on the maximum value of the sensitivity functions:
Ω = exp10.(LinRange(-1, 2, 150)) # Frequency grid for the constraints
p0 = [0.1, 0.1, 0.1]             # Initial guess for the constrained problem
function freqdomain(p)
    C = Kpid(p[1], p[2], p[3])
    S = 1/(1 + P*C) # Sensitivity function
    T = tf(1.) - S  # Complementary sensitivity function
    Sw = vec(bode(S, Ω, unwrap=false)[1]) # Magnitude of S over the frequency grid
    Tw = vec(bode(T, Ω, unwrap=false)[1]) # Magnitude of T over the frequency grid
    Sw, Tw
end
Mt = 1.2
Ms = 1.2 # Change these to modify the constraints
p = p0
function constraintfun(p)
    Sw, Tw = freqdomain(p)
    [maximum(Sw) - Ms; maximum(Tw) - Mt]
end
g_cfg = ForwardDiff.JacobianConfig(constraintfun, p)
function c(result, p::Vector, grad)
    if length(grad) > 0
        grad .= ForwardDiff.jacobian(constraintfun, p, g_cfg)' # NLopt expects ∂cᵢ/∂pⱼ in grad[j, i]
    end
    result .= constraintfun(p)
end
function runopt(p; f_tol = 1e-5, x_tol = 1e-3, c_tol = 1e-8)
    opt = Opt(:LD_SLSQP, 3) # SLSQP handles nonlinear inequality constraints
    lower_bounds!(opt, 1e-6ones(3))
    xtol_rel!(opt, x_tol)
    ftol_rel!(opt, f_tol)
    min_objective!(opt, f)
    inequality_constraint!(opt, c, c_tol*ones(2)) # Two constraints: on max|S| and max|T|
    NLopt.optimize(opt, p)[2]
end
p = runopt(p, x_tol=1e-6)
y = timedomain(p)
Sw,Tw = freqdomain(p)
plot(t,y', layout=2, size=(800,400))
plot!(Ω, [Sw Tw] , lab=["Sw" "Tw"], subplot=2, xscale=:log10, yscale=:log10)
plot!([Ω[1],Ω[end]], [Ms,Ms], c = :black, l=:dash, subplot=2, lab="Ms")
plot!([Ω[1],Ω[end]], [Mt,Mt], c = :purple, l=:dash, subplot=2, lab="Mt", ylims=(0.01,3))
C = Kpid(p...)
gangoffourplot(P,C, exp10.(LinRange(-1,3,500)), legend=false)
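As a final, optional check, classical gain and phase margins of the loop transfer function give another view of the achieved robustness; `marginplot` from ControlSystems.jl annotates a Bode plot with both:
marginplot(P*C, exp10.(LinRange(-2, 2, 500)))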