Optimization.jl
Some solvers are available directly in the Optimization.jl package, without the need to install any of the solver wrapper packages.
Methods
LBFGS
: The popular quasi-Newton method that uses a limited-memory BFGS approximation of the inverse Hessian, provided through a wrapper over the L-BFGS-B Fortran routine accessed from the LBFGSB.jl package. It directly supports box constraints, and it can also handle arbitrary nonlinear constraints through an Augmented Lagrangian method with bound constraints, as described in Section 17.4 of Numerical Optimization by Nocedal and Wright. It thus serves as a general-purpose nonlinear optimization solver available directly in Optimization.jl.
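For instance, a purely box-constrained problem only needs lb and ub on the OptimizationProblem. A minimal sketch (the bound values below are illustrative, not recommendations):
using Optimization, Zygote
# Rosenbrock objective minimized over an illustrative box
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
optf = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optf, zeros(2), [1.0, 100.0],
    lb = [-0.5, -0.5], ub = [0.5, 0.5])
sol = solve(prob, Optimization.LBFGS())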
Sophia
: Based on the paper https://arxiv.org/abs/2305.14342. It incorporates second-order information in the form of the diagonal of the Hessian matrix, avoiding the need to compute the full Hessian. It has been shown to converge faster than first-order methods such as Adam and SGD.
solve(problem, Sophia(; η, βs, ϵ, λ, k, ρ))
η is the learning rate
βs are the decay of momentums
ϵ is the epsilon value
λ is the weight decay parameter
k is the interval (in iterations) at which the diagonal of the Hessian matrix is re-computed
ρ is the momentum
Defaults:
η = 0.001
βs = (0.9, 0.999)
ϵ = 1e-8
λ = 0.1
k = 10
ρ = 0.04
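These defaults can be overridden by passing keyword arguments when constructing the solver; a short sketch (the values below are purely illustrative, and prob is assumed to be an OptimizationProblem set up as in the examples below):
# Sophia with a larger learning rate and more frequent Hessian-diagonal refreshes (illustrative values)
opt = Optimization.Sophia(η = 0.01, k = 5)
sol = solve(prob, opt)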
Examples
Unconstrained Rosenbrock problem
using Optimization, Zygote
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]
optf = OptimizationFunction(rosenbrock, AutoZygote())
prob = Optimization.OptimizationProblem(optf, x0, p)
sol = solve(prob, Optimization.LBFGS())
retcode: Success
u: 2-element Vector{Float64}:
0.9999997057368228
0.999999398151528
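The returned solution carries the final objective value alongside the minimizer, and both can be inspected directly:
sol.objective  # final objective value, ≈ 0 at the Rosenbrock minimum
sol.u          # minimizer, ≈ [1.0, 1.0]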
With nonlinear and bounds constraints
function con2_c(res, x, p)
res .= [x[1]^2 + x[2]^2, (x[2] * sin(x[1]) + x[1]) - 5]
end
optf = OptimizationFunction(rosenbrock, AutoZygote(), cons = con2_c)
prob = OptimizationProblem(optf, x0, p, lcons = [1.0, -Inf],
ucons = [1.0, 0.0], lb = [-1.0, -1.0],
ub = [1.0, 1.0])
res = solve(prob, Optimization.LBFGS(), maxiters = 100)
retcode: Success
u: 2-element Vector{Float64}:
0.783397417853095
0.6215211044097776
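As a quick sanity check, the constraint functions can be re-evaluated at the returned point (res_c below is just a scratch buffer introduced for illustration):
res_c = zeros(2)
con2_c(res_c, res.u, p)
# res_c[1] should be ≈ 1.0 (the equality constraint is active) and
# res_c[2] should be negative (the inequality constraint is satisfied)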
Train NN with Sophia
using Optimization, Lux, Zygote, MLUtils, Statistics, Plots, Random, ComponentArrays, Printf
x = rand(10000)
y = sin.(x)
data = MLUtils.DataLoader((x, y), batchsize = 100)
# Define the neural network
model = Chain(Dense(1, 32, tanh), Dense(32, 1))
ps, st = Lux.setup(Random.default_rng(), model)
ps_ca = ComponentArray(ps)
smodel = StatefulLuxLayer{true}(model, nothing, st)
function callback(state, l)
    state.iter % 25 == 1 && @printf "Iteration: %5d, Loss: %.6e\n" state.iter l
return l < 1e-1 ## Terminate if loss is small
end
function loss(ps, data)
ypred = [smodel([data[1][i]], ps)[1] for i in eachindex(data[1])]
return sum(abs2, ypred .- data[2])
end
optf = OptimizationFunction(loss, AutoZygote())
prob = OptimizationProblem(optf, ps_ca, data)
res = Optimization.solve(prob, Optimization.Sophia(), callback = callback)
retcode: Default
u: ComponentVector{Float32}(layer_1 = (weight = Float32[1.4031048; -1.576963; … ; -2.317541; 0.30409494;;], bias = Float32[0.9096854, 0.40020156, 0.038926773, -0.6519473, -0.5720045, -0.39070684, 0.9278469, 0.3105595, 0.17594115, -0.073721446 … -0.43140942, 0.45602673, -0.3579044, 0.34728053, 0.33192325, 0.3349182, 0.87993675, 0.28018722, -0.72215813, -0.3076666]), layer_2 = (weight = Float32[0.24815032 0.19610713 … 0.14433084 -0.014661686], bias = Float32[0.03913372]))
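The trained parameters in res.u can be passed back through the stateful model to inspect the fit; a sketch using the already-imported Plots (the evaluation grid and labels are illustrative):
xs = range(0, 1, length = 200)
ypred = [smodel([xi], res.u)[1] for xi in xs]
plot(xs, sin.(xs), label = "sin(x)")
plot!(xs, ypred, label = "NN prediction")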