Linear Solve with Caching Interface
Often, one may want to cache information that is reused between different linear solves. For example, if one is going to perform:
A \ b1
A \ b2
then it would be more efficient to LU-factorize one time and reuse the factorization:
using LinearAlgebra
F = lu(A)  # factorize once
F \ b1     # each solve reuses the stored factorization
F \ b2
LinearSolve.jl's caching interface automates this process to use the most efficient means of solving and re-solving linear systems. To do this with LinearSolve.jl, you simply init a cache, solve!, replace b, and solve again. This looks like:
using LinearSolve
n = 4
A = rand(n, n)
b1 = rand(n);
b2 = rand(n);
prob = LinearProblem(A, b1)
linsolve = init(prob)
sol1 = solve!(linsolve)
retcode: Default
u: 4-element Vector{Float64}:
11.351481602082895
0.00170310156685088
5.688285304214941
-12.181708611473645
linsolve.b = b2
sol2 = solve!(linsolve)
sol2.u
4-element Vector{Float64}:
-8.578037876029924
0.6381304325783519
-3.279619071534978
9.345841714504697
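The same cache can be reused across many right-hand sides. Here is a minimal sketch continuing with the variables defined above; each assignment to linsolve.b is followed by a re-solve that reuses the stored factorization rather than refactorizing A:
# Sketch: reuse one cache for several right-hand sides.
# Only the solve step is repeated; A is not refactorized.
for k in 1:3
    linsolve.b = rand(n)
    sol = solve!(linsolve)
end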
Then refactorization will occur when a new A is given:
A2 = rand(n, n)
linsolve.A = A2
sol3 = solve!(linsolve)
sol3.u
4-element Vector{Float64}:
-13.973958104597402
3.526894357644472
12.268724911425853
0.07119433472401909
The factorization occurs on the first solve, and it is stored in the cache. You can retrieve this cache via sol.cache, which is the same object returned by init, but updated to know not to recompute the factorization.
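Based on that description, one would expect the following identity check to hold (a small sketch, not output generated by the tutorial):
# The solution's cache field is the cache object created by init:
sol3.cache === linsolve  # expected to be true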
The advantage of using LinearSolve.jl in this form is, of course, that it is efficient while being agnostic to the linear solver. One can easily swap in iterative solvers, sparse solvers, etc., and it will do all the tricks like caching the symbolic factorization if the sparsity pattern is unchanged.
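As a sketch of what such a swap looks like, the algorithm can be passed as the second argument to init; here KrylovJL_GMRES is used as an example of an iterative solver, and the variable names (iterprob, itercache) are purely illustrative:
# Same caching workflow, but with an iterative Krylov solver instead of LU:
iterprob = LinearProblem(A, b1)
itercache = init(iterprob, KrylovJL_GMRES())
sol_iter1 = solve!(itercache)
itercache.b = b2
sol_iter2 = solve!(itercache)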