Issue with LinearAlgebra (and Julia) 1.12

I’m trying the pre-release of Julia 1.12. When performing an MPO-MPS contraction using the DensityMatrix algorithm, eigen (and in particular syevd) is throwing a LAPACKError with an info value > 0, which should mean a convergence problem has occurred (https://www.netlib.org/lapack/explore-html/d8/d30/group__heevd_ga3cac18a55c8b7b87a42cfa26c1f3499a.html).
I know that since the issue is related to LinearAlgebra this is not the right place to report it, but before doing that I wanted to ask whether someone else here has had this problem and is already working on it.

EDIT

I have created the issue on LinearAlgebra: LAPACKError from `syevd` reported only for 1 threaded diagonalization and only for v1.12 · Issue #1313 · JuliaLang/LinearAlgebra.jl · GitHub

Thanks for reporting that, it is helpful to keep track of these things. Hopefully they can address that issue on the Julia side, otherwise we’ll have to find a way to work around it from the ITensor side.

Have you tried other LAPACK backends like MKL? I assume you are using OpenBLAS?
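For reference, a minimal sketch of switching the backend, assuming MKL.jl is installed in the active environment:

```julia
using MKL            # loading MKL.jl switches the libblastrampoline BLAS/LAPACK backend to MKL
using LinearAlgebra

BLAS.get_config()    # inspect which BLAS/LAPACK libraries are currently loaded
```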

Thanks for the advice. MKL does not seem to give the error, but I’m not confident it is a final solution. Experimenting a little more, it seems the CPU architecture also plays a role: the specific matrix does not raise the error on another machine. I do remember having the error on that second machine as well, but I was not tracking it yet, so I’ll experiment a bit more.

It seems the change comes from this discussion, where they decided to change the algorithm used from syevr to syevd. I don’t know whether LinearAlgebra will decide to go back to syevr (from the docstring they seem to be open to this possibility). On the ITensor side, it will be up to you to decide whether you want to keep the current behavior or go with the new default. This could also be decided on the user’s end by overloading `default_eigen_alg` with something like:

LinearAlgebra.default_eigen_alg(::AbstractMatrix) = LinearAlgebra.RobustRepresentations()

which is what I think I’m going to do.
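As an alternative to overloading the default (which is technically type piracy on LinearAlgebra), the algorithm can also be requested per call, assuming the `alg` keyword described in the 1.12 `eigen` docstring for Symmetric/Hermitian matrices:

```julia
using LinearAlgebra

H = Hermitian(randn(100, 100))
F = eigen(H; alg = LinearAlgebra.RobustRepresentations())  # explicitly take the MR / syevr path
```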

Thanks for the update, I was wondering if this was caused by a change to the default algorithm in LinearAlgebra.

Do you know the tradeoffs between the different algorithms? For now we could just explicitly call syevr from the ITensor eigendecomposition.

In our SVD function we have logic for trying certain SVD algorithms (we start with one that is generally faster but not as robust), catching errors when they arise, and then trying other ones. Maybe something similar is warranted for the eigendecomposition function.
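Something along these lines could work as a rough sketch; the try-DC-first order and the error type checked are assumptions, not the actual ITensor SVD logic (and it assumes the Julia ≥ 1.12 `alg` keyword):

```julia
using LinearAlgebra

# Rough sketch: try the generally faster divide-and-conquer (syevd) first and,
# if LAPACK fails to converge, fall back to robust representations (syevr).
function eigen_with_fallback(H::Hermitian)
    try
        return eigen(H; alg = LinearAlgebra.DivideAndConquer())
    catch err
        err isa LinearAlgebra.LAPACKException || rethrow()
        return eigen(H; alg = LinearAlgebra.RobustRepresentations())
    end
end
```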

syevr is the algorithm currently used (Julia v1.11.5), while syevd should become the next default.

Unfortunately I don’t have extensive knowledge of this, but it seems the choice between Divide and Conquer (DC) and Multiple Relatively Robust Representations (MR) is strongly matrix and architecture dependent.

MKL’s own documentation states that MR is generally faster (Ref):

> Note that for most cases of real symmetric eigenvalue problems the default choice should be syevr function as its underlying algorithm is faster and uses less workspace. ?syevd requires more workspace but is faster in some cases, especially for large matrices.

But actual benchmarks seem to show the opposite, although the difference appears small (around 10-20%).
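For anyone who wants to check this on their own matrices and hardware, here is a minimal sketch of such a comparison (assuming Julia ≥ 1.12, where `eigen` accepts an `alg` keyword for Hermitian matrices, and that BenchmarkTools.jl is installed):

```julia
using LinearAlgebra, BenchmarkTools

H = Hermitian(randn(2000, 2000))

# MR / syevr (the algorithm Julia v1.11 used by default)
@btime eigen($H; alg = LinearAlgebra.RobustRepresentations());

# DC / syevd (the new default being discussed)
@btime eigen($H; alg = LinearAlgebra.DivideAndConquer());
```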

What does seem to be true is that DC uses almost twice the memory required by MR, and that it is more accurate (by at least one order of magnitude).

The main problem is that (at least in my case) DC is proving to be less stable, so logic similar to the SVD implementation would probably be better.

Any updates on this? We should probably set the default for ITensor’s `eigen` function to `alg=RobustRepresentations()` so that user code does not break in Julia v1.12.

Not really; the error itself is quite rare, and it is strongly system dependent (the same matrix can give the error on machine A and not on machine B, while another matrix can give it on B and not on A).

Setting `alg=RobustRepresentations()` would be the best choice, I think. But if it is not too much work, a user interface that allows the user to change the algorithm would be even better (syevd does give a speed-up, but it uses much more memory, so syevr would still be safer for an end user).
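For illustration only, a hypothetical shape such an interface could take; `safe_eigen` and its default are assumptions for this sketch, not ITensor’s actual API:

```julia
using LinearAlgebra

# Hypothetical wrapper: default to the safer MR / syevr algorithm, but let the
# caller opt in to DC / syevd (or any other LinearAlgebra.Algorithm) per call.
function safe_eigen(H::Hermitian; alg = LinearAlgebra.RobustRepresentations())
    return eigen(H; alg)
end

safe_eigen(Hermitian(randn(50, 50)))                                           # default: syevr
safe_eigen(Hermitian(randn(50, 50)); alg = LinearAlgebra.DivideAndConquer())   # opt in to syevd
```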

The good news is that MKL is not showing this problem.