Hi @sou27,
I think your question has already been elegantly answered, but I still wanted to offer another possibility for reference, in case someone else finds this post while looking for something similar.
The contraction algorithm used to multiply a dense tensor with a combiner always creates a new tensor from scratch; I think the reason is to make the operation independent of the order in which the tensor's indices are stored.
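A quick way to see that allocation happen (a minimal sketch; the dimensions are chosen arbitrarily):

```julia
using ITensors

i, j = Index.((4, 4))
T = random_itensor(i, j)
C = combiner(i, j)

T * C                      # warm-up call, so compilation is not measured
alloc = @allocated T * C   # contracting with the combiner allocates a new tensor
@assert alloc > 0
```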
But if you know the order in which the tensor's data is stored, you don't really need a combiner, at least not for a multiplication. What I mean is that you can directly use the `ITensors.matrix` and `ITensors.itensor` functions, which create a view of the data without requiring additional memory (WARNING: as the documentation says, use these functions only if you really know what you are doing).
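To illustrate what "view" means here: for dense storage, mutating the matrix returned by `ITensors.matrix` is visible through the original ITensor, since no copy was made (a minimal sketch; dimensions are arbitrary):

```julia
using ITensors

i, j = Index.((3, 3))
T = random_itensor(i, j)
M = ITensors.matrix(T)            # view of T's dense storage, in T's stored index order
M[1, 1] = 7.0                     # mutate through the view...
@assert T[i => 1, j => 1] == 7.0  # ...and the change shows up in the ITensor
```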
Here is an example of how to use them, which also shows that no additional memory is allocated:
using ITensors
using KrylovKit: linsolve
using BenchmarkTools

d = 32 # avoid shadowing Base.size
i, j, k = Index.(fill(d, 3))
A = random_itensor(i, j, k, i', j', k')
b = random_itensor(i, j, k)
x0 = random_itensor(i, j, k) # initial guess; we solve noprime(A * x) = b for x
println("size of A is $(Base.summarysize(A) / 2^30) GiB")

C = combiner(i, j, k)
ci = combinedind(C)

function convert_to_matrices(A::ITensor, b::ITensor, x0::ITensor)
    # Relabel the indices with the combined index, then view the data.
    # This assumes the data is stored in the order (i, j, k, i', j', k').
    A = ITensors.setinds(A, (ci, ci'))
    b = ITensors.setinds(b, ci)
    x0 = ITensors.setinds(x0, ci)
    A = ITensors.matrix(A)
    b = ITensors.vector(b)
    x0 = ITensors.vector(x0)
    return A, b, x0
end

A, b, x0 = @btime convert_to_matrices($A, $b, $x0)
x, info = linsolve(A, b, x0)

function convert_to_itensors(A::Matrix, b::Vector, x::Vector)
    A = ITensors.itensor(A, (i, j, k), (i', j', k'))
    b = ITensors.itensor(b, i, j, k)
    x = ITensors.itensor(x, i, j, k)
    return A, b, x
end

A, b, x = @btime convert_to_itensors($A, $b, $x)
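Going the other way, `ITensors.itensor` wraps an existing array without copying it (when the element types match), so the resulting ITensor shares memory with the array (again a minimal sketch):

```julia
using ITensors

i = Index(4)
v = zeros(4)
T = ITensors.itensor(v, i)   # wraps v; no copy is made
v[1] = 1.0                   # mutate the underlying array...
@assert T[i => 1] == 1.0     # ...and the ITensor sees the change
```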