Commit b04a973

Add HPC FAQ [no ci]
1 parent dcfff20 commit b04a973

2 files changed: 28 additions, 0 deletions

docs/settings.jl

Lines changed: 1 addition & 0 deletions

```diff
@@ -50,6 +50,7 @@ settings = Dict(
         "ITensor Development FAQs" => "faq/Development.md",
         "Relationship of ITensor to other tensor libraries FAQs" => "faq/RelationshipToOtherLibraries.md",
         "Julia Package Manager FAQs" => "faq/JuliaPkg.md",
+        "High-Performance Computing FAQs" => "faq/HPC.md",
       ],
       "Upgrade guides" => ["Upgrading from 0.1 to 0.2" => "UpgradeGuide_0.1_to_0.2.md"],
       "ITensor indices and Einstein notation" => "Einsum.md",
```

docs/src/faq/HPC.md

Lines changed: 27 additions & 0 deletions (new file)
# High Performance Computing (HPC) Frequently Asked Questions

## My code is using a lot of RAM - what can I do about this?

Tensor network algorithms can often use a large amount of RAM. On top of this
essential fact, the Julia programming language is also "garbage collected",
which means that unused memory is not given back to the operating system right away,
but only on a schedule determined by the Julia runtime. In cases where your code
allocates a lot of memory very quickly, this can lead to high memory usage.
Fortunately, one simple step you can take to potentially help with this is to pass
the `--heap-size-hint` flag to the Julia program when you start it. For example,
you can call Julia as:
```
julia --heap-size-hint=100G
```
When you pass this heap size, Julia will try to keep the memory usage at or below this
value if possible.
In cases where this does not work, your code may simply be allocating too much memory.
Be sure not to allocate over and over again inside of "hot" loops which execute many times.
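To illustrate the difference, here is a minimal, generic Julia sketch (not ITensor-specific; the function names `slow_sum` and `fast_sum` are made up for this example). The first version allocates a fresh array on every loop iteration; the second preallocates one buffer and fills it in place:

```julia
using Random

# Allocates a new 100-element array on every iteration of the hot loop:
function slow_sum(n)
    total = 0.0
    for _ in 1:n
        v = randn(100)   # fresh allocation each time; creates GC pressure
        total += sum(v)
    end
    return total
end

# Preallocates the buffer once and reuses it on every iteration:
function fast_sum(n)
    v = Vector{Float64}(undef, 100)
    total = 0.0
    for _ in 1:n
        randn!(v)        # fills the existing buffer in place, no new allocation
        total += sum(v)
    end
    return total
end
```

The in-place pattern (`randn!` and similar `!`-suffixed functions) is the standard Julia idiom for avoiding repeated allocations in performance-critical loops.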
Another possibility is that you are simply working with a tensor network with large
bond dimensions, which may fundamentally use a lot of memory. In those cases, you can
try to use features such as "write to disk mode" of the ITensor DMRG code or other related
techniques. (See the `write_when_maxdim_exceeds` keyword of the ITensor `dmrg` function.)
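As a hedged sketch of how the `write_when_maxdim_exceeds` keyword mentioned above might be used (the Heisenberg-chain setup here is purely illustrative and not part of the original text; only the keyword itself comes from the FAQ):

```julia
using ITensors

# Illustrative spin-1 Heisenberg chain
N = 20
sites = siteinds("S=1", N)

os = OpSum()
for j in 1:(N - 1)
    os += "Sz", j, "Sz", j + 1
    os += 0.5, "S+", j, "S-", j + 1
    os += 0.5, "S-", j, "S+", j + 1
end
H = MPO(os, sites)
psi0 = randomMPS(sites)

sweeps = Sweeps(5)
maxdim!(sweeps, 10, 20, 100, 200, 400)
cutoff!(sweeps, 1e-10)

# Once the sweep maxdim exceeds 200, intermediate environment tensors
# are written to disk instead of being held in RAM:
energy, psi = dmrg(H, psi0, sweeps; write_when_maxdim_exceeds=200)
```

Writing to disk trades some runtime for a much smaller memory footprint, which can make large-bond-dimension runs feasible on memory-limited HPC nodes.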
