RAPIDS Installation Guide

RAPIDS has several methods for installation, depending on the preferred environment and version. New users should review the system and environment prerequisites.

Install RAPIDS with Release Selector

System Requirements

Environment Setup

Next Steps


Install RAPIDS

Use the selector tool below to select your preferred method, packages, and environment to install RAPIDS. Certain combinations may not be possible and are dimmed automatically.

(Selector fields: Release, Method, ENV. CUDA, System CUDA, Image CUDA, Python, RAPIDS Packages, Additional Packages, Packages, Image Location, Image Type, Command.)


Installation Troubleshooting

Conda Issues

A conda create error occurs:
To resolve this error, try one of the following steps:

  • If your conda installation is older than 23.10, update to the latest version. Newer releases include the libmamba solver, which significantly accelerates environment solving
  • Use Mamba directly, e.g. mamba create ... (see the example below)
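
For example, a minimal sketch of the two options (environment name, channels, and packages are illustrative; the release selector generates the exact RAPIDS install command for your configuration):

# Option 1: update conda itself so it uses the libmamba solver (default since conda 23.10)
conda update -n base -c conda-forge conda

# Option 2: create the environment with mamba directly
mamba create -n rapids-example -c rapidsai -c conda-forge -c nvidia python=3.12 cudf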

A __cuda constraint conflict occurs:
You may see something like:

LibMambaUnsatisfiableError: Encountered problems while solving:
 - package cuda-version-12.0-hffde075_0 has constraint __cuda >=12 conflicting with __cuda-11.8-0

This means the CUDA driver currently installed on your machine (e.g. __cuda: 11.8.0) is incompatible with the cuda-version (12.0) you are trying to install. You will have to ensure the CUDA driver on your machine supports the CUDA version you are trying to install with conda.

If conda has incorrectly identified the CUDA driver, you can override by setting the CONDA_OVERRIDE_CUDA environment variable.
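
For example, if the driver on the machine actually supports CUDA 12 but conda detected an older version, the override can be applied to a single command (the version number and package list are illustrative):

# Override the detected __cuda virtual package for this command only
CONDA_OVERRIDE_CUDA=12.0 conda create -n rapids-example -c rapidsai -c conda-forge -c nvidia cudf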

If, even after updating conda and using libmamba/mamba as suggested above, you still see a conda create error, or your environment solves but is nonfunctional in some way:
Check whether any packages in your environment were installed from the defaults channel (run conda list and inspect the output; see the example below). The defaults channel is not supported by RAPIDS packages, which are built to be compatible with dependencies from the conda-forge channel.

If you installed conda with the Miniconda or Anaconda distributions, the defaults channel is included unless you modify your .condarc file or specify -c nodefaults in the install commands for RAPIDS packages. If you find any packages from defaults in your environment, make those changes and recreate your environment from scratch. Note that if you installed conda with Miniforge (our recommendation for best compatibility), the defaults channel is not included.
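
One way to spot defaults packages and to remove the defaults channel from your configuration (standard conda commands, shown as a sketch; depending on your conda version, defaults packages may be listed as defaults or pkgs/main):

# Show each package with its source channel and look for anything from defaults
conda list --show-channel-urls | grep -Ei 'defaults|pkgs/main'

# Remove the defaults channel from ~/.condarc (only needed if it is present)
conda config --remove channels defaults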

In general mixing conda-forge and defaults channels is not supported. RAPIDS packages are published to a separate rapidsai channel that is designed for compatibility with conda-forge, not defaults.

Docker Issues

RAPIDS 23.08 brought significant Docker changes.
To learn more about these changes, please see the RAPIDS Container README. Some key notes below:

  • Development images are no longer being published; RAPIDS now uses Dev Containers for development
  • All images are Ubuntu-based
    • CUDA 12.5+ images use Ubuntu 24.04
    • All other images use Ubuntu 22.04
  • All images are multiarch (x86_64 and ARM)
  • The base image starts in an ipython shell
    • To run bash commands inside the ipython shell, prefix the command with !
    • To run the image without the ipython shell, add /bin/bash to the end of the docker run command (see the example after this list)
  • For a full list of changes please see this RAPIDS Docker Issue
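
For example, to drop straight into bash instead of the default ipython shell (the image tag is illustrative; use the one generated by the release selector):

# Start a bash shell instead of the default ipython entrypoint
docker run --gpus all --rm -it rapidsai/base:25.08-cuda12.5-py3.12 /bin/bash

Inside the default ipython shell, shell commands can be prefixed with !, for example !nvidia-smi.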

pip Issues

pip installations require a wheel matching the system’s installed CUDA toolkit. For example, if you have the CUDA 12 toolkit, install the -cu12 wheels.
InfiniBand is not supported yet.
These packages are not compatible with TensorFlow pip packages. Please use the NGC containers or conda packages instead.
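
To confirm which wheel suffix matches your system, check the installed CUDA toolkit version (assuming nvcc is on your PATH); a 12.x release means the -cu12 wheels are the correct choice:

# Print the version of the installed CUDA toolkit
nvcc --version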

The following error message indicates a problem with your environment:

ERROR: Could not find a version that satisfies the requirement cudf-cu12 (from versions: 0.0.1, 25.08)
ERROR: No matching distribution found for cudf-cu12

Check the suggestions below for possible resolutions:

  • Ensure you’re using a Python version that RAPIDS supports (compare the values in the install selector to the Python version reported by python --version, as shown below).
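
A quick sanity check of the interpreter and pip that will perform the install:

# Confirm the Python version and which interpreter pip is tied to
python --version
python -m pip --version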


WSL2 Issues

See the WSL2 setup troubleshooting section.


System Requirements

OS / GPU Driver / CUDA Versions

All provisioned systems need to be RAPIDS capable. Here’s what is required:

GPU: NVIDIA Volta™ or higher with compute capability 7.0+

  • Pascal™ GPU support was removed in 24.02. Compute capability 7.0+ is required for RAPIDS 24.02 and later.

OS:

CUDA & NVIDIA Drivers: One of the following supported versions:

  • CUDA 12 with Driver 525.60.13 or newer
  • Compatibility with CUDA 13 is coming soon

See CUDA compatibility for details.
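
A quick way to verify both the GPU and driver requirements is nvidia-smi: the header reports the installed driver version and the highest CUDA version it supports, and recent drivers can also report compute capability directly (the compute_cap query field is not available on very old drivers):

# Driver version and supported CUDA version appear in the header
nvidia-smi

# Compute capability per GPU (recent drivers only)
nvidia-smi --query-gpu=name,compute_cap,driver_version --format=csv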

CUDA Support Notes

pip

  • pip installations require a wheel matching the system’s installed CUDA toolkit. For example, if you have the CUDA 12 toolkit, install the -cu12 wheels.
  • RAPIDS pip packages require NVRTC for Numba to function properly. For Docker users, this means that RAPIDS wheels require the devel flavor of nvidia/cuda images for full functionality; the base and runtime flavors are currently not sufficient (see the example after this list).
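
For example, a container suitable for installing RAPIDS wheels could be started from a devel image (the exact tag is illustrative; pick one matching your CUDA major version):

# devel images include NVRTC, which the base and runtime flavors lack
docker run --gpus all --rm -it nvidia/cuda:12.5.1-devel-ubuntu22.04 /bin/bash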


System Recommendations

Aside from the system requirements, other considerations for best performance include:

  • SSD drive (NVMe preferred)
  • Approximately 2:1 ratio of system memory to total GPU memory (especially useful for Dask); see the example after this list
  • NVLink with 2 or more GPUs
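
For example, under the 2:1 guideline, a node with four 16 GB GPUs (64 GB of total GPU memory) would ideally have around 128 GB of system memory.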


Cloud Instance GPUs

If you do not have access to GPU hardware, there are several cloud service providers (CSP) that are RAPIDS enabled. Learn how to deploy RAPIDS on AWS, Azure, GCP, and IBM cloud on our Cloud Deployment Page.

Several services also offer free and limited trials with GPU resources:


Environment Setup

For most installations, you will need a Conda or Docker environment installed for RAPIDS. Note, these examples are structured for installing on Ubuntu; please modify them appropriately for Rocky Linux. Windows 11 has a WSL2-specific install.


Conda

RAPIDS can be used with any conda distribution.

Below is an installation guide using Miniforge.

1. Download and Run Install Script. Copy the command below to download and run the Miniforge install script:

curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash Miniforge3-$(uname)-$(uname -m).sh

2. Customize Conda and Run the Install. Use the terminal window to finish the installation. Note, we recommend enabling conda init.

3. Start Conda. Open a new terminal window, which should now show Conda initialized.

4. Check Conda Configuration. RAPIDS supports either flexible or strict channel priority.

You can check this and change it, if required, by doing:

conda config --show channel_priority
conda config --set channel_priority flexible


Docker

RAPIDS requires the Docker Engine and the NVIDIA Container Toolkit (nvidia-container-toolkit) to be installed.

1. Download and Install. Copy the command below to download and install the latest Docker Engine:

curl https://get.docker.com | sh

2. Install Latest NVIDIA Container Toolkit. Follow the instructions for your Linux distribution in the nvidia-container-toolkit installation guide.

3. Start Docker. In a new terminal window, run:

sudo service docker stop
sudo service docker start

4. Test Docker with GPU support. In a terminal window, run:

docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark


JupyterLab

The command provided from the selector for the notebooks Docker image will run JupyterLab on your host machine on port 8888.
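
If you launch the notebooks image manually rather than with the selector command, publish the port explicitly (image tag is illustrative):

# JupyterLab inside the container is served on 8888; publish it to the host
docker run --gpus all --rm -it -p 8888:8888 rapidsai/notebooks:25.08-cuda12.5-py3.12

JupyterLab is then reachable at http://localhost:8888 on the host.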

Running Multi-Node / Multi-GPU (MNMG) Environment. To start the container in an MNMG environment:

docker run -t -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -v $PWD:/ws <container label>

The standard docker command may be sufficient, but the additional arguments ensure more stability. See the NCCL docs and UCX docs for more details on MNMG usage.


pip

RAPIDS pip packages are available on the NVIDIA Python Package Index.
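
A minimal install command might look like the following (illustrative only; the release selector generates the exact command and package list for your setup, and the index URL shown is the NVIDIA Python Package Index referenced above):

# Install the CUDA 12 wheels from the NVIDIA Python Package Index
pip install --extra-index-url=https://pypi.nvidia.com cudf-cu12 dask-cudf-cu12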


SDK Manager (Ubuntu Only)

NVIDIA SDK Manager gives users a graphical user interface (GUI) option to install RAPIDS. It also attempts to fix any environment issues before installing or updating RAPIDS, making it ideal for new Linux users.

  1. Download SDK Manager’s Ubuntu version from their website (requires sign up or login to NVIDIA’s Developer community). Do not install yet. It is assumed that your home directory’s Downloads folder is where the .deb file is stored. If not, please move the sdkmanager_[version]-[build#]_amd64.deb file to your Downloads folder.
  2. Install and run SDK Manager using the installation guide here. For Ubuntu, use the following commands:
    sudo apt install ./sdkmanager_[version]-[build#]_amd64.deb
    sdkmanager
    
  3. Sign in when asked, and follow SDK Manager’s RAPIDS installation instructions.


Windows WSL2

Windows users can now tap into GPU accelerated data science on their local machines using RAPIDS on Windows Subsystem for Linux 2. WSL2 is a Windows feature that enables users to run native Linux command line tools directly on Windows. Using this feature does not require a dual boot environment, removing complexity and saving you time.

WSL2 Additional Prerequisites

OS: Windows 11 with a WSL2 installation of Ubuntu.
WSL Version: WSL2 (WSL1 not supported).
GPU: GPUs with Compute Capability 7.0 or higher (16GB+ GPU RAM is recommended).

Limitations

Only a single GPU is supported.
GPU Direct Storage is not supported.

Troubleshooting

When installing with Conda, if an HTTP 000 connection error occurs when accessing the repository data, run wsl --shutdown and then restart the WSL instance.

When installing with Conda or pip, if a WSL2 Jitify fatal error: libcuda.so: cannot open shared object file error occurs, follow the suggestions in this WSL issue to resolve it.


WSL2 SDK Manager Install

NVIDIA’s SDK Manager gives Windows users a graphical user interface (GUI) option to install RAPIDS. It also attempts to fix any environment issues before installing or updating RAPIDS, making it ideal for new WSL users.

  1. Install the latest NVIDIA Drivers on the Windows host.
  2. Download SDK Manager’s Ubuntu version from their website (requires sign up or login to NVIDIA’s Developer community). Do not install yet. The rest of the instructions assume that your home directory’s Downloads folder is where the .deb file will be stored. If this is not the case, please change the directory, as needed, for your system.
  3. Install or update WSL2 and the Ubuntu 22.04 or Ubuntu 24.04 package using Microsoft’s instructions. To install Ubuntu 24.04 from the command line, use this command:
    wsl --install -d Ubuntu-24.04
    

    This will install and start Ubuntu in your Windows host system using WSL2. Make your sudo password memorable as you will need it in the next two steps.

  4. Install and run SDK Manager inside Ubuntu by pasting this into your command line. This command navigates to your Windows user’s Downloads folder from your WSL2 instance and installs the latest SDK Manager .deb file that you downloaded. You will have to enter the sudo password you created when you installed Ubuntu.
    sudo apt update && sudo apt install wslu -y
    cd "$(wslpath -au "$(cmd.exe /c 'echo %USERPROFILE%' | tr -d '\r')")/Downloads"
    sudo apt install "$(ls -t ./sdkmanager_*_amd64.deb | head -n 1)" -y
    sdkmanager
    
  5. Sign in when asked, and follow SDK Manager’s RAPIDS installation instructions here.


WSL2 Conda Install

  1. Install WSL2 and the Ubuntu distribution using Microsoft’s instructions.
  2. Install the latest NVIDIA Drivers on the Windows host.
  3. Log in to the WSL2 Linux instance.
  4. Install Conda in the WSL2 Linux Instance using our Conda instructions.
  5. Install RAPIDS via Conda, using the RAPIDS Release Selector.
  6. Run this code to check that the RAPIDS installation is working:
    import cudf
    print(cudf.Series([1, 2, 3]))
    


WSL2 Docker Desktop Install

  1. Install WSL2 and the Ubuntu distribution using Microsoft’s instructions.
  2. Install the latest NVIDIA Drivers on the Windows host.
  3. Install the latest Docker Desktop for Windows.
  4. Log in to the WSL2 Linux instance.
  5. Generate and run the RAPIDS docker command based on your desired configuration using the RAPIDS Release Selector.
  6. Inside the Docker instance, run this code to check that the RAPIDS installation is working:
    import cudf
    print(cudf.Series([1, 2, 3]))
    


WSL2 pip Install

  1. Install WSL2 and the Ubuntu distribution using Microsoft’s instructions.
  2. Install the latest NVIDIA Drivers on the Windows host.
  3. Log in to the WSL2 Linux instance.
  4. Follow this helpful developer guide and then install the WSL-specific CUDA 12 Toolkit without drivers into the WSL2 instance.
    • The installed CUDA Toolkit major version must match the package suffix (e.g. -cu12)
  5. Install RAPIDS pip packages on the WSL2 Linux Instance using the release selector commands.
  6. Run this code to check that the RAPIDS installation is working:
    import cudf
    print(cudf.Series([1, 2, 3]))
    


Build from Source

To build from source, find the library on the RAPIDS GitHub. Each library provides guidance on building from source in its README.md or CONTRIBUTING.md. If additional help is needed, file an issue on GitHub or reach out on our Slack Channel.


Next Steps

After installing the RAPIDS libraries, the best place to get started is our User Guide. Our RAPIDS.ai home page also provides a great deal of information, as does our Blog Page and the NVIDIA Developer Blog. We are also always available on our RAPIDS GoAi Slack Channel.