Is DGL (Deep Graph Library) still actively maintained, and how can I use it with PyTorch 2.8 on an NVIDIA RTX 50-series GPU? I can’t find recent guidance—are there known compatibility issues between DGL and PyTorch 2.8 or with RTX 50-series GPUs? What are the recommended DGL/PyTorch/CUDA version combinations, and what practical workarounds exist (prebuilt wheels, building DGL from source, Docker images, or using a different PyTorch/CUDA version)?
DGL (the Deep Graph Library) is actively maintained — the project still publishes releases and installation guides — but its prebuilt binaries can lag behind the newest PyTorch/CUDA and NVIDIA Blackwell (RTX 50-series) support. If you want to run DGL with PyTorch 2.8 on an RTX 50, the reliable options are: use a Docker/NGC container that bundles matching PyTorch+CUDA, install a DGL wheel that exactly matches your torch+CUDA (when available), or build DGL from source against PyTorch 2.8/CUDA 12.8+; practical commands and fallbacks are below.
Contents
- DGL maintenance status and roadmap
- Compatibility quick summary
- Recommended DGL / PyTorch / CUDA version combinations
- Using DGL with PyTorch 2.8 on NVIDIA RTX 50-series (practical steps)
- Installation options: prebuilt wheels, conda and Docker
- Building DGL from source (step-by-step)
- Troubleshooting & practical workarounds
- Sources
- Conclusion
DGL maintenance status and roadmap
Short answer: yes — the Deep Graph Library is still actively developed and its maintainers publish releases and docs. The DGL GitHub releases page contains the changelog and supported binary targets (for example, DGL v2.4.0 with documented PyTorch/CUDA support) and is the best single place to check current binary support: https://github.com/dmlc/dgl/releases. The project site and start page also list installation options and Docker images: https://www.dgl.ai/pages/start.html.
That said, binaries (pip/conda wheels and Docker images) for the very latest PyTorch or for brand-new GPU microarchitectures sometimes arrive later than PyTorch itself. The community has raised explicit requests for Blackwell (RTX 50-series) support in the DGL issue tracker, so binary support for the newest GPUs can lag a bit behind PyTorch releases: https://github.com/dmlc/dgl/issues/7888.
Compatibility quick summary
- DGL release notes (example: v2.4.0) list supported PyTorch releases and CUDA targets. For v2.4.0 the documented supported CUDA builds included 11.7, 11.8, 12.1 and 12.4 and PyTorch 2.1–2.3 families in the release notes: https://github.com/dmlc/dgl/releases.
- RTX 50-series (Blackwell) GPUs require PyTorch builds that include the new microarchitecture (community reports point to needing PyTorch builds targeting CUDA 12.8+ / Blackwell support). Community threads show users running PyTorch 2.7+ / CUDA 12.8 on RTX 5090 hardware, sometimes using nightlies or specific packaged builds: https://www.reddit.com/r/comfyui/comments/1mhs4n2/what_pytorch_and_cuda_versions_have_you/ and https://discuss.pytorch.org/t/nvidia-geforce-rtx-5090/218954.
- Bottom line: DGL itself is maintained, but official prebuilt DGL wheels for PyTorch 2.8 + the CUDA level your RTX 50 requires may not exist yet. That forces one of three paths: (A) pick a PyTorch/CUDA combo that DGL already ships wheels for, (B) use a container that already bundles compatible binaries, or (C) build DGL from source against your PyTorch/CUDA.
Recommended DGL / PyTorch / CUDA version combinations
Choose based on your tolerance for building from source vs. needing the newest PyTorch:
- Easiest / highest chance of success (use prebuilt DGL binaries)
- DGL v2.4.0 + PyTorch 2.1–2.3 (or the DGL-documented torch versions) + CUDA 12.4 (cu124) — known wheels exist for these combos: https://github.com/dmlc/dgl/releases and pip wheel index examples such as data.dgl.ai/cu124 repos (see installation section). This is the lowest-friction route.
- Modern GPU (RTX 50) + recent PyTorch (you want 2.8)
- Target: PyTorch 2.8 built with CUDA 12.8 (or the CUDA version your driver supports). If DGL does not publish a matching wheel, build DGL from source against your installed PyTorch/CUDA (recommended). Community threads indicate PyTorch 2.7/2.7.1+CUDA 12.8 has been used successfully on RTX 5090 but DGL wheels weren’t always available — hence the build-from-source route: https://www.reddit.com/r/comfyui/comments/1mhs4n2/what_pytorch_and_cuda_versions_have_you/ and https://github.com/dmlc/dgl/issues/7888.
- Conservative fallback (if you want guaranteed prebuilt support)
- Downgrade PyTorch/CUDA to the latest combo that DGL explicitly supports (for example, use a PyTorch build matching cu124 if you can), or use an older DGL release designed for an older torch/CUDA stack (the releases page and install docs will show which tag to pick): https://docs.dgl.ai/en/latest/install/index.html.
Which to pick? If you need absolute minimal friction: pick (1) or use Docker images (next sections). If you need features or driver-level fixes in PyTorch 2.8, expect to either build or use a container that already includes matching binaries.
Using DGL with PyTorch 2.8 on NVIDIA RTX 50-series (practical steps)
Steps (practical path that many users take):
- Confirm what CUDA your system/driver exposes:
- Check NVIDIA driver + CUDA compatibility and look up the CUDA runtime PyTorch will use. Quick check:
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available()); print(torch.cuda.get_device_properties(0))"
- If torch reports a CUDA version lower than your GPU microarchitecture requires, upgrade PyTorch and/or the driver.
- Try to find a prebuilt DGL wheel for your exact PyTorch+CUDA:
- Search DGL release notes and wheel repos (example index for cu124): https://github.com/dmlc/dgl/releases and the DGL wheel indexes. If a wheel exists for torch 2.8 + matching CUDA, pip will be the simplest install.
- If no matching wheel exists, pick one of these:
- Use a Docker image that ships matching torch + CUDA + DGL (fast, reproducible). DGL’s start page and release notes mention Docker options: https://www.dgl.ai/pages/start.html and https://github.com/dmlc/dgl/releases (example image dglteam/dgl:2.4.0).
- Build DGL from source against your installed PyTorch 2.8/CUDA (more control; recommended when you need latest cuda/arch support).
- Building-from-source quick checklist:
- Install PyTorch 2.8 (or the desired 2.8.x) with the CUDA runtime that supports Blackwell (community reports point to CUDA 12.8+ for RTX 50). Use conda or pip to get the torch build for that CUDA level. If you installed PyTorch via conda, prefer the pytorch + nvidia channels so binaries are compatible.
- Clone and build DGL (commands in the Build section below). When compiling, instruct the extension build to include Blackwell (sm_120) in the architecture list so kernels get compiled for your GPU; that is often the step users miss, and it causes runtime "no kernel image" errors (see GitHub issue #7888 for the Blackwell/PyTorch/DGL discussion): https://github.com/dmlc/dgl/issues/7888.
- Test:
- Run a small GNN example (e.g., import dgl; run a forward pass on GPU) and validate there are no kernel/driver errors.
Installation options: prebuilt wheels, conda and Docker
- Pip / wheel (prebuilt binary)
  - Example known command for DGL v2.4.0 + CUDA 12.4 wheels (adjust the version as needed):
    pip install dgl==2.4.0 -f https://data.dgl.ai/wheels/cu124/repo.html
  - If a wheel exists for your exact torch+CUDA pair, pip install is the fastest route. See the release notes for exact supported torch versions: https://github.com/dmlc/dgl/releases.
- Conda
  - DGL sometimes publishes conda packages for specific labels (example):
    conda install -c dglteam/label/cu124 dgl
  - You can also install PyTorch with a specific CUDA runtime before installing DGL (example pattern from community threads; adjust versions to your target):
    conda install pytorch==2.2.1 torchvision==0.17.1 pytorch-cuda=12.1 -c pytorch -c nvidia
    See: https://discuss.dgl.ai/t/installation-of-dgl-with-pytorch-compatibility/4585.
- Docker / containers
  - Using a container removes local binary mismatches. DGL documents GPU-enabled Docker images, and you can also find DGL images on Docker Hub (example team image tag): docker pull dglteam/dgl:2.4.0. Official DGL instructions reference container options including NVIDIA NGC: https://www.dgl.ai/pages/start.html and https://github.com/dmlc/dgl/releases.
Which to use? Use prebuilt wheels when they exist and match your torch+CUDA. Use Docker when you want reproducibility and minimal build pain. Build from source when you need bleeding-edge PyTorch/CUDA/arch support or custom compile options.
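The cu124 wheel index in the pip example above follows a predictable URL pattern. The helper below builds the candidate URL for other CUDA tags; the scheme is inferred from the documented cu124 example, and a given index may simply not exist, so verify it in a browser or with pip before relying on it.

```python
# Build a candidate DGL wheel-index URL from a CUDA version string.
# The URL scheme is inferred from the documented cu124 example; whether
# an index actually exists for a given tag must be verified manually.

def dgl_wheel_index(cuda_version: str) -> str:
    """'12.4' -> 'https://data.dgl.ai/wheels/cu124/repo.html'."""
    tag = "cu" + cuda_version.replace(".", "")
    return f"https://data.dgl.ai/wheels/{tag}/repo.html"

print(dgl_wheel_index("12.4"))  # matches the pip example above
```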
Building DGL from source (step-by-step)
This is the most flexible approach when prebuilt wheels aren’t available for PyTorch 2.8 + RTX 50.
Minimal example workflow (adjust paths/versions for your system):
- Prepare OS tools
- Ensure system has a recent gcc, cmake, ninja, python dev headers and the CUDA toolkit compatible with the PyTorch runtime.
- Create environment and install PyTorch
- Example (adjust version numbers to what you need):
- conda create -n dgl-env python=3.10 -y
- conda activate dgl-env
- pip install torch==2.8.0 --index-url https://download.pytorch.org/whl/cu128 (replace 2.8.0/cu128 with the exact versions you need; recent PyTorch releases are no longer published as official conda packages)
- Clone and check out the desired DGL tag
- git clone https://github.com/dmlc/dgl.git
- cd dgl
- git checkout tags/v2.4.0 -b v2.4.0 (or use master for latest)
- Export architecture flags (important for RTX 50 / Blackwell)
- Export the arch list so extensions are compiled for Blackwell. The community/issue discussion suggests including the new sm_120 arch when building:
- export TORCH_CUDA_ARCH_LIST="12.0" (PyTorch's build scripts expect the numeric compute capability here; 12.0 corresponds to sm_120)
- Also set CUDA_HOME if your toolkit is in a nonstandard location: export CUDA_HOME=/usr/local/cuda-12.8
- Build and install
- python -m pip install -e .
- Watch the build logs: confirm the CUDA extension compiled and that compile flags include the architecture(s) you expect.
- Validate
- python -c "import torch, dgl; print(torch.__version__, torch.version.cuda); print(dgl.__version__); print(torch.cuda.get_device_properties(0))"
If the build fails, read the compiler output: missing cmake/gcc or mismatch between the CUDA toolkit and PyTorch’s CUDA ABI are the most common causes. For reference on building and install options see the DGL install docs: https://docs.dgl.ai/en/latest/install/index.html and the project repo: https://github.com/dmlc/dgl.
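Before kicking off a long compile, it can help to sanity-check that TORCH_CUDA_ARCH_LIST actually covers the target capability. A stdlib-only sketch: the ';'-separated entries and optional '+PTX' suffix follow PyTorch's convention, and the mapping of sm_120/Blackwell to compute capability 12.0 is taken from the issue discussion above, so verify both against your build logs.

```python
# Sanity-check a TORCH_CUDA_ARCH_LIST value before building DGL.
# Format assumptions: entries separated by ';' or spaces, optional '+PTX'
# suffix (PyTorch's convention); Blackwell sm_120 = compute capability 12.0.

def parse_arch_list(value: str) -> list[str]:
    """'9.0;12.0+PTX' -> ['9.0', '12.0']"""
    entries = value.replace(";", " ").split()
    return [entry.removesuffix("+PTX") for entry in entries]

def covers_blackwell(value: str) -> bool:
    return "12.0" in parse_arch_list(value)

print(covers_blackwell("9.0;12.0+PTX"))  # -> True
print(covers_blackwell("8.6"))           # -> False: expect "no kernel image" at runtime
```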
Troubleshooting & practical workarounds
Common symptoms and fixes:
- "RuntimeError: no kernel image is available for execution on the device"
  - Cause: the compiled CUDA kernels don't include your GPU's compute capability. Fix: rebuild DGL with TORCH_CUDA_ARCH_LIST including Blackwell (sm_120), or use a wheel built for that arch. See the community request around Blackwell support: https://github.com/dmlc/dgl/issues/7888.
- "CUDA driver/runtime mismatch" or driver errors
  - Fix: update the NVIDIA driver to a version compatible with the CUDA runtime your installed PyTorch expects. Always check NVIDIA's driver <-> CUDA compatibility table.
- No prebuilt wheel for your PyTorch+CUDA combo
  - Workarounds:
    - Use a Docker image that bundles a compatible PyTorch/CUDA (fastest short-term fix): https://www.dgl.ai/pages/start.html.
    - Build DGL from source (see the steps above).
    - Temporarily downgrade to a slightly older PyTorch/CUDA combo that DGL ships wheels for, if that meets your needs.
Practical decision flow
- Need minimal setup / reproducible environment: use Docker image.
- Need newest PyTorch features / need PyTorch 2.8 specifically: build DGL from source (or wait for official wheels).
- Want prebuilt binaries: match the DGL release wheel versions exactly (consult the release notes and wheel index).
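The decision flow above can be expressed as a tiny function. The option labels and inputs are illustrative; real decisions also depend on driver versions and project constraints.

```python
# Encode the practical decision flow: prebuilt wheel if one matches,
# Docker for reproducibility, source build for bleeding-edge torch,
# otherwise downgrade to a combo with published wheels.

def choose_install_path(wheel_matches: bool,
                        want_reproducible: bool,
                        need_newest_torch: bool) -> str:
    if wheel_matches:
        return "pip install the matching prebuilt wheel"
    if want_reproducible:
        return "use a Docker/NGC image with bundled binaries"
    if need_newest_torch:
        return "build DGL from source against your torch/CUDA"
    return "downgrade to a torch/CUDA combo with published wheels"

print(choose_install_path(False, False, True))  # PyTorch 2.8 on RTX 50 today
```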
Community experience notes: RTX 5090 users report success with PyTorch builds targeting CUDA 12.8 (some used 2.7.1 or nightlies), but this often required building other extensions locally or using a prepackaged portable distribution. See real-user reports for context: https://www.reddit.com/r/comfyui/comments/1mhs4n2/what_pytorch_and_cuda_versions_have_you/ and https://discuss.pytorch.org/t/nvidia-geforce-rtx-5090/218954.
Sources
- DGL releases (GitHub) — release notes, versioned PyTorch/CUDA support
- DGL repository (GitHub) — source, build instructions and issues
- r/comfyui Reddit thread on PyTorch and CUDA for RTX 5090 — community reports about PyTorch 2.7/CUDA 12.8 on RTX 5090
- PyTorch discussion: NVIDIA GeForce RTX 5090 — forum discussion about Blackwell/SM_120 compatibility
- DGL issue #7888 (Blackwell support inquiry) — user inquiry about DGL support for Blackwell GPUs and newer PyTorch
- DGL start / installation guidance — official DGL pages mentioning Docker/installation options
- DGL Install & Setup (docs) — installation guide and notes
- DGL community discuss: installation with PyTorch compatibility — community install examples and advice
Conclusion
DGL (Deep Graph Library) is actively maintained, but binary wheel support for the newest PyTorch/CUDA and for NVIDIA Blackwell (RTX 50-series) can lag behind PyTorch itself. If you need DGL with PyTorch 2.8 on an RTX 50, the fastest reliable options are to use a container that bundles compatible binaries, or to build DGL from source against your installed PyTorch/CUDA (making sure to compile for the Blackwell SM_120 arch). If you can, choose a PyTorch+CUDA combo that DGL already publishes wheels for (e.g., DGL v2.4.0 + cu124) to avoid build pain.