experimental/cuda-ubi9/: nvidia-nccl-cu12-2.20.5 metadata and description
NVIDIA Collective Communication Library (NCCL) Runtime
Field | Value |
---|---|
author | Nvidia CUDA Installer Team |
author_email | cuda_installer@nvidia.com |
classifiers | |
keywords | cuda, nvidia, runtime, machine learning, deep learning |
license | NVIDIA Proprietary Software |
requires_python | >=3 |
File | Tox results | History |
---|---|---|
nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl | | |
NCCL (pronounced “Nickel”) is a stand-alone library of standard collective communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, and reduce-scatter. It has been optimized to achieve high bandwidth on any platform, using PCIe, NVLink, and NVSwitch within a node, as well as networking over InfiniBand Verbs or TCP/IP sockets between nodes.
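As a rough illustration of the all-reduce semantics NCCL provides (a plain-Python sketch of the operation's meaning, not the NCCL C API, which is called via `ncclAllReduce` on device buffers): each rank contributes a buffer, and every rank receives the same elementwise reduction of all contributions.

```python
def all_reduce_sum(rank_buffers):
    """Simulate an all-reduce with a sum reduction across ranks.

    rank_buffers: one equal-length list of numbers per rank.
    Returns the per-rank result: every rank gets the elementwise sum,
    mirroring what ncclAllReduce with ncclSum delivers to each GPU.
    """
    reduced = [sum(vals) for vals in zip(*rank_buffers)]
    # Every rank receives an identical copy of the reduced buffer.
    return [list(reduced) for _ in rank_buffers]

# Three ranks, two elements each.
buffers = [[1, 2], [3, 4], [5, 6]]
print(all_reduce_sum(buffers))  # each rank receives [9, 12]
```

The other collectives listed above differ only in how data is combined or distributed: all-gather concatenates every rank's buffer on all ranks, reduce delivers the reduction to a single root, broadcast copies one root's buffer to all ranks, and reduce-scatter leaves each rank with one distinct slice of the reduction.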