experimental/cuda-ubi9/: pynvml-11.5.0 metadata and description


Python Bindings for the NVIDIA Management Library

author NVIDIA Corporation
author_email rzamora@nvidia.com
classifiers
  • Development Status :: 5 - Production/Stable
  • Intended Audience :: Developers
  • Intended Audience :: System Administrators
  • License :: OSI Approved :: BSD License
  • Operating System :: Microsoft :: Windows
  • Operating System :: POSIX :: Linux
  • Programming Language :: Python
  • Topic :: Software Development :: Libraries :: Python Modules
  • Topic :: System :: Hardware
  • Topic :: System :: Systems Administration
description_content_type text/markdown
license BSD
requires_python >=3.6
File: pynvml-11.5.0-py3-none-any.whl
Size: 52 KB
Type: Python Wheel
Python: 3

Python bindings to the NVIDIA Management Library

Provides a Python interface to GPU management and monitoring functions.

This is a wrapper around the NVML library. For information about the NVML library, see the NVML developer page: http://developer.nvidia.com/nvidia-management-library-nvml

As of version 11.0.0, the NVML-wrappers used in pynvml are identical to those published through nvidia-ml-py.

Note that this file can be run with 'python -m doctest -v README.txt', although the results are system-dependent.

Requires

Python 3, or an earlier version with the ctypes module.

Installation

pip install .
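
Released versions of pynvml are also published to package indexes, so if you are not working from a source checkout you can typically install the wheel directly (package name as on PyPI):

pip install pynvml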

Usage

You can use the lower-level nvml bindings:

>>> from pynvml import *
>>> nvmlInit()
>>> print("Driver Version:", nvmlSystemGetDriverVersion())
Driver Version: 410.00
>>> deviceCount = nvmlDeviceGetCount()
>>> for i in range(deviceCount):
...     handle = nvmlDeviceGetHandleByIndex(i)
...     print("Device", i, ":", nvmlDeviceGetName(handle))
...
Device 0 : Tesla V100

>>> nvmlShutdown()
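
Beyond names and the driver version, the same low-level bindings expose per-device memory and utilization queries. The sketch below uses the standard nvmlDeviceGetMemoryInfo and nvmlDeviceGetUtilizationRates wrappers; the printed values are, of course, system-dependent:

from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo,
                    nvmlDeviceGetUtilizationRates)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        mem = nvmlDeviceGetMemoryInfo(handle)         # sizes in bytes
        util = nvmlDeviceGetUtilizationRates(handle)  # percentages
        print("Device", i, ":",
              "%d/%d MiB used," % (mem.used // 2**20, mem.total // 2**20),
              "%d%% busy" % util.gpu)
finally:
    nvmlShutdown()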

Or the higher-level nvidia_smi API:

from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
nvsmi.DeviceQuery('memory.free, memory.total')

from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
print(nvsmi.DeviceQuery('--help-query-gpu'), end='\n')
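
DeviceQuery returns a plain Python dict rather than formatted text, so results can be consumed programmatically. A minimal sketch follows; the exact key layout depends on the query and driver, so treat the 'gpu' and 'fb_memory_usage' keys below as assumptions to verify on your system:

from pynvml.smi import nvidia_smi

nvsmi = nvidia_smi.getInstance()
result = nvsmi.DeviceQuery('memory.free, memory.total')

# Assumed layout: one entry per device under the 'gpu' key.
for i, gpu in enumerate(result.get('gpu', [])):
    fb = gpu.get('fb_memory_usage', {})
    print("Device", i, ":", fb.get('free'), "of", fb.get('total'),
          fb.get('unit', ''), "free")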

Functions

Python methods wrap NVML functions, implemented in a C shared library. Each function's use mirrors the corresponding C function, with a few exceptions: failing error codes are raised as Python exceptions (NVMLError) instead of being returned, and C output parameters are returned directly from the corresponding Python function.

For usage information see the NVML documentation.
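
For example, the error-code-to-exception mapping can be seen by calling a wrapper before nvmlInit(); NVMLError is the exception type pynvml raises (a sketch, with output that may vary by version):

from pynvml import nvmlDeviceGetCount, NVMLError

try:
    nvmlDeviceGetCount()          # called before nvmlInit()
except NVMLError as error:
    print(error)                  # e.g. "Uninitialized"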

Variables

All meaningful NVML constants and enums are exposed in Python.

Where NVML would report NVML_VALUE_NOT_AVAILABLE, the field is mapped to None instead; the constant itself is not used.
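
As a hedged illustration (nvmlDeviceGetComputeRunningProcesses is a standard wrapper, but which fields a platform reports varies), a per-process memory value that NVML cannot provide comes back as None rather than NVML_VALUE_NOT_AVAILABLE:

from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
                    nvmlDeviceGetComputeRunningProcesses)

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)
    for proc in nvmlDeviceGetComputeRunningProcesses(handle):
        mem = proc.usedGpuMemory   # None when the value is not available
        print(proc.pid, "unknown" if mem is None else "%d MiB" % (mem // 2**20))
finally:
    nvmlShutdown()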

NVML Permissions

Many of the pynvml wrappers assume that the underlying NVIDIA Management Library (NVML) API can be used without admin/root privileges. However, system permissions can prevent pynvml from querying GPU performance counters. For example:

$ nvidia-smi nvlink -g 0
GPU 0: Tesla V100-SXM2-32GB (UUID: GPU-96ab329d-7a1f-73a8-a9b7-18b4b2855f92)
NVML: Unable to get the NvLink link utilization counter control for link 0: Insufficient Permissions

A simple way to check the permission status is to look for RmProfilingAdminOnly in the driver params file (RmProfilingAdminOnly == 1 means that admin/sudo access is required):

$ cat /proc/driver/nvidia/params | grep RmProfilingAdminOnly
RmProfilingAdminOnly: 1
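
The same check can be done from Python by parsing the params file; profiling_requires_admin below is a hypothetical helper (Linux only, and the file exists only when the NVIDIA kernel driver is loaded):

def profiling_requires_admin(path="/proc/driver/nvidia/params"):
    # Returns True/False based on RmProfilingAdminOnly, or None if unknown.
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("RmProfilingAdminOnly"):
                    return line.split(":", 1)[1].strip() == "1"
    except OSError:
        pass
    return None

print(profiling_requires_admin())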

For more information on setting/unsetting the relevant admin privileges, see NVIDIA's notes on resolving ERR_NVGPUCTRPERM errors (https://developer.nvidia.com/ERR_NVGPUCTRPERM).

Release Notes