【docker】Common commands for using GPUs


Some examples of the usage are shown below:

Starting a GPU-enabled CUDA container using --gpus

docker run --rm --gpus all nvidia/cuda nvidia-smi
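The --gpus flag can also carry a capabilities option to limit what the container gets from the driver. A minimal sketch, assuming the NVIDIA Container Toolkit is installed (the capability value shown here is illustrative and not part of the original example):

# expose all GPUs but only the utility capability (enough to run nvidia-smi)
docker run --rm --gpus 'all,capabilities=utility' nvidia/cuda nvidia-smi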

Using NVIDIA_VISIBLE_DEVICES and specifying the nvidia runtime

docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all nvidia/cuda nvidia-smi
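When the nvidia runtime is selected, NVIDIA_DRIVER_CAPABILITIES can be set alongside NVIDIA_VISIBLE_DEVICES to control which driver libraries are mounted into the container. A hedged sketch; the compute,utility value is one common choice and is not taken from the original post:

# compute,utility is an assumed example value for the capability list
docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=compute,utility nvidia/cuda nvidia-smi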

Starting a GPU-enabled container on two GPUs

docker run --rm --gpus 2 nvidia/cuda nvidia-smi

Starting a GPU-enabled container on specific GPUs

docker run --gpus '"device=1,2"' nvidia/cuda nvidia-smi --query-gpu=uuid --format=csv
uuid
GPU-ad2367dd-a40e-6b86-6fc3-c44a2cc92c7e
GPU-16a23983-e73e-0945-2095-cdeb50696982
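The device= form also accepts a single index. A small sketch exposing just GPU 0 (the extra query fields are only for illustration):

# device=0 exposes only the first GPU; index and uuid are standard nvidia-smi query fields
docker run --rm --gpus '"device=0"' nvidia/cuda nvidia-smi --query-gpu=index,uuid --format=csv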

Alternatively, you can also use NVIDIA_VISIBLE_DEVICES

docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=1,2 nvidia/cuda nvidia-smi --query-gpu=uuid --format=csv
uuid
GPU-ad2367dd-a40e-6b86-6fc3-c44a2cc92c7e
GPU-16a23983-e73e-0945-2095-cdeb50696982

Query the GPU UUID using nvidia-smi and then specify that to the container

nvidia-smi -i 3 --query-gpu=uuid --format=csv
uuid
GPU-18a3e86f-4c0e-cd9f-59c3-55488c4b0c24

docker run --gpus device=GPU-18a3e86f-4c0e-cd9f-59c3-55488c4b0c24 nvidia/cuda nvidia-smi
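To find the UUIDs to pass in, every GPU on the host can be listed in one query instead of asking for one index at a time with -i; a short sketch:

# lists index, UUID, and name for all GPUs on the host
nvidia-smi --query-gpu=index,uuid,name --format=csv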
