GPUs on Maple
The Maple cluster has two nodes, each with two NVIDIA Tesla P100 GPUs (12 GB memory, midgpu queue), and six nodes, each with two NVIDIA Tesla V100S GPUs (32 GB memory, biggpu queue). These GPUs can substantially speed up certain types of programs, provided the software has been written and configured to make use of them.
CUDA
To load the CUDA Development Toolkit, run:
module load cuda10.2/toolkit
This module will also need to be loaded inside PBS scripts for CUDA programs to run properly.
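Once the module is loaded, CUDA sources can be compiled with nvcc, the compiler included in the toolkit. As a minimal sketch (hello.cu is a hypothetical source file):
nvcc -o hello hello.cu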
OpenCL
To compile OpenCL code, load the CUDA toolkit as described above, then use the -lOpenCL option when compiling.
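As a minimal sketch, a hypothetical OpenCL host program program.c could be compiled with:
gcc -o program program.c -lOpenCL
Depending on how the module sets compiler search paths, you may also need -I and -L flags pointing at the CUDA installation's include and lib64 directories.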
Running GPU Jobs
To use a GPU on Maple, it must be requested, just like CPUs and memory. PBS has an ngpus resource for reserving GPUs. For instance, to reserve a single GPU, include
#PBS -l ngpus=1
in your PBS script, or on the command line when you start an interactive PBS session (qsub -I).
Users must also submit their jobs to the gpu, midgpu, or biggpu queue to get a node with a GPU. To use the newer, larger-memory GPUs, use the biggpu queue; there are twelve of these GPUs in total, two on each of six nodes. Submitting to the default queue with ngpus=1 will result in the job sitting in the queue indefinitely.
The nodes with the larger GPUs are a valuable resource. Do not use these if the smaller GPUs will suffice.
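As a minimal sketch, a batch script for a single-GPU job on the smaller GPUs (midgpu queue) might look like the following. The job name, walltime, and executable are placeholders, and your program may need additional resource requests (CPUs, memory):
#!/bin/bash
#PBS -N gpu-example
#PBS -q midgpu
#PBS -l ngpus=1
#PBS -l walltime=01:00:00

cd $PBS_O_WORKDIR
module load cuda10.2/toolkit
./hello
Submit the script with qsub, or request the same resources interactively with qsub -I -q midgpu -l ngpus=1.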