GPUs on Maple

The Maple cluster has 29 nodes, each with a single NVIDIA Tesla K20m GPU (2GB memory), available for general-purpose computation. There are also two nodes, each with two NVIDIA Tesla P100 GPUs (12GB memory). These GPUs can substantially speed up certain types of programs.

CUDA

To load the CUDA Development Toolkit, run:
module load cuda10.1/toolkit
This module will also need to be loaded inside PBS scripts for CUDA programs to run properly.
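For example, once the module is loaded, a CUDA source file can be compiled with nvcc. The file and program names below are placeholders; the compiled program should be run inside a GPU job, as described under Running GPU Jobs.

module load cuda10.1/toolkit
nvcc -o my_program my_program.cu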

OpenCL

To compile OpenCL code, load the CUDA toolkit as described above, then use the -lOpenCL option when compiling.
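For example, an OpenCL program written in C might be compiled as shown below. The source and program names are placeholders; if the module does not set the include and library paths automatically, they may need to be added with -I and -L.

module load cuda10.1/toolkit
gcc -o my_opencl_program my_opencl_program.c -lOpenCL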

Running GPU Jobs

To use a GPU on Maple, it must be requested, just like CPUs and memory. PBS has an ngpus resource for reserving GPUs. For instance, to reserve a single GPU, include
#PBS -l ngpus=1
in your PBS script, or on the command line when starting an interactive PBS session (qsub -I).

Users must also submit their jobs to either the gpu or biggpu queue to get a node with a GPU. To use the newer, large-memory GPUs, submit to the biggpu queue; there are four of these GPUs in total, two on each of two nodes. Submitting to the default queue with ngpus=1 will result in the job sitting in the queue indefinitely.
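For example, an interactive session with a single GPU on the gpu queue could be started as shown below. The walltime value is a placeholder; adjust it to your needs.

qsub -I -q gpu -l ngpus=1 -l walltime=1:00:00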

The nodes with the larger GPUs are a valuable resource. Do not use these if the smaller GPUs will suffice.
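Putting this together, a minimal batch script requesting one of the smaller GPUs might look like the following sketch. The job name, walltime, and program name are placeholders.

#!/bin/bash
#PBS -N gpu_job
#PBS -q gpu
#PBS -l ngpus=1
#PBS -l walltime=1:00:00

cd $PBS_O_WORKDIR
module load cuda10.1/toolkit
./my_program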