Maple is a Cray cluster with 47 compute nodes, 1,228 CPU cores, 29 NVIDIA Kepler K20 GPUs, and 3.3 TB of memory, for a theoretical peak of 70 TFLOPS. The nodes are interconnected with QDR InfiniBand. This cluster is grant funded by the National Science Foundation (CHE-1338056).

The cluster originally consisted of 29 compute nodes, each with two 10-core Intel Xeon CPUs, 64 GB of memory, a 1 TB hard disk, and an NVIDIA Kepler K20 GPU. In 2017, the cluster was upgraded with 18 additional nodes. The new nodes have two 18-core Intel Xeon CPUs and 128 GB of memory, but no hard disk and no GPU.

Maple has much newer CPUs than Sequoia and Catalpa and does not share their software stack. Also, while all of our systems run SUSE Linux Enterprise Server (SLES), Maple is our only Cray system and our only system to use Bright Cluster Manager, so its operating system and some of its software are set up differently.

If you wish to use a GPU in your job, you must request it using PBS’s ngpus parameter. For instance, if you need a single GPU you would add
#PBS -l ngpus=1
to your PBS script. You must also specify the gpu queue for your job, or PBS will not be able to assign it to a node. If you wish to use more than one GPU, you will need to request at least that many nodes as well, as Maple has only one GPU per node.
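Putting these pieces together, a minimal single-GPU job script might look like the following sketch. The gpu queue and the ngpus parameter are as described above; the walltime value and the executable name are placeholders you would replace with your own:

```shell
#!/bin/bash
#PBS -q gpu                 # GPU jobs must be submitted to the gpu queue
#PBS -l nodes=1             # one node, since each node has at most one GPU
#PBS -l ngpus=1             # request a single GPU
#PBS -l walltime=01:00:00   # placeholder time limit; adjust as needed

cd "$PBS_O_WORKDIR"         # start in the directory the job was submitted from
./my_gpu_program            # placeholder for your CUDA-enabled executable
```

For a two-GPU job you would request ngpus=2 and at least two nodes, since Maple has only one GPU per node.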

The new nodes are diskless, which means the /tmp partition is backed by a RAM disk; while very fast, it is limited in size. Users should use /scratch for large temporary files on these nodes. In many cases /scratch is faster than a local disk anyway. If you must have a large /tmp, submit to the gpu queue, as all GPU nodes have a disk-backed /tmp.
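One common way to do this is to point temporary files at /scratch from within your job script, since many programs honor the TMPDIR environment variable. The per-user directory layout below is an assumption, not a documented convention on Maple, so adjust the path to match your site's practice:

```shell
# Redirect temporary files away from the small RAM-backed /tmp.
# The /scratch/$USER/$PBS_JOBID layout is an assumed convention;
# substitute whatever per-user scratch path your site uses.
export TMPDIR=/scratch/$USER/$PBS_JOBID
mkdir -p "$TMPDIR"

# ... run your job here ...

# Clean up scratch space when the job finishes.
rm -rf "$TMPDIR"
```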

Requests for software installation can be made via email.