Maple is a Cray cluster with 73 compute nodes, a total of 2,164 CPU cores, 33 NVIDIA GPUs, and 7.5 TB of memory. The cluster has a theoretical peak of 116 TFLOPS and is interconnected with InfiniBand. It is partially funded by the National Science Foundation (CHE-1338056).

The cluster originally consisted of 29 compute nodes, each with two 10-core Intel Xeon CPUs, 64 GB of memory, and an NVIDIA Kepler K20 GPU.

The cluster was upgraded in 2017 and 2018, adding a total of 44 nodes, each with two 18-core Intel Xeon CPUs and 128 GB of RAM. Two of those nodes each have two large-memory NVIDIA GPUs. Another upgrade is planned for the summer of 2019.

Maple has much newer CPUs than Sequoia and Catalpa and does not share their software stack. While all of our systems run SUSE Linux Enterprise Server (SLES), Maple is our only Cray system and our only system to use Bright Cluster Manager, so its operating system and some of its software are set up differently.

If you wish to use a GPU in your job, you must request it with PBS's ngpus parameter. For instance, to request a single GPU you would add
#PBS -l ngpus=1
to your PBS script. You must also specify the gpu queue for your job, or PBS will not be able to assign it to a node. Nodes in the gpu queue have one GPU each.
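Putting those pieces together, a minimal GPU job script might look like the following sketch. The job name, walltime, and placeholder application are illustrative assumptions, not site requirements; only the queue and ngpus request come from the documentation above.

```shell
#!/bin/bash
#PBS -N gpu-example        # illustrative job name
#PBS -q gpu                # required so PBS can place the job on a GPU node
#PBS -l ngpus=1            # request one GPU
#PBS -l walltime=01:00:00  # illustrative walltime; set what your job needs

# PBS sets PBS_O_WORKDIR to the directory the job was submitted from;
# fall back to $PWD so the script also runs outside the scheduler.
cd "${PBS_O_WORKDIR:-$PWD}"

echo "Job running on $(hostname)"
# ./my_gpu_program         # placeholder for your actual GPU application
```

Submit the script with qsub as usual; the #PBS lines are read by the scheduler and ignored by the shell.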

To use the newer, large-memory GPUs, use the biggpu queue. There are four of these GPUs in total: two nodes with two GPUs each.
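For example, a job that wants both large-memory GPUs on one of those nodes might include the following directives (assuming the ngpus parameter behaves the same way for the biggpu queue as for the gpu queue):

```shell
#PBS -q biggpu   # route the job to the large-memory GPU nodes
#PBS -l ngpus=2  # request both GPUs on one node
```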

All Maple nodes have local hard disks, accessible in /tmp. The original nodes (cn01-cn29) have 1 TB disks; the newer nodes (cn30-cn73) have 2 TB disks.
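For I/O-heavy jobs, staging data through the node-local disk can be faster than working directly on shared storage. A hedged sketch of that pattern follows; the scratch-directory naming and the file names are illustrative assumptions, not a site convention.

```shell
#!/bin/bash
#PBS -q gpu
#PBS -l ngpus=1

# Create a unique per-job scratch directory on the node-local disk.
# (Naming scheme is an assumption; $$ is the shell's process ID.)
SCRATCH="/tmp/${USER:-$(whoami)}_$$"
mkdir -p "$SCRATCH"

# Stage input to local disk if it exists (input.dat is a placeholder name).
cp input.dat "$SCRATCH"/ 2>/dev/null || true

# ... run your computation against files in $SCRATCH ...

# Copy results back to shared storage before the job ends, e.g.:
# cp "$SCRATCH"/output.dat "$HOME"/

# Clean up the local disk so space is free for the next job.
rm -rf "$SCRATCH"
```

Cleaning up /tmp at the end of the job matters because the local disks are shared by every job that lands on the node.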

Requests for software installation can be made via email.