Maple is a Cray cluster with 116 compute nodes, 3,712 CPU cores, 33 NVIDIA GPUs, and 16.2TB of memory, for a theoretical peak of 162.9 TFLOPS. The nodes are interconnected with InfiniBand. This cluster is partially funded by the National Science Foundation (CHE-1338056).
The cluster originally consisted of 29 compute nodes, each with two 10-core Intel Xeon CPUs, 64GB of memory, and an NVIDIA Kepler K20 GPU.
Upgrades in 2017, 2018, 2019, and 2020 added a total of 87 nodes, each with two 18-core Intel Xeon CPUs. Two of those nodes each have two NVIDIA P100 GPUs with 12GB of memory.
Maple has much newer CPUs than Sequoia and Catalpa and does not share their software stack. Also, while all of our systems run SUSE Linux Enterprise Server (SLES), Maple is our only Cray system and our only system to use Bright Cluster Manager, so its operating system and some of its software are set up differently.
All Maple nodes have local hard disks, accessible in /tmp. The original nodes (cn001-cn029) have 1TB disks and all newer nodes (cn030-cn116) have 2TB disks.
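Jobs with heavy file I/O can stage their work through the node-local disk rather than shared storage. The sketch below is illustrative only: the filenames and the placeholder computation are hypothetical, and the scratch path layout under /tmp is an assumption, not a site policy.

```shell
# Stage work through the node-local disk in /tmp (1TB on cn001-cn029,
# 2TB on cn030-cn116). Filenames and the "computation" are stand-ins.
echo "example data" > input.dat                     # stand-in for real input

# Per-job scratch directory; falls back to the shell PID outside a PBS job
SCRATCH="/tmp/${USER:-$(id -un)}/${PBS_JOBID:-$$}"
mkdir -p "$SCRATCH"

cp input.dat "$SCRATCH/"
cd "$SCRATCH"

# ... real computation would run here; this just uppercases the input ...
tr 'a-z' 'A-Z' < input.dat > output.dat

cp output.dat "$OLDPWD/"                            # copy results back
cd "$OLDPWD"
rm -rf "$SCRATCH"                                   # clean up the local disk
```

Cleaning up the scratch directory at the end matters because /tmp is local to each node and is not purged between jobs on every system.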
Maple has three overlapping PBS queues: workq, gpu, and biggpu. The biggpu queue contains the two nodes with large-memory GPUs (two per node) and should only be used by jobs that require those GPUs. The gpu queue contains the 29 original nodes with small-memory GPUs. The workq queue is Maple's default queue and contains all compute nodes except the biggpu nodes; it is the queue to use when no GPUs are required.
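A queue is selected with the `-q` directive in a PBS job script. The sketch below is a minimal example, not a site template: the resource-request syntax (`select`, `ncpus`) can vary between PBS versions and site configurations, and the executable name is hypothetical.

```shell
#!/bin/bash
#PBS -q gpu                      # queue: workq (default), gpu, or biggpu
#PBS -l select=1:ncpus=20        # resource syntax may differ on your site
#PBS -l walltime=01:00:00

# PBS starts jobs in the home directory; move to the submission directory
cd "$PBS_O_WORKDIR"

./my_gpu_program                 # hypothetical executable
```

Submitting with `qsub job.sh` and omitting the `-q` line sends the job to workq, the default queue.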
Requests for software installation can be made via email.