Magnolia
Magnolia is an HPE Apollo 2000 cluster with 50 compute nodes: 44 CPU-only nodes and 6 nodes with dual NVIDIA L40S GPUs. The nodes are interconnected with HDR100 InfiniBand. Magnolia runs SUSE Linux Enterprise Server (SLES) 15, is managed with Bright Cluster Manager, and uses Slurm for job management.
CPU Nodes
Each CPU node has dual Intel Xeon Gold 6342 processors, 512GB of RAM, and a 2TB SSD for fast local storage. Eight of the CPU nodes have 1TB of RAM.
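If a job needs more memory than the standard 512GB nodes provide, request the memory explicitly so Slurm places the job on one of the 1TB nodes. A one-line sketch with an assumed value to adapt (options given on the sbatch command line override the #SBATCH directives in the script):

sbatch --mem=900g name.slurm    # asking for ~900GB means only a 1TB node qualifies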
GPU Nodes
Each GPU node has two NVIDIA L40S accelerators, a single 32-core Intel Xeon Gold 6548Y+ processor, 512GB of RAM, and a 3.8TB SSD for local storage.
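Jobs that need a GPU must request one explicitly; otherwise Slurm may place them on CPU-only resources. Below is a minimal sketch of a GPU batch script, assuming the GPUs are exposed through Slurm's generic resource (GRES) mechanism. The resource numbers and the program name are assumptions to adapt, not confirmed Magnolia settings.

#!/bin/bash
#SBATCH --job-name=gpu_test        # job name shown by squeue
#SBATCH --ntasks=1                 # one task (process)
#SBATCH --cpus-per-task=16         # half of the node's 32 cores
#SBATCH --gres=gpu:1               # request one of the two L40S GPUs
#SBATCH --mem=64g                  # memory for the job
#SBATCH --time=02:00:00            # wall-clock time limit

nvidia-smi                         # confirm which GPU was allocated
./my_gpu_program                   # replace with your actual program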
Software
Magnolia currently has Anaconda Python, MOLPRO, and Gaussian 16 installed. Example jobs, each with a Slurm script, are available in /usr/local/apps/example_jobs. To request additional software, please contact us via email.
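A good way to get started is to copy one of the example jobs into your home directory, adapt it, and submit it. The directory and script names below are placeholders for illustration; list /usr/local/apps/example_jobs to see what is actually provided.

ls /usr/local/apps/example_jobs                     # see which example jobs exist
cp -r /usr/local/apps/example_jobs/gaussian16 ~/    # copy one example (directory name assumed)
cd ~/gaussian16
sbatch name.slurm                                   # submit its Slurm script (filename assumed)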
Slurm
All past MCSR systems have used PBS for job management. Magnolia uses Slurm, a more modern job management system in wide use in the HPC world. The table below shows the Slurm commands for the common tasks we previously handled with PBS commands.
Task                                 Slurm command
Submit a job                         sbatch name.slurm
Check on status of job               squeue -j jobid
Get detailed information on job      scontrol show job jobid
See all jobs                         squeue
See queue status                     sinfo
Cancel job                           scancel jobid
Submit interactive job               srun -c 4 --mem 4g --time 1:00:00 --pty bash
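For reference, a minimal Slurm batch script of the kind submitted with sbatch name.slurm looks like the sketch below. The resource requests and program name are placeholders to adapt, not Magnolia defaults.

#!/bin/bash
#SBATCH --job-name=my_job          # name shown by squeue
#SBATCH --ntasks=1                 # number of tasks (processes)
#SBATCH --cpus-per-task=4          # CPU cores per task
#SBATCH --mem=4g                   # memory for the job
#SBATCH --time=1:00:00             # wall-clock time limit (HH:MM:SS)
#SBATCH --output=my_job.%j.out     # output file; %j expands to the job ID

./my_program                       # replace with your actual program or commands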