MCSR takes a very hands-off approach to supercomputing, but we do have a few rules that should be followed:
- All jobs should be run via PBS, whether through a script or an interactive session. No running jobs on the head node or hpcwoods.
- Users should ensure their jobs don't use more resources than they have reserved via PBS. PBS usually enforces this, but users are responsible for monitoring their own jobs.
- Jobs should efficiently use resources. Use what you request.
- Users queuing multiple jobs at once should verify that their scripts do what they expect before submitting.
- Jobs requiring less than 5 minutes of runtime should be combined with other, similar jobs. This can be done with a simple for loop or with a program like GNU Parallel.
- Users queuing more than 1,000 jobs should clear it with MCSR staff in advance.
- Some resources are reserved for certain types of jobs:
  - biggpuqueue on Maple should only be used by jobs using the GPUs on those nodes.
  - Catalpa should only be used by jobs needing more than 32GB of memory.
- Do not upload any data that contains protected health information (PHI).
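For the rule about combining short jobs, a minimal sketch of a batched PBS script might look like the following. The job name, resource requests, and input file names are placeholders, not actual MCSR defaults; the for loop stands in for whatever short task would otherwise be submitted as many separate jobs.

```shell
#!/bin/bash
# Hypothetical PBS script that batches several sub-5-minute tasks
# into a single job. Resource requests below are illustrative only.
#PBS -N batched-short-tasks
#PBS -l nodes=1:ppn=1
#PBS -l walltime=01:00:00

# cd "$PBS_O_WORKDIR"   # under PBS, start from the submission directory

# Run each short task in sequence inside one job
# (input_1.dat etc. are placeholder file names):
for i in 1 2 3; do
    echo "processing input_${i}.dat"   # replace with the real short task
done

# Alternatively, if GNU Parallel is available, the tasks could be run
# concurrently within the job's reserved cores, e.g.:
#   parallel ./short_task {} ::: input_*.dat
```

Submission would then be a single `qsub` of this script rather than one `qsub` per short task.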
Users found to be ignoring these rules are subject to discipline, up to and including loss of all MCSR privileges.
If you have any questions about what is allowed on MCSR systems, please contact the staff.