Rules

MCSR takes a very hands-off approach to supercomputing, but we do have a few rules that should be followed:

  1. All jobs should be run via PBS, whether from a batch script or an interactive session (see the example script after the rules list). Do not run jobs on the head node or on hpcwoods.
  2. Users should ensure their jobs don’t use more resources than they have reserved via PBS. Usually this is enforced by PBS, but users are responsible for keeping an eye on their jobs.
  3. Jobs should efficiently use resources. Use what you request.
  4. Users queuing multiple jobs at once should take care that their scripts do what they expect them to do.
    1. Jobs requiring less than 5 minutes of runtime should be combined with other, similar jobs into a single submission. This can be done with a simple for loop or with a program like GNU Parallel (see the batching sketch after the rules list).
    2. Users queuing more than 1,000 jobs should clear it with MCSR staff in advance.
  5. Some resources are reserved for certain types of jobs:
    1. The biggpu queue on Maple should only be used by jobs using the GPUs on those nodes.
    2. Catalpa should only be used by jobs needing more than 32 GB of memory.
  6. Do not upload any data that contains protected health information (PHI).
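
As a rough illustration of rule 1, a minimal PBS batch script might look like the sketch below. The job name, resource request, program, and input file are placeholders, and the exact resource-request syntax depends on the PBS flavor in use (a Torque-style nodes/ppn request is shown; PBS Pro uses select/ncpus instead), so check the MCSR documentation before copying it.

    #!/bin/bash
    #PBS -N example_job              # placeholder job name
    #PBS -l nodes=1:ppn=1            # one core on one node (Torque-style request)
    #PBS -l walltime=01:00:00        # request only the time you expect to need

    cd "$PBS_O_WORKDIR"              # start in the directory the job was submitted from
    ./my_program input.dat           # placeholder executable and input file

Submit the script with qsub and monitor it with qstat -u $USER. For interactive work, qsub -I requests a shell on a compute node, which keeps the work off the head node and hpcwoods.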
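
And as a sketch of rule 4.1, the script below batches many short tasks into one PBS job. The inputs/ directory, my_program, and the availability of GNU Parallel on the system are assumptions; adjust the core count and walltime to match what the batch actually needs.

    #!/bin/bash
    #PBS -N batched_short_tasks      # placeholder job name
    #PBS -l nodes=1:ppn=4            # four cores so four short tasks run at once
    #PBS -l walltime=02:00:00        # placeholder walltime covering the whole batch

    cd "$PBS_O_WORKDIR"

    # Run one task per reserved core with GNU Parallel
    # (a "module load parallel" may be needed first, if MCSR provides it as a module).
    parallel -j 4 ./my_program {} ::: inputs/*.dat

    # Simpler serial alternative (request only one core if you use this instead):
    # for f in inputs/*.dat; do
    #     ./my_program "$f"
    # done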

Users found to be ignoring these rules are subject to discipline, up to and including loss of all MCSR privileges.

If you have any questions about what is allowed on MCSR systems, please contact the staff.