Usage policy

From Storrs HPC Wiki
Revision as of 16:36, 22 May 2018 by Jar02014

The Storrs HPC cluster is a shared resource available to all researchers on all campuses. Use of the cluster is subject to all applicable university policies, including the Information Technology Acceptable Use policy. The cluster cannot be used to generate or store data that has been classified as Sensitive University Data or covered by the university's Export Control Policy. All data that is stored on the cluster is subject to these restrictions, and data that is not in compliance may be removed. Please familiarize yourself with the data storage guidelines described in the Data Storage Guide.

Additionally, before using the cluster, please familiarize yourself with the procedures listed below.

Scheduled Jobs

All computational jobs need to be submitted to the cluster using the job scheduler. Please read the SLURM Guide for helpful information on using the scheduler. Listed below are the runtime and resource limits for scheduled jobs.

Job property      general (default)   serial    parallel
Run time          12 hours            7 days    6 hours
CPU cores         192                 96        384
Concurrent jobs   8
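The limits above can be requested explicitly in a batch script. Below is a minimal sketch, assuming the partition names shown in the table; `./my_sim` and the core count are placeholders, not values from this policy:

```shell
#!/bin/bash
#SBATCH --partition=general      # default partition: 12-hour run-time limit
#SBATCH --time=12:00:00          # requested wall time; must not exceed the partition limit
#SBATCH --ntasks=48              # CPU cores requested (within the 192-core cap)
#SBATCH --job-name=my_sim

# ./my_sim stands in for your own program
srun ./my_sim
```

Submit the script with `sbatch`; jobs requesting more than a partition allows are typically rejected or held by the scheduler rather than run.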

Unscheduled programs

Programs running on a login node (i.e., without using the job scheduler) are subject to the restrictions below. Any program that violates these restrictions may be throttled or terminated without notice.

Run time (minutes) CPU limit Memory limit
20 5% 5%
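Anything heavier than the small utilities listed below should be run on a compute node rather than the login node. One quick way to do that, assuming a standard SLURM setup and the partition names above, is an interactive allocation:

```shell
# Request an interactive shell on a compute node in the general partition
# (1 core for 1 hour; adjust the request within the limits above)
srun --partition=general --ntasks=1 --time=01:00:00 --pty bash
```

Once the shell starts you are on a compute node, so the login-node CPU and memory caps no longer apply to your work.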

Below is a list of programs that are allowed on the login node without restrictions:

  • awk
  • bash
  • bzip
  • chmod
  • cp
  • du
  • emacs
  • find
  • fort
  • gcc
  • gfortran
  • gunzip
  • gzip
  • icc
  • ifort
  • less
  • make
  • more
  • mv
  • nvcc
  • rm
  • rsync
  • sftp
  • smbclient
  • ssh
  • tail
  • tar
  • vim
  • wget
  • x2goagent