Usage policy

To be fair to all users of the cluster, please be aware of these resource limits and usage expectations.

Storage

Name            Path                Size (GB)          Persistence   Backed up?   Purpose
Home            ~                   2                  Permanent     Yes          Personal storage, available on every node
Group           /shared             By request         Permanent     Yes          Group storage for collaborative work
Fast            /scratch/scratch2   438,000 (shared)   2 weeks       No           Fast RAID-0 storage, generally for result files
Local to node   /work               100                5 days        No           Useful for large intermediate files, globally accessible from /misc/cnXX
  • Data deletion of directories inside the scratch2 folder is based on modification time; you will receive 3 warnings before deletion.
  • If you run ls on the /home, /shared, or /misc/cnXX directories, you may not see them. They are mounted on demand by autofs and only appear once you access a file under them or cd into them (see the example after this list).
  • You can recover files from the backed-up directories yourself using our snapshots for up to 2 weeks; beyond 2 weeks, contact us and we may be able to help.
  • You can check on your home directory quota.
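For example, the on-demand mounting and a quota check look like this from a login-node shell (cn01 is a placeholder node name, and the quota command assumes user quotas are enabled on the home filesystem):

    ls /misc            # cn01 may not be listed yet
    cd /misc/cn01       # autofs mounts the directory when it is accessed
    ls /misc            # cn01 is now visible

    quota -s            # summary of your quota, if quotas are enabled
    du -sh ~            # total size of your home directory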

Scheduled Jobs

Jobs submitted through the SLURM scheduler are subject to the following limits (see the example batch script after the table):

Job property       Standard QoS limit   Longrun QoS limit   Haswell384 QoS limit
Run time (hours)   36                   72                  18
Cores / CPUs       48                                       384
Jobs               8
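A minimal batch script might look like the sketch below; the lowercase QoS names, the program name, and the resource values are assumptions to be adapted to your own job, and the limits above still apply.

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --qos=standard           # or longrun / haswell384 (exact QoS names assumed)
    #SBATCH --time=36:00:00          # must not exceed the run-time limit of the chosen QoS
    #SBATCH --ntasks=48              # must not exceed the core limit of the chosen QoS
    #SBATCH --output=example_%j.out  # %j expands to the job ID

    srun ./my_program                # my_program is a placeholder

Submit the script with sbatch and monitor it with squeue -u $USER.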

Unscheduled programs

Node           Run time (minutes)   CPU limit   Memory limit
Login node     30                   5%          5%
Compute node   10                   Not allowed on compute nodes
  • We strongly discourage running programs on the cluster outside the SLURM scheduler.
  • Unscheduled programs are not allowed on the compute nodes.
  • Only the following programs are allowed to run unscheduled, and only on the login node (see the example after this list):
    • bzip
    • cp
    • du
    • emacs
    • fort
    • gcc
    • gfortran
    • gunzip
    • gzip
    • icc
    • mv
    • sftp
    • smbclient
    • ssh
    • tar
    • vim
    • wget
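Light file management with the programs above is acceptable on the login node; anything compute-heavy should be submitted through SLURM instead, for example as an interactive job. The archive name and QoS name below are placeholders:

    tar czf results.tar.gz results/                              # allowed on the login node
    srun --qos=standard --ntasks=1 --time=01:00:00 --pty bash    # interactive shell on a compute node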