Usage policy

From Storrs HPC Wiki

Revision as of 10:08, 22 February 2016

To be fair to all users of the cluster, please be aware of these resource limits and usage expectations.

== Storage ==

{| class="wikitable sortable"
! Name !! Path !! Size (GB) !! Persistence !! Backed up? !! Purpose
|-
| Home || <code>~</code> || 2 || Permanent || Yes || Personal storage, available on every node
|-
| Group || <code>/shared</code> || [[:Category:Help|By request]] || Permanent || Yes || Group storage for collaborative work
|-
| Fast || <code>/scratch/scratch2</code> || 438,000 (shared) || '''2 weeks''' || No || Fast RAID-0 storage, generally for result files
|-
| Local to node || <code>/work</code> || 100 || '''5 days''' || No || Useful for large intermediate files, globally accessible from <code>/misc/cnXX</code>
|}
* Data deletion of directories inside the '''scratch2''' folder is based on modification time. You will get 3 warnings before deletion.
* If you run <code>ls</code> on the <code>/home</code>, <code>/shared</code>, or <code>/misc/cnXX</code> directories, you might not see them. They are invisible because they are mounted on demand by <code>autofs</code>, either when a file under the directory is accessed or when you <code>cd</code> into the directory structure.
* You can recover files on your own from our backed-up directories using our snapshots within 2 weeks. Beyond 2 weeks, we may be able to help if you contact us.
* You can check on your home directory quota.
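A quick way to check usage from the login node is with standard disk tools. This is only a sketch: the cluster may provide its own quota-reporting command, and the per-user scratch path below is an assumption about the directory layout.

```shell
# Filesystem-level view of the volume holding your home directory
df -h ~

# Total size of your home directory, to compare against the 2 GB quota
du -sh ~

# List scratch entries not modified in the last 14 days, i.e. candidates
# for the 2-week purge; /scratch/scratch2/$USER is an assumed layout
find /scratch/scratch2/"$USER" -mtime +14 2>/dev/null || true
```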

== Scheduled Jobs ==

Jobs submitted through the Slurm scheduler are subject to these limits:

{| class="wikitable"
! Job property !! Standard QoS limit !! Longrun QoS limit !! Haswell384 QoS limit
|-
| Run time (hours) || 36 || 72 || 18
|-
| Cores / CPUs || 48 || || 384
|-
| Jobs || 8 || ||
|}
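A batch script that stays within the standard QoS limits above might look like the following sketch. The QoS name and <code>my_program</code> are assumptions, not cluster-confirmed values:

```shell
#!/bin/bash
#SBATCH --qos=standard       # QoS name is an assumption; confirm with your admin
#SBATCH --time=36:00:00      # at the 36-hour standard run-time limit
#SBATCH --ntasks=48          # at the 48-core standard limit
#SBATCH --output=job_%j.out

# Keep large result files on fast scratch, not the 2 GB home directory
cd /scratch/scratch2/"$USER"
srun ./my_program            # my_program is a placeholder for your executable
```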

== Unscheduled programs ==

{| class="wikitable"
! Node !! Run time (minutes) !! CPU limit !! Memory limit
|-
| Login node || 30 || 5% || 5% (or 3.2 GB)
|-
| Compute node || 10 || colspan="2" | Not allowed on compute nodes!
|}
* We strongly discourage running programs on the cluster outside the Slurm scheduler.
* Programs on the compute nodes are not allowed.
* Some programs are allowed on the login node only:
** bzip
** cp
** du
** emacs
** fort
** gcc
** gfortran
** gunzip
** gzip
** icc
** mv
** sftp
** smbclient
** ssh
** tar
** vim
** wget
Retrieved from "https://wiki.hpc.uconn.edu/index.php?title=Usage_policy&oldid=2286"