Usage policy

To be fair to all users of the cluster, please be aware of these resource limits and usage expectations.

Storage

Name           Path       Size (GB)         Persistence  Backed up?  Purpose
Home           ~          2                 Permanent    Yes         Personal storage, available on every node
Group          /shared    By request        Permanent    Yes         Group storage for collaborative work
Fast           /scratch   220,000 (shared)  2 weeks      No          Fast RAID-0 storage, generally for result files
Local to node  /work      100               5 days       No          Useful for large intermediate files, globally accessible from /misc/cnXX
  • Data deletion is based on modification time. You will get 3 warnings before expired files or directories are deleted (see the example below for checking how old your files are).
  • If you try to run ls on the /home, /shared, or /misc/cnXX directories, you might not see them. They are invisible because they are mounted on demand by autofs, only when you access a file under them or use cd to enter the directory structure.
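
Because expiry is based on modification time, you can check which of your files are close to the limit with standard find options. This is only a sketch: it assumes your files live in a directory named after your account under /scratch and /work, which may not match the actual layout.

# List your files on /scratch not modified within the last 14 days (the 2-week limit)
find /scratch/$USER -type f -mtime +14 -ls

# The same idea for /work, which expires after 5 days
find /work/$USER -type f -mtime +5 -ls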

Scheduled Jobs

Jobs submitted through the slurm scheduler are subject to the following limits:

Job property      Standard Limit  Longrun Limit
Run time (hours)  36              72
Cores / CPUs      48              48
Nodes             4               4
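
For illustration, a minimal sbatch script that stays within the standard limits might look like the sketch below. The job name, program, and lack of an explicit partition are placeholders, not taken from this page; only the time, core, and node values come from the table above. Submit it with sbatch and check its state with squeue -u $USER.

#!/bin/bash
#SBATCH --job-name=example      # placeholder name
#SBATCH --nodes=2               # policy allows at most 4 nodes
#SBATCH --ntasks=48             # policy allows at most 48 cores/CPUs
#SBATCH --time=36:00:00         # standard limit is 36 hours (72 under the longrun limit)

# Replace with the program you actually want to run
srun ./my_program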

Unscheduled programs

Node          Run time (minutes)  CPU limit  Memory limit
Login node    30                  5%         5% (or 3.2 GB)
Compute node  10                  Not allowed on compute nodes!
  • We strongly discourage running programs on the cluster outside of the slurm scheduler (see the examples after this list).
  • Running unscheduled programs on the compute nodes is not allowed.
  • Only the following programs are allowed to run unscheduled, and only on the login node:
    • bzip
    • cp
    • du
    • emacs
    • fort
    • gcc
    • gfortran
    • gunzip
    • gzip
    • icc
    • mv
    • sftp
    • smbclient
    • ssh
    • tar
    • vim
    • wget
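
Two hedged examples of working within these rules: checking how much CPU and memory your own processes are using on the login node, and pushing anything heavier through the scheduler instead. The srun resource values and program name are placeholders; request whatever your job actually needs within the scheduled-job limits above.

# Show your own login-node processes with elapsed time, %CPU and %MEM
ps -u $USER -o pid,etime,pcpu,pmem,comm

# Instead of running a heavy program unscheduled, run it through slurm
srun --ntasks=1 --time=00:30:00 ./my_program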

Appendix: Storage

Check size of home directory

You can check your home directory quota usage with:

/usr/lpp/mmfs/bin/mmlsquota -j $(whoami) gpfs2

Example output:

                         Block Limits                                    |     File Limits
Filesystem type             KB      quota      limit   in_doubt    grace |    files   quota    limit in_doubt    grace  Remarks
gpfs2      FILESET        6272          0    2097152          0     none |      165       0        0        0     none 

Your current usage is the value in the KB column and your allowed limit is the value in the limit column of the Block Limits section (both in kilobytes); in this example, 6272 KB (about 6 MB) used out of a 2097152 KB (2 GB) limit.
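
For a quick, quota-independent look at how much space your home directory takes up, the standard du command also works; note that the enforced limit is the GPFS quota reported by mmlsquota, not the du figure.

# Summarize the size of your home directory in human-readable units
du -sh ~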

Recover deleted files

Deleted files can be recovered from file system snapshots. Snapshots of your home directory are available in:

/gpfs/gpfs2/.snapshots/<date>/home/<user>

Snapshots of shared group directories are available in:

/gpfs/gpfs2/.snapshots/<date>/shared/<group>
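
As a sketch of the recovery workflow (the snapshot date directory and the file name here are placeholders; list the .snapshots directory to see what actually exists):

# See which snapshot dates are available
ls /gpfs/gpfs2/.snapshots/

# Copy a deleted file from a snapshot of your home directory back into place
cp -a /gpfs/gpfs2/.snapshots/<date>/home/$(whoami)/myfile.txt ~/myfile.txt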