To be fair to all users of the cluster, please be aware of these resource limits and usage expectations.

The Storrs HPC cluster is a shared resource available to all researchers on all campuses. Use of the cluster is subject to all applicable university policies, including the [http://policy.uconn.edu/2012/06/21/acceptable-use-information-technology/ Information Technology Acceptable Use] policy. The cluster cannot be used to generate or store data that has been classified as [http://security.uconn.edu/extended-list-of-confidential-data/ Sensitive University Data] or covered by the university's [http://policy.uconn.edu/2015/12/16/export-control-policy/ Export Control Policy]. All data that is stored on the cluster is subject to these restrictions, and data that is not in compliance may be removed. Please familiarize yourself with the data storage guidelines described in the [[Data Storage Guide]].

Additionally, before using the cluster, please familiarize yourself with the procedures listed below.

= Storage =

{| class="wikitable sortable"
! Name          !! Path                  !! Size (GB)                     !! Persistence   !! Backed up? !! Purpose
|-
| Home          || <code>~</code>        || 2                             || Permanent     || Yes        || Personal storage, available on every node
|-
| Group         || <code>/shared</code>  || [[:Category:Help|By request]] || Permanent     || Yes        || Group storage for collaborative work
|-
| Fast          || <code>/scratch</code> || 220,000 (shared)              || '''2 weeks''' || No         || Fast RAID-0 storage, generally for result files
|-
| Local to node || <code>/work</code>    || 100                           || '''5 days'''  || No         || Useful for large intermediate files, globally accessible from <code>/misc/cnXX</code>
|}
 
* Data deletion is based on modification time. You will receive 3 warnings before expired files or directories are deleted.
* If you run <code>ls</code> on the <code>/home</code>, <code>/shared</code>, or <code>/misc/cnXX</code> directories, you might not see them. They are invisible because they are mounted on demand by <code>autofs</code>: the mount happens when an attempt is made to access a file under the directory, or when you use <code>cd</code> to enter the directory structure (see the short example below).
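
For example, a node's local <code>/work</code> directory can be reached through the automounter even though it is not listed at first. This is a short illustration only; <code>cn01</code> stands in for an actual node name:

 ls /misc         # the node directory may not appear in the listing yet
 cd /misc/cn01    # accessing the path triggers autofs to mount it on demand
 ls               # the node's local /work contents are now visible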
 
  
 
= Scheduled Jobs =

All computational jobs need to be submitted to the cluster using the job scheduler. Please read the [[SLURM_Guide|SLURM Guide]] for helpful information on using the scheduler. Listed below are the runtime and resource limits for scheduled jobs.
  
 
{| class="wikitable sortable"
! Job property     !! Default Partition (<code>general</code>) !! <code>serial</code> Partition !! <code>parallel</code> Partition !! <code>debug</code> Partition
|-
| Run time         || 12 hours || 7 days || 6 hours || 30 minutes
|-
| Nodes            || 8 || 4 || 16 || 1
|-
| Concurrent jobs  ||colspan=4| 8
|}
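
For example, a minimal batch script that stays within the <code>general</code> partition limits above might look like the following. This is a sketch only; the module, program, and resource numbers are placeholders, and it assumes the cluster provides environment modules:

 #!/bin/bash
 #SBATCH --partition=general       # partition from the table above
 #SBATCH --nodes=2                 # stays within the 8-node limit for general
 #SBATCH --ntasks-per-node=4
 #SBATCH --time=12:00:00           # must not exceed the 12-hour limit for general
 # Placeholder commands: load your software and run your program
 module load gcc
 srun ./my_program

Submit the script with <code>sbatch</code>; see the [[SLURM_Guide|SLURM Guide]] for details.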
  
 
= Unscheduled programs =

Programs that are run on a login node (<code>login.storrs.hpc.uconn.edu</code>) without using the job scheduler are subject to certain restrictions. Any program that violates these restrictions may be throttled or terminated without notice.
  
 
{| class="wikitable sortable"
! Run time (minutes) !! CPU limit !! Memory limit
|-
| 20                 || 5%        || 5%
|}

Running unscheduled programs on compute nodes is not allowed; all work on compute nodes must go through the scheduler (see the interactive session example below).

Below is a list of programs that are allowed on the login node without restrictions:

<div style="column-count:4;-moz-column-count:4;-webkit-column-count:4">
* awk
* basemount
* bash
* bzip
* chgrp
* chmod
* comsollauncher
* cp
* du
* emacs
* find
* fort
* gcc
* gfortran
* grep
* gunzip
* gzip
* icc
* ifort
* jservergo
* less
* ls
* make
* more
* mv
* ncftp
* nvcc
* perl
* rm
* rsync
* ruby
* setfacl
* sftp
* smbclient
* ssh
* tail
* tar
* ukbfetch
* vim
* wget
* x2goagent
</div>
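
If you need to run interactive or compute-intensive work, request an interactive session on a compute node through the scheduler rather than running the work unscheduled on the login node. A minimal sketch (the partition name comes from the Scheduled Jobs table above):

 # Request an interactive shell on a compute node (debug partition, 30-minute limit)
 srun --partition=debug --nodes=1 --ntasks=1 --time=00:30:00 --pty bash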
 
= Appendix: Storage =
 
 
== Check size of home directory ==
 
 
You can check your home directory quota usage with:
 
/usr/lpp/mmfs/bin/mmlsquota -j $(whoami) gpfs2
 
 
Example output:
 
                          Block Limits                                    |    File Limits
 
Filesystem type            <span style="color:red">KB</span>      quota      <span style="color:red">limit</span>  in_doubt    grace |    files  quota    limit in_doubt    grace  Remarks
 
gpfs2      FILESET        <span style="color:red">6272</span>          0    <span style="color:red">2097152</span>          0    none |      165      0        0        0    none
 
 
Your current usage and allowed limit are highlighted in red.
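
As a convenience, you can pull the used and allowed values out of that output and report them in GB. The one-liner below is a sketch, not an official tool, and assumes the column layout shown above (column 3 is the usage in KB and column 5 is the limit in KB):

 /usr/lpp/mmfs/bin/mmlsquota -j $(whoami) gpfs2 | awk '/FILESET/ {printf "Used %.2f GB of %.0f GB\n", $3/1048576, $5/1048576}'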
 
 
== Recovering deleted files ==
 
 
* File system snapshots are created every day and kept for 1 week.
 
* An additional snapshot is created every second Monday and is saved for 2 weeks.
 
* Users can access their home directory snapshots in:
 
 
/gpfs/gpfs2/.snapshots/<date>/home/<user>
 
 
Snapshots of shared directories are available in:
 
 
/gpfs/gpfs2/.snapshots/<date>/shared/<group>
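
For example, to restore an accidentally deleted file from a snapshot, you can list the available snapshot dates and copy the file back with <code>cp</code>. The date and file name below are placeholders:

 ls /gpfs/gpfs2/.snapshots                                        # list available snapshot dates
 cp /gpfs/gpfs2/.snapshots/<date>/home/$(whoami)/myfile.txt ~/    # restore a file to your home directory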
 
