Usage policy

From Storrs HPC Wiki
The Storrs HPC cluster is a shared resource available to all researchers on all campuses. Use of the cluster is subject to all applicable university policies, including the [http://policy.uconn.edu/2012/06/21/acceptable-use-information-technology/ Information Technology Acceptable Use] policy. The cluster cannot be used to generate or store data that has been classified as [http://security.uconn.edu/extended-list-of-confidential-data/ Sensitive University Data] or covered by the university's [http://policy.uconn.edu/2015/12/16/export-control-policy/ Export Control Policy]. All data stored on the cluster is subject to these restrictions, and data that is not in compliance may be removed. Please familiarize yourself with the data storage guidelines described in the [[Data Storage Guide]].

Additionally, before using the cluster, please familiarize yourself with the procedures listed below.
= Scheduled Jobs =

All computational jobs need to be submitted to the cluster using the job scheduler. Please read the [[SLURM Guide]] for helpful information on using the scheduler. Listed below are the runtime and resource limits for scheduled jobs.
{| class="wikitable sortable"
! Job property !! Default Partition (<code>general</code>) !! <code>serial</code> Partition !! <code>parallel</code> Partition !! <code>debug</code> Partition
|-
| Run time || 12 hours || 7 days || 6 hours || 30 minutes
|-
| Nodes || 8 || 4 || 16 || 1
|-
| Concurrent jobs ||colspan=4| 8
|}
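Within these limits, a batch script for the default <code>general</code> partition might look like the sketch below. The script name, program name, and resource numbers are illustrative placeholders, not a prescribed configuration:

```shell
# Hypothetical batch script for the default partition; the requested
# resources are placeholders chosen to stay within the limits above.
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=general      # default partition: 12-hour, 8-node limits
#SBATCH --time=11:59:00          # stay under the 12-hour run-time limit
#SBATCH --nodes=2                # well within the 8-node limit
#SBATCH --ntasks-per-node=4
srun ./my_program                # placeholder executable
EOF
grep -c '^#SBATCH' myjob.sh      # count the scheduler directives written
```

The script would then be submitted with <code>sbatch myjob.sh</code>; jobs requesting more time or nodes than the partition allows are rejected at submission.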
  
 
= Unscheduled programs =
 
= Unscheduled programs =
  
Programs that are running on login node (<code>login.storrs.hpc.uconn.edu</code>) without using the job scheduler, are subject to certain restrictions. Any program that violates these restrictions may be throttled or terminated without notice.
+
Programs that are running on a login node (<code>login.storrs.hpc.uconn.edu</code>) without using the job scheduler are subject to certain restrictions. Any program that violates these restrictions may be throttled or terminated without notice.
  
 
{| class="wikitable sortable"
 
{| class="wikitable sortable"
Line 27: Line 29:
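For a quick ad-hoc command on the login node, one way to be sure it never outlives the 20-minute limit is to wrap it in the standard <code>timeout</code> utility, which kills the command when the time expires. The file name below is illustrative:

```shell
printf 'example data\n' > mydata.txt   # small illustrative input file
timeout 20m gzip -kf mydata.txt        # gzip is on the allowed list; killed if it ran past 20 minutes
ls mydata.txt.gz                       # the compressed copy now exists alongside the original
```

Anything that genuinely needs more than a few minutes of CPU should be submitted through the scheduler instead.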
 
Below is a list of programs that are allowed on the login node without restrictions:
 
Below is a list of programs that are allowed on the login node without restrictions:
 
<div style="column-count:4;-moz-column-count:4;-webkit-column-count:4">
 
<div style="column-count:4;-moz-column-count:4;-webkit-column-count:4">
* awk
* basemount
* bash
* bzip
* chgrp
* chmod
* comsollauncher
* cp
* du
* emacs
* find
* fort
* gcc
* gfortran
* grep
* gunzip
* gzip
* icc
* ifort
* jservergo
* less
* ls
* make
* more
* mv
* ncftp
* nvcc
* perl
* rm
* rsync
* ruby
* setfacl
* sftp
* smbclient
* ssh
* tail
* tar
* ukbfetch
* vim
* wget
* x2goagent
</div>
 
= Data Storage =
 
Please familiarize yourself with the data storage guidelines described on the [[HPC_Getting_Started#HPC_Storage_.28short_term.29|Getting Started]] page. All data that is stored on the cluster is subject to the restricted described on that page, and data that is not in compliance may be removed.
 
 
= Shared Read-Only Datasets =

Users who need read-only datasets can contact our administrators (hpc@uconn.edu) to request that a dataset be hosted. For example, bioinformatics researchers often need large reference datasets for different organisms. Such datasets are usually too large to store anywhere but /scratch, yet it is inconvenient to touch the files every 15 days to prevent their deletion. If you have a dataset like this, we can store it for you. The dataset must meet the following requirements:

* the dataset is read-only, not writable or executable
* the dataset is public (can be used by other users) or is restricted to a group of users

Shared datasets live under /scratch/scratch2/shareddata/ and are stored there permanently. We currently host four reference datasets in the genome directory: hg19, hg38, mm9, and mm10.

To make the path shorter, you can create a symbolic link to a dataset under your home directory. For example:

 $ cd
 $ ln -s /scratch/scratch2/shareddata/genome ./genome
 

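The same <code>ln -s</code> pattern can be tried anywhere; the sketch below substitutes a stand-in directory for the real shared-data path, which only exists on the cluster:

```shell
mkdir -p /tmp/shareddata/genome           # stand-in for /scratch/scratch2/shareddata/genome
ln -sfn /tmp/shareddata/genome ./genome   # -f/-n replace an existing link safely on re-runs
readlink ./genome                         # prints the path the link points to
```

After this, <code>cd ~/genome</code> lands in the shared dataset directory without typing the full path.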
Latest revision as of 16:08, 12 April 2021
