HPC Getting Started


Connecting to the cluster

If you don't already have an account, please fill out the cluster application form.

SSH access

To access the cluster resources and send commands, you need an SSH client. On macOS and Linux, open a terminal and run:

ssh <Your Net ID>@login.storrs.hpc.uconn.edu

For Linux and macOS users, it is recommended to set up and use SSH keys.
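
For example, a typical OpenSSH key setup looks like the following sketch (the ed25519 key type and default file locations are just common choices):

ssh-keygen -t ed25519                                  # generate a key pair; a passphrase is recommended
ssh-copy-id <Your Net ID>@login.storrs.hpc.uconn.edu   # install the public key on the cluster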

Windows users can log in using PuTTY.

Once connected, you should see a terminal prompt like:

[<Your Net ID>@cn01 ~]$

Off-campus Access

SSH connections are limited to on-campus addresses from both the wired network and the "UCONN-SECURE" wireless network.

To connect to the cluster from off campus, you will first need to connect to the UConn VPN, which is the recommended way to access the Storrs HPC cluster remotely. Windows and macOS users should follow the instructions on that page to install the VPN client; Linux users should follow these alternate instructions.
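
On Linux, one widely used open-source client is openconnect; a minimal sketch is shown below (the actual client and VPN server address come from the linked instructions, so treat both as placeholders here):

sudo openconnect --user=<Your Net ID> <VPN server address>   # keep this running while you work; Ctrl+C disconnects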

Submitting Jobs

All job submission, management, and scheduling is handled by the SLURM job scheduler. To learn more about submitting and managing jobs, please read our SLURM Guide.
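
As a quick illustration, a minimal batch job might look like the following sketch (the job name, resource requests, and program are placeholders; the partitions and limits that apply on this cluster are described in the SLURM Guide):

#!/bin/bash
#SBATCH --job-name=example        # placeholder job name
#SBATCH --ntasks=1                # run a single task
#SBATCH --cpus-per-task=4         # with four CPU cores
#SBATCH --time=01:00:00           # one-hour wall-clock limit
#SBATCH --output=example_%j.out   # output file; %j expands to the job ID

./my_program                      # placeholder for the program you actually run

Save the script as example.sh, then submit and monitor it with:

sbatch example.sh
squeue -u $USER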

Always run jobs via SLURM. If you do not, your process may be throttled or terminated.

Please read our usage policy for more details.

Standard Cluster Nodes

There are five classes of nodes available on the HPC cluster, each named after its Intel CPU microarchitecture, as listed in the table below.

Configuration of each type of CPU compute node

Name         | CPU name        | Nodes | Cores per Node | Cores Total | RAM (GB) | CPU Frequency (GHz) | Host Names
Sandy Bridge | Xeon E5-2650    | 40    | 16             | 640         | 64       | 2.00                | cn65 - cn104
Ivy Bridge   | Xeon E5-2680 v2 | 32    | 20             | 640         | 128      | 2.80                | cn105 - cn136
Haswell      | Xeon E5-2690 v3 | 175   | 24             | 4,200       | 128      | 2.60                | cn137 - cn324
Broadwell    | Xeon E5-2699 v4 | 4     | 44             | 176         | 256      | 2.20                | cn325 - cn328
Skylake      | Xeon Gold 6150  | 81    | 36             | 2,916       | 192      | 2.70                | cn329 - cn409
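
From a login session, the current state of these nodes can be checked with standard SLURM query commands, for example:

sinfo                       # list partitions and how many nodes are idle, allocated, or down
scontrol show node cn137    # detailed configuration of a single node (here a Haswell node from the table)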

GPU Cluster Nodes

The following 22 GPU nodes are available on the HPC cluster.

For information on using the GPU nodes, please see the GPU Guide.

Configuration of each type of GPU compute node

Partition | GPU name                   | Nodes | Cards per Node | CUDA Cores per Card | Tensor Cores per Card | RAM per Card (GB) | Host Names
gpu       | NVIDIA Tesla K40m          | 2     | 2              | 2,880               | 0                     | 12                | gpu01 - gpu02
gpu_v100  | NVIDIA Tesla V100          | 4     | 1 or 3         | 5,120               | 640                   | 16 or 24          | gpu03 - gpu11
gpu_gtx   | NVIDIA GeForce GTX 1080 Ti | 11    | 2 or 4         | 3,584               | 0                     | 11                | gtx01 - gtx11
gpu_rtx   | NVIDIA GeForce RTX 2080 Ti | 5     | 8              | 4,352               | 0                     | 11                | gtx12 - gtx16
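
The partition names above are what you pass to SLURM when requesting GPU resources. A minimal sketch, assuming the standard --partition and --gres options apply here (the GPU Guide describes the exact options required on this cluster):

#SBATCH --partition=gpu_v100   # partition name taken from the table above
#SBATCH --gres=gpu:1           # request one GPU card on the node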

HPC applications

We have created helpful software guides to demonstrate how to effectively use popular scientific applications on the HPC cluster.
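
Most HPC systems make such applications available through environment modules; assuming that is the case here (the module name below is a placeholder), typical usage looks like:

module avail       # list the software installed on the cluster
module load gcc    # load one package into your environment (placeholder name)
module list        # show what is currently loaded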

Troubleshooting

For any errors, please read the FAQ first. For further assistance, visit the Help page, which has additional resources and contact information for technical support.