HPC Getting Started
Connecting to the cluster
If you don't already have an account, please fill out the cluster application form.
SSH access
To access the cluster's resources and issue commands, you need to use an SSH client. On Mac and Linux, simply run the following from a terminal:
ssh Your_NetID@login.storrs.hpc.uconn.edu
(where 'Your_NetID' is your own NetID, consisting of 3 letters and 5 numbers)
Linux and macOS users are encouraged to set up and use SSH keys.
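For example, the following commands set up key-based login from a Linux or macOS terminal. This is a minimal sketch: the ed25519 key type and the default key location are common conventions rather than cluster requirements, so adjust them to your own setup.
ssh-keygen -t ed25519                                  # generate a key pair; accept the default path and choose a passphrase
ssh-copy-id Your_NetID@login.storrs.hpc.uconn.edu      # append your public key to ~/.ssh/authorized_keys on the cluster
ssh Your_NetID@login.storrs.hpc.uconn.edu              # later logins use the key instead of prompting for your password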
Windows users can log in using PuTTY.
Once connected, you should see a terminal prompt like:
[Your_NetID@cn01 ~]$
Off-campus Access
SSH connections are limited to on-campus addresses from both the wired network and the "UCONN-SECURE" wireless network.
To connect to the HPC cluster from off campus, you will first need to connect to the UConn VPN, which is the recommended way to access the Storrs HPC cluster from outside the campus network. Windows and Mac users should follow the instructions on that page to install the VPN client; Linux users should follow these alternate instructions.
Submitting Jobs
All job submission, management, and scheduling is handled by the SLURM job scheduler. To learn more about job submission and management, please read our SLURM Guide.
Always run jobs via SLURM. If you do not, your process may be throttled or terminated.
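As an illustration, a minimal batch script might look like the sketch below. The job name, resource amounts, and program name are placeholders, not recommended values; consult the SLURM Guide and the node tables below to choose limits that fit your work.
#!/bin/bash
#SBATCH --job-name=example       # a short name for the job
#SBATCH --ntasks=1               # number of tasks (processes)
#SBATCH --cpus-per-task=4        # CPU cores per task
#SBATCH --mem=8G                 # memory for the whole job
#SBATCH --time=01:00:00          # wall-clock limit (HH:MM:SS)

./my_program input.dat           # replace with your own program and arguments
Save this as, for example, example.sh, submit it with "sbatch example.sh", and check its status with "squeue -u Your_NetID".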
Please read our usage policy for more details.
Standard Cluster Nodes
There are five classes of nodes available on the HPC cluster, each named after its Intel CPU architecture, as listed in the table below.
Name | CPU name | Nodes | Cores per Node | Cores Total | RAM (GB) | CPU Frequency (GHz) | Host Names |
---|---|---|---|---|---|---|---|
Sandy Bridge | Xeon E5-2650 | 40 | 16 | 640 | 64 | 2.00 | cn65 - cn104 |
Ivy Bridge | Xeon E5-2680 v2 | 32 | 20 | 640 | 128 | 2.80 | cn105 - cn136 |
Haswell | Xeon E5-2690 v3 | 175 | 24 | 4,200 | 128 | 2.60 | cn137 - cn324 |
Broadwell | Xeon E5-2699 v4 | 4 | 44 | 176 | 256 | 2.20 | cn325 - cn328 |
Skylake | Xeon Gold 6150 | 81 | 36 | 2,916 | 192 | 2.70 | cn329 - cn409 |
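To see the current state of these nodes, you can query SLURM directly. sinfo and scontrol are standard SLURM commands; the partition names and node states you see depend on the cluster's configuration, and cn137 below is simply one Haswell host from the table above used as an example.
sinfo                        # list partitions and node states (idle, alloc, down, ...)
sinfo -N -l                  # per-node listing with core counts and memory
scontrol show node cn137     # full details for a single node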
GPU Cluster Nodes
The following 22 GPU nodes are available on the HPC cluster.
For information on using the GPU nodes, please see the GPU Guide.
Partition | GPU name | Nodes | Cards per Node | CUDA Cores per Card | Tensor Cores per Card | RAM per Card (GB) | Host Names |
---|---|---|---|---|---|---|---|
gpu | NVIDIA Tesla K40m | 2 | 2 | 2,880 | 0 | 12 | gpu01 - gpu02 |
gpu_v100 | NVIDIA Tesla V100 | 4 | 1 or 3 | 5,120 | 640 | 16 or 24 | gpu03 - gpu11 |
gpu_gtx | NVIDIA GeForce GTX 1080 Ti | 11 | 2 or 4 | 3,584 | 0 | 11 | gtx01 - gtx11 |
gpu_rtx | NVIDIA GeForce RTX 2080 Ti | 5 | 8 | 4,352 | 0 | 11 | gtx12 - gtx16 |
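For example, a GPU job might request a card from one of these partitions as sketched below. The --gres=gpu:1 request uses standard SLURM syntax, but the exact GRES names and any additional options required on this cluster are an assumption here; the GPU Guide is the authoritative reference.
#!/bin/bash
#SBATCH --partition=gpu_v100     # one of the GPU partitions from the table above
#SBATCH --gres=gpu:1             # request one GPU card
#SBATCH --ntasks=1
#SBATCH --time=02:00:00

nvidia-smi                       # confirm which GPU was allocated
./my_gpu_program                 # replace with your own GPU application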
HPC applications
We have created helpful software guides to demonstrate how to effectively use popular scientific applications on the HPC cluster.
Troubleshooting
If you encounter errors, please read the FAQ first. For further assistance, visit the Help page for additional resources and contact information for technical support.