HPC Getting Started
Overview of cluster nodes
There are four classes of nodes available on our HPC cluster, each named after its Intel CPU architecture, listed in the table below. In terms of raw CPU cycles, the Westmere and Sandy Bridge nodes are nearly equivalent, since the number of cores per node multiplied by the peak CPU frequency is about the same for both (12 × 2.67 GHz ≈ 32 GHz of aggregate clock, versus 16 × 2.00 GHz = 32 GHz).
|Name|CPU name|Nodes|Cores per Node|Cores Total|RAM (GB)|CPU Frequency (GHz)|Host Names|
|---|---|---|---|---|---|---|---|
|Westmere|Xeon X5650|60|12|720|48|2.67|cn01 - cn64|
|Sandy Bridge|Xeon E5-2650|40|16|640|64|2.00|cn65 - cn104|
|Ivy Bridge|Xeon E5-2680|32|20|640|128|2.80|cn105 - cn136|
|Haswell|Xeon E5-2690|112|24|2688|128|2.60|cn137 - cn248|
The GPU cluster is installed on a subset of the Westmere nodes, listed in the table below. How to submit jobs to a specific node class is described here.
|GPU name|Nodes|Cards per Node|Cores per Card|Cores Total|RAM on Card (MB)|GPU Frequency (GHz)|Host Names|
|---|---|---|---|---|---|---|---|
|NVIDIA Tesla M2050|4|8|448|14336|2687|1.15|cn17, cn18, cn35, cn36|
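GPU jobs are likewise submitted through SLURM (see the job submission notes below). As a sketch only, a job script would typically request cards through SLURM's generic-resource option; the partition name gpu here is our assumption, so check the SLURM Guide for the names actually used on this cluster:

#!/bin/bash
#SBATCH --partition=gpu   # assumed partition name; see the SLURM Guide
#SBATCH --gres=gpu:2      # request 2 of the 8 M2050 cards on one node
#SBATCH --ntasks=1

./my_cuda_program         # placeholder executable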
Connecting to the cluster
If you don't have an account, learn how to get one here.
To access the cluster's resources and run commands, you use SSH (Secure Shell), the industry standard for remote access and command execution.
SSH client software
No installation is necessary on Mac or GNU/Linux, as both come with ssh preinstalled.
Windows users can use PuTTY.
On Mac and GNU/Linux, open a terminal and simply run:
ssh <Your Net ID>@login.storrs.hpc.uconn.edu
This gives you access to the login node, cn65, and you should see a terminal prompt like:
[<Your Net ID>@cn65 ~]$
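If you connect often, an OpenSSH client configuration entry can shorten the command. This is a minimal sketch for Mac or GNU/Linux; the alias hpc and the NetID abc12345 are placeholders of our own, not site-mandated names. Add to ~/.ssh/config on your local machine:

Host hpc
    HostName login.storrs.hpc.uconn.edu
    User abc12345

After that, ssh hpc is equivalent to the full command above.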
For security reasons, we only accept SSH connections from on-campus addresses, i.e., from the "UCONN-SECURE" wireless network or the wired campus network.
Thus, there are three ways to connect to the HPC cluster from off campus:
- VPN: GNU/Linux users can simply install OpenConnect version 7 or later and connect to the VPN with:
openconnect --juniper sslvpn.uconn.edu
- UConn Skybox: Log into a virtual Windows PC through UConn Skybox, then access the HPC cluster via PuTTY inside the virtual PC.
- Engineering SSH: Alternatively, if you have a School of Engineering account, you can first SSH to icarus.engr.uconn.edu and from there SSH to the cluster, as sketched below.
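With an OpenSSH client (version 7.3 or later), the two Engineering SSH hops can be combined into a single command using the jump-host flag; abc12345 is a placeholder NetID:

ssh -J abc12345@icarus.engr.uconn.edu abc12345@login.storrs.hpc.uconn.edu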
Transferring files in and out of the cluster
Before running your code on the cluster, you first need to upload your code and data. Afterwards, you can download the results from the cluster to your local machine. Check our file transfer guide for details.
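As a quick illustration, standard tools such as scp and rsync work against the login node; the directory names below are placeholders:

# upload a local directory into your home directory on the cluster
scp -r my_project <Your Net ID>@login.storrs.hpc.uconn.edu:~/

# download results back to the local machine
rsync -av <Your Net ID>@login.storrs.hpc.uconn.edu:~/my_project/results/ ./results/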
Submitting jobs
All job submission, management, and scheduling is done using SLURM. To learn more about job submission and management, please read our SLURM Guide.
Always run jobs via SLURM; any process started outside of it will be killed.
Please read our usage policy for more details.
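For orientation, a minimal CPU batch script might look like the sketch below; the job name and executable are placeholders, and partition choices and limits should be taken from the SLURM Guide rather than from this example:

#!/bin/bash
#SBATCH --job-name=example   # placeholder job name
#SBATCH --nodes=1            # run on a single node
#SBATCH --ntasks=12          # e.g. all 12 cores of a Westmere node
#SBATCH --time=01:00:00      # one-hour wall-clock limit

./my_program                 # placeholder executable

Submit it with sbatch job.sh and check its status with squeue -u $USER.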
How to Use a Specific Software Application
See the software guides.
For any errors, please check the FAQ first.