HPC Getting Started
Connecting to the cluster
If you don't already have an account, please fill out the cluster application form.
To access cluster resources and run commands, you use SSH (Secure Shell), the industry standard for remote access and command execution.
SSH access
On Mac and GNU/Linux, from a terminal simply run:
ssh <Your Net ID>@login.storrs.hpc.uconn.edu
Windows users can log in using PuTTY.
This gives you access to a login node, and you should see a terminal prompt like:
[<Your Net ID>@cn01 ~]$
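If you connect often, you can optionally define a host alias in your OpenSSH client configuration so that a short name stands in for the full address. This is only a convenience sketch for Mac and GNU/Linux clients; the alias name hpc is arbitrary:
# ~/.ssh/config
Host hpc
    HostName login.storrs.hpc.uconn.edu
    User <Your Net ID>
After saving the file, ssh hpc is equivalent to the full command above.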
Off-campus Access
SSH connections are limited to on-campus addresses, on both the wired network and the "UCONN-SECURE" wireless network.
There are three ways to connect to HPC from off campus:
- VPN: The UConn VPN is the recommended way to access the Storrs HPC cluster from off campus. Windows and Mac users should follow the instructions on that page for installing the VPN client. Linux users should follow these alternate instructions.
- UConn Skybox: Log in to a virtual desktop and then access the cluster via PuTTY.
- Engineering SSH: If you have a School of Engineering account, you can log in to their SSH relay, icarus.engr.uconn.edu, and then SSH to the cluster. This process is outlined as follows:
[<Your User>@<Your Hostname>]$ ssh <Your Net ID>@icarus.engr.uconn.edu
[<Your Net ID>@icarus.engr.uconn.edu]$ ssh <Your Net ID>@login.storrs.hpc.uconn.edu
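If your OpenSSH client is version 7.3 or newer, the two hops above can also be combined into a single command with the -J (jump host) option; this is an optional shortcut, not a required step:
ssh -J <Your Net ID>@icarus.engr.uconn.edu <Your Net ID>@login.storrs.hpc.uconn.edu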
Submitting Jobs
All job submission, management, and scheduling are handled by the SLURM job scheduler. To learn more about submitting and managing jobs, please read our SLURM Guide.
Always run jobs via SLURM. If you do not, your process may be throttled or terminated.
Please read our usage policy for more details.
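As a minimal illustration of how a batch job is handed to SLURM, the sketch below shows a small submission script; the resource requests, the job name, and the file name myjob.sh are placeholders, and the SLURM Guide documents the options supported on this cluster:
#!/bin/bash
#SBATCH --job-name=myjob        # label shown in the queue
#SBATCH --nodes=1               # number of nodes to allocate
#SBATCH --ntasks=1              # number of tasks (processes) to run
#SBATCH --time=01:00:00         # wall-clock limit, HH:MM:SS
#SBATCH --output=myjob_%j.out   # output file; %j expands to the job ID

# Commands below run on the allocated compute node
hostname
date
Submit the script with sbatch myjob.sh and check its progress with squeue -u <Your Net ID>.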
Overview of cluster nodes
There are four classes of nodes available on the HPC cluster, each named after its Intel CPU architecture, as listed in the table below.
Name | CPU name | Nodes | Cores per Node | Cores Total | RAM (GB) | CPU Frequency (GHz) | Host Names |
---|---|---|---|---|---|---|---|
Broadwell | Xeon E5-2699 v4 | 4 | 44 | 176 | 256 | 2.20 | cn325 - cn328 |
Haswell | Xeon E5-2690 v3 | 175 | 24 | 4,200 | 128 | 2.60 | cn137 - cn324 |
Ivy Bridge | Xeon E5-2680 v2 | 32 | 20 | 640 | 128 | 2.80 | cn105 - cn136 |
Sandy Bridge | Xeon E5-2650 | 40 | 16 | 640 | 64 | 2.00 | cn65 - cn104 |
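If you would like to verify these figures yourself, SLURM's standard query tools will report what each node offers; for example:
sinfo -N -l                   # list every node with its CPU count, memory, and state
scontrol show node cn137      # detailed hardware and state information for a single node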
GPUs are installed in two nodes. How to submit jobs to these nodes is described in the GPU Guide.
GPU name | Nodes | Cards per Node | Cores per Card | Cores Total | RAM on Card (GB) | Host Names |
---|---|---|---|---|---|---|
NVIDIA Tesla K40m | 2 | 2 | 2,880 | 11,520 | 12 | gpu01, gpu02 |
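As a rough sketch only: on SLURM clusters where GPUs are exposed as generic resources, a job typically requests a card with the --gres option, along the lines of the script below. The partition name gpu here is an assumption, not a confirmed setting; see the GPU Guide for the options actually used on this cluster:
#!/bin/bash
#SBATCH --gres=gpu:1        # request one GPU card (assumes gres is configured for the GPU nodes)
#SBATCH --partition=gpu     # hypothetical partition name; check the GPU Guide for the real one
#SBATCH --ntasks=1
#SBATCH --time=00:30:00

nvidia-smi                  # print the GPU assigned to this job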
HPC applications
We have created helpful software guides to demonstrate how to effectively use popular scientific applications on the HPC cluster.
Troubleshooting
If you encounter any errors, please read the FAQ first. For further assistance, visit the Help page, which lists additional resources and contact information for technical support.