GPU Guide

The Storrs HPC environment contains six GPU compute nodes: two with a pair of NVIDIA Tesla K40m GPUs each, and four with three NVIDIA Tesla V100 GPUs each. Details about the cards are on this page. These nodes are available in the partition named gpu and can be used by all researchers. Compute jobs submitted to this partition can run for up to twelve hours.
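
To check the current state of the gpu partition before submitting work, you can use Slurm's standard sinfo command (the node names it reports should match the gpu01-gpu06 names used below):

sinfo -p gpu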

Basic Information

For most work on the GPU nodes, you'll first need to load a CUDA module. To list the available CUDA versions, use the module command:

$ module available cuda
 
------------------- /apps2/Modules/3.2.6/modulefiles -------------------
cuda/7.0    cuda/7.5    cuda/8.0    cuda/8.0.61    cuda/9.1

At the time of writing, version 9.1 is the latest version installed, so we can load it with:

module load cuda/9.1
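
You can confirm that the compiler is now on your path, and that the expected version was loaded (it should report release 9.1), with:

nvcc --version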

To compile CUDA code using the NVIDIA CUDA compiler, nvcc:

nvcc {MYFILE.cu} -o {OUTPUT_FILE}
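
As a quick end-to-end test, here is a minimal, self-contained CUDA program that adds two vectors on the GPU. The file name vector_add.cu and its contents are illustrative, not part of the cluster documentation:

#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                  // One million elements
    const size_t bytes = n * sizeof(float);

    // Allocate and fill the host arrays.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device arrays and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check one element.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

On a GPU node, compile and run it with:

nvcc vector_add.cu -o vector_add
./vector_add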

Batch Jobs

The script below serves as an example of submitting a job to the scheduler that uses a single GPU card for up to four hours. Change the GRES count, the output and error file names, and the time limit to meet your needs.

#!/bin/bash
#SBATCH --partition=gpu    # Submit to the GPU partition
#SBATCH --gres=gpu:1       # Request a single GPU card. Max value is 2 for K40m and 3 for V100 GPU nodes
#SBATCH -o gpujob.out      # File to receive standard output
#SBATCH -e gpujob.err      # File to receive standard error
#SBATCH --time=04:00:00    # Wall-clock limit of four hours

{COMMAND}

If you want to use only the K40m nodes, add this line to your batch script:

#SBATCH --exclude=gpu[03-06]

To use only the V100 GPUs, add this line instead:

#SBATCH --exclude=gpu[01-02]

Then, submit the script (saved here as gpu.sh) to the job scheduler:

sbatch gpu.sh
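
Once it is submitted, you can check the job's state with standard Slurm commands (the job ID below is a placeholder):

squeue -u $USER       # list your pending and running jobs
sacct -j <jobid>      # show accounting details for the job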

Interactive Jobs

To start an interactive session on one of the GPU nodes, use fisbatch:

module load cuda/8.0
fisbatch --partition=gpu -c <numprocs> --gres=gpu:<numgpus>

Replace <numprocs> with the number of CPU cores you need, and <numgpus> with the number of GPUs you need.
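
For example, to request an interactive session with four CPU cores and one GPU (illustrative values):

fisbatch --partition=gpu -c 4 --gres=gpu:1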

Then run the deviceQuery sample program to print details about the GPU, such as its memory size, CUDA core count, and supported features:

$ /apps2/cuda/8.0/samples/1_Utilities/deviceQuery/deviceQuery
/apps2/cuda/8.0/samples/1_Utilities/deviceQuery/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Tesla K40m"
  CUDA Driver Version / Runtime Version          8.0 / 8.0
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 11440 MBytes (11995578368 bytes)
  (15) Multiprocessors, (192) CUDA Cores/MP:     2880 CUDA Cores
  GPU Max Clock rate:                            745 MHz (0.75 GHz)
  Memory Clock rate:                             3004 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 3 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = Tesla K40m
Result = PASS
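
While a program is running on the node, you can also check GPU utilization and memory use with NVIDIA's standard monitoring tool:

nvidia-smi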

Then, exit your interactive session:

$ exit
[screen is terminating]
Connection to gpu01 closed.
FISBATCH -- exiting job