INCLINE contains 30 back-end computational nodes. Most are standard compute nodes designed for distributed-memory HPC workloads, but INCLINE also includes two high-memory nodes and two GPU nodes. Specifics are outlined in the table below:

Type        | Number of Nodes | CPU Type         | GPU Type       | CPU Cores / Threads | Memory | Partitions
COMPUTE     | 26              | 2x AMD EPYC 7662 | None           | 128C / 256T         | 256GB  | compute, compute-unlimited
HIGH MEMORY | 2               | 2x AMD EPYC 7662 | None           | 128C / 256T         | 2048GB | bigmem, bigmem-unlimited
GPU         | 2               | 2x AMD EPYC 7452 | 2x NVIDIA A100 | 64C / 128T          | 1024GB | gpu, gpu-unlimited
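
The partition names above suggest that the batch scheduler is Slurm, although this page does not state it explicitly. As a minimal sketch under that assumption, a job can confirm which partition and nodes it landed on by reading the standard Slurm environment variables:

```python
import os

# Minimal sketch: report where a batch job landed. Assumes the scheduler is
# Slurm (suggested by the partition names above, not stated on this page).
# These environment variables are only set inside a running Slurm job.
partition = os.environ.get("SLURM_JOB_PARTITION", "unknown")
nodelist = os.environ.get("SLURM_JOB_NODELIST", "unknown")
cpus_per_node = os.environ.get("SLURM_CPUS_ON_NODE", "unknown")

print(f"partition:     {partition}")
print(f"node list:     {nodelist}")
print(f"CPUs per node: {cpus_per_node}")
```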

Login Nodes

INCLINE contains 3 login nodes. 

Type  | Number of Nodes | CPU Type                                 | Memory                            | Access
LOGIN | 3               | 2x AMD EPYC 7262 (2.3 GHz, 8C/16T, 128M) | 256GB RDIMM, 3200 MT/s, Dual Rank | login.incline.uccs.edu, l003.incline.uccs.edu*
* Two login nodes (l001, l002) are for general use and are assigned on a round-robin basis to users logging in to login.incline.uccs.edu. The third login node (l003) is for privileged users only.

Users must remember that login node resources are limited and must be shared responsibly. Large parallel compilations and simulations should always be performed only on the computational nodes.
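
One simple safeguard is to have driver scripts refuse to start heavy work when they detect a login node. The sketch below is purely illustrative and assumes the short hostnames follow the l001-l003 pattern described above:

```python
import socket
import sys

# Minimal sketch: abort a heavy driver script if it is started on a login node.
# Assumes the short hostnames follow the l001-l003 pattern described above.
LOGIN_NODES = {"l001", "l002", "l003"}

short_host = socket.gethostname().split(".")[0]
if short_host in LOGIN_NODES:
    sys.exit(f"{short_host} is a login node; run this workload on the computational nodes instead.")

print(f"Running on {short_host}; starting the parallel workload.")
```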

File Systems 

Users on INCLINE have two primary file system locations: HSFS and HOME.

File System | Mount Location          | Storage Limit | Time Limit | Purpose
HOME        | /home/<username>        | 100GB         | Unlimited  | Storage of program files, reference data, etc.
HSFS        | /mmfs1/home/<username>  | Unlimited     | 30 days    | High-speed parallel storage for parallel I/O

INCLINE does not have persistent, long-term storage. Users are responsible for transferring their data off INCLINE to a suitable long-term storage location.

HSFS storage is technically unlimited in size, but after 30 days users' data may be deleted without warning.

HOME storage is not high-performance. Users running in parallel should make sure their I/O goes only to HSFS; otherwise, they will likely experience a substantial I/O bottleneck.
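
As an illustration, the sketch below writes job output under the HSFS mount listed in the table above rather than under HOME; the "results" subdirectory and output file name are hypothetical:

```python
import getpass
from pathlib import Path

# Minimal sketch: send heavy job output to the HSFS mount rather than HOME.
# The mount point matches the file system table above; the "results"
# subdirectory and the output file name are purely illustrative.
user = getpass.getuser()
scratch_dir = Path("/mmfs1/home") / user / "results"
scratch_dir.mkdir(parents=True, exist_ok=True)

output_file = scratch_dir / "run_0001.dat"
with output_file.open("w") as f:
    f.write("large simulation output goes here\n")

print(f"Wrote {output_file}")
```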

High Speed Interconnect

The network supporting the back-end computational nodes is a Mellanox InfiniBand fabric, which provides low-latency message passing between nodes.
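
Message passing on clusters like this is typically done with MPI. The sketch below is a minimal point-to-point example using mpi4py, assuming that package and an InfiniBand-aware MPI library are available on INCLINE (this page does not list the installed software):

```python
from mpi4py import MPI

# Minimal sketch of point-to-point message passing over the cluster
# interconnect. Assumes mpi4py and an InfiniBand-aware MPI library are
# available; run with at least two ranks.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send("ping", dest=1, tag=0)
    reply = comm.recv(source=1, tag=1)
    print(f"rank 0 received: {reply}")
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    comm.send(f"pong (in reply to {msg})", dest=0, tag=1)
```

Launch it with at least two ranks, for example: mpirun -n 2 python ping_pong.py (the script name is hypothetical).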