...
INCLINE contains 30 back-end computational nodes. The majority are compute nodes, designed for standard distributed-memory HPC workloads, but INCLINE also contains two high-memory nodes and two GPU nodes. Specifics are outlined in the table below:
Type | Number of Nodes | CPU Type | GPU Type | CPU Cores / Threads | Memory | Partitions |
---|---|---|---|---|---|---|
COMPUTE | 26 | 2x AMD EPYC 7662 | None | 128C / 256T | 256GB | compute, compute-unlimited |
HIGH MEMORY | 2 | 2x AMD EPYC 7662 | None | 128C / 256T | 2048GB | bigmem, bigmem-unlimited |
GPU | 2 | 2x AMD EPYC 7452 | 2x NVIDIA A100 | 64C / 128T | 1024GB | gpu, gpu-unlimited |
Login Nodes
INCLINE contains 3 login nodes.
Type | Number of Nodes | CPU Type | Memory | Access |
---|---|---|---|---|
LOGIN | 3 | 2x AMD EPYC 7262 | 256GB RDIMM, 3200 MT/s, Dual Rank | login.incline.uccs.edu |
* Two login nodes (l001, l002) are for general use and are assigned on a round-robin basis to users logging in to login.incline.uccs.edu. The third login node (l003) is for privileged users only.
Login node resources are limited and must be shared responsibly. Large parallel compilations and simulations should always be performed on the computational nodes only.
File Systems
Users on INCLINE have two primary file system locations: HSFS and HOME.
File System Type | Mount Location | Storage Limit | Time Limit | Purpose |
---|---|---|---|---|
HOME | /home/<username> | 100GB | Unlimited | Storage of program files, reference data, etc. |
HSFS | /mmfs1/home/<username> | Unlimited | 30 Days | High-speed parallel storage for parallel I/O. |
INCLINE does not provide persistent, long-term storage. Users are responsible for transferring their data off INCLINE to a suitable long-term storage location.
HSFS storage has no quota, but after 30 days users' data may be deleted without warning.
HOME storage is not high-performance. Users running parallel jobs should direct their I/O to HSFS only; otherwise they will likely experience a substantial I/O bottleneck.
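As an illustration of directing parallel I/O to HSFS, the sketch below uses MPI-IO to have every rank write its block of data to one shared file under the HSFS mount. It is a minimal example under assumptions, not a site-provided recipe: the file name `io_demo.dat`, the buffer size, and the `mpicc` build line are placeholders, and `<username>` in the path must be replaced with your own account name.

```c
/* Minimal MPI-IO sketch: every rank writes its own block to a single
 * shared file on HSFS.  Example build (MPI compiler wrapper assumed):
 *   mpicc -O2 hsfs_io_demo.c -o hsfs_io_demo
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1048576  /* doubles per rank (8 MiB); example size only */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Fill a local buffer with this rank's data. */
    double *buf = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++)
        buf[i] = (double)rank;

    /* Open one shared file on the HSFS mount from the table above.
     * Replace <username> and the file name with your own. */
    const char *path = "/mmfs1/home/<username>/io_demo.dat";
    MPI_File fh;
    if (MPI_File_open(MPI_COMM_WORLD, path,
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh) != MPI_SUCCESS) {
        if (rank == 0) fprintf(stderr, "could not open %s\n", path);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Each rank writes to its own, non-overlapping offset. */
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, N, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```

Pointing the same output at /home/<username> instead would funnel all ranks through the non-parallel HOME file system and recreate the bottleneck described above.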
High Speed Interconnect
The back-end computational nodes are connected by a Mellanox InfiniBand network, which provides support for low-latency message passing.
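For reference, the sketch below is a minimal MPI ping-pong between two ranks, i.e., the kind of point-to-point message passing the InfiniBand fabric accelerates. The compile and launch commands depend on the MPI stack and scheduler configured on INCLINE, so the `mpicc` line in the comment and the two-rank launch are assumptions.

```c
/* Minimal MPI ping-pong between ranks 0 and 1; reports the average
 * round-trip latency of a 1-byte message over the interconnect.
 * Example build (MPI compiler wrapper assumed):
 *   mpicc -O2 pingpong.c -o pingpong      (run with 2 ranks)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int iters = 1000;
    char byte = 0;

    if (size >= 2 && rank < 2) {
        int peer = 1 - rank;               /* rank 0 pairs with rank 1 */
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(&byte, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("average round-trip latency: %.2f us\n",
                   (t1 - t0) / iters * 1e6);
    }

    MPI_Finalize();
    return 0;
}
```

To exercise the InfiniBand network rather than intra-node shared memory, the two ranks should be placed on different computational nodes.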