System hardware
Lovelace, purchased in 2019, is the University's central High Performance Computing (HPC) cluster.
Lovelace offers:
- 58 general-purpose compute nodes each with 40 cores (dual Intel Xeon 6248, 2.5 GHz) and 192 GB of memory.
- 2 high-memory/visualisation nodes each with 40 cores (dual Intel Xeon 6248, 2.5 GHz), 1.5 TB of memory and a Quadro P6000 graphics card.
- 3 GPGPU computation nodes each with 40 cores (dual Intel Xeon 6248, 2.5 GHz), 768 GB of memory and two NVIDIA Tesla V100 32 GB GPUs. (Please note that the majority of the GPU capacity is already committed.)
- 900 TB of storage delivered via the GPFS high-performance filesystem.
- All nodes are connected by a 100 Gb/s InfiniBand (high-speed, low-latency) network.
As a description of Lovelace for grant applications we suggest: "The High Performance Computing service provides a research computing facility at the University with 63 compute nodes (2,520 compute cores), connected via high-speed InfiniBand networking and backed by 900 TB of storage. Its use represents an in-kind contribution by the University to this project."
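The headline figures in that description follow directly from the node list above. As a minimal sketch (node counts and per-node core counts taken from this page; the variable names are illustrative only):

```python
# Tally the headline figures quoted in the grant-application text:
# (type label, number of nodes, cores per node) taken from the hardware list above.
node_types = [
    ("general-purpose", 58, 40),
    ("high-memory/visualisation", 2, 40),
    ("GPGPU", 3, 40),
]

total_nodes = sum(count for _, count, _ in node_types)
total_cores = sum(count * cores for _, count, cores in node_types)

print(f"{total_nodes} compute nodes, {total_cores} compute cores")
# -> 63 compute nodes, 2520 compute cores
```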