Harpers Ferry (2025-)
=====================

Harpers Ferry is WVU's latest HPC cluster. It was deployed in October 2025
and was funded in part through a NASA Congressionally Directed Spending
award. The cluster has a total of 37 compute nodes and one login node, all
managed from a single master node.

Overview
--------

Harpers Ferry is a cluster with 37 compute nodes and one login node.

Hardware
--------

Harpers Ferry has 37 compute nodes. Two of the nodes have 3 TB of RAM each;
the other 35 compute nodes have 768 GB each.

+----------------+-----------------+-----------+--------------+----------------------------+
| Names of Nodes | Number of Nodes | CPU Cores | RAM per Node | Sockets, CPU Manufacturer, |
|                |                 | per Node  |              | and Processor Model        |
+================+=================+===========+==============+============================+
| hcocs[001-035] | 35              | 256       | 768 GB       | 2X AMD EPYC 9754 128-Core  |
+----------------+-----------------+-----------+--------------+----------------------------+
| hcocx[001-002] | 2               | 256       | 3072 GB      | 2X AMD EPYC 9754 128-Core  |
+----------------+-----------------+-----------+--------------+----------------------------+

Specifications of AMD EPYC 9754
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Harpers Ferry compute nodes were built with dual-socket 4th Generation
AMD EPYC™ processors. The following is a quick summary of some relevant
features of these processors. For more information, visit the official
product page at `AMD 9754`_.

+---------------------------------+------------------+
| **Name**                        | AMD EPYC™ 9754   |
+---------------------------------+------------------+
| **Family and Series**           | EPYC 9004 Series |
+---------------------------------+------------------+
| **# of CPU Cores**              | 128              |
+---------------------------------+------------------+
| **# of Threads**                | 256              |
+---------------------------------+------------------+
| **Max. Boost Clock**            | Up to 3.1 GHz    |
+---------------------------------+------------------+
| **All Core Boost Speed**        | 3.1 GHz          |
+---------------------------------+------------------+
| **Base Clock**                  | 2.25 GHz         |
+---------------------------------+------------------+
| **L3 Cache**                    | 256 MB           |
+---------------------------------+------------------+
| **Default TDP**                 | 360 W            |
+---------------------------------+------------------+
| **AMD Configurable TDP (cTDP)** | 320-400 W        |
+---------------------------------+------------------+
| **CPU Socket**                  | SP5              |
+---------------------------------+------------------+
| **Socket Count**                | 1P / 2P          |
+---------------------------------+------------------+
| **Launch Date**                 | 06/13/2023       |
+---------------------------------+------------------+

Software
--------

Workload Manager
~~~~~~~~~~~~~~~~

Harpers Ferry uses Slurm as its workload manager. There are five partitions
defined, and each partition has a default memory per CPU. The default
partition is `defq`, which allows jobs of up to 6 hours.
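The batch script below is a minimal sketch of a job submission under this
configuration, not a site-provided template; the job name, task count, and
executable are placeholders, and the module name is one of those listed in
the Scientific Software section.

.. code-block:: bash

   #!/bin/bash
   #SBATCH --job-name=my_job       # placeholder job name
   #SBATCH --partition=day_3gb     # any partition from the table below
   #SBATCH --time=12:00:00         # must not exceed the partition time limit
   #SBATCH --ntasks=64             # placeholder core count
   #SBATCH --mem-per-cpu=3G        # the partition default shown below
   #SBATCH --output=%x-%j.out      # job name (%x) and job ID (%j)

   # Load a compiler toolchain listed in the Scientific Software section
   module load gcc/13.1.0

   # Run the application (placeholder executable)
   srun ./my_application

Jobs submitted with `sbatch` that omit `--mem-per-cpu` simply fall back to
the partition's default memory per CPU.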
See the table below for more information about the defined partitions.

+-----------+------------+-----------------+----------------+----------------+
| Partition | Time Limit | Number of Nodes | Nodes in       | Default Memory |
|           |            |                 | Partition      | per CPU        |
+===========+============+=================+================+================+
| defq*     | 6 hours    | 35              | hcocs[001-035] | 3 GB           |
+-----------+------------+-----------------+----------------+----------------+
| day_3gb   | 1 day      | 35              | hcocs[001-035] | 3 GB           |
+-----------+------------+-----------------+----------------+----------------+
| day_12gb  | 1 day      | 2               | hcocx[001-002] | 12 GB          |
+-----------+------------+-----------------+----------------+----------------+
| week_3gb  | 1 week     | 35              | hcocs[001-035] | 3 GB           |
+-----------+------------+-----------------+----------------+----------------+
| week_12gb | 1 week     | 2               | hcocx[001-002] | 12 GB          |
+-----------+------------+-----------------+----------------+----------------+

Scientific Software
~~~~~~~~~~~~~~~~~~~

Harpers Ferry has a variety of compilers, numerical libraries, and
scientific software specifically compiled and optimized for the hardware
architecture.

+--------------+---------------+--------------+------------+-------------------+
|              | GNU Compiler  | GNU Compiler | NVIDIA     | Intel             |
|              | Collection    | Collection   | HPC SDK    | oneAPI            |
+==============+===============+==============+============+===================+
| Env. Module  | Host compiler | gcc/13.1.0   | nvhpc/26.1 | compiler/2025.3.2 |
+--------------+---------------+--------------+------------+-------------------+
| Version      | 11.5.0        | 13.1.0       | 26.1-0     | 2025.3.2          |
+--------------+---------------+--------------+------------+-------------------+
| C Compiler   | gcc           | gcc          | nvc        | icx               |
+--------------+---------------+--------------+------------+-------------------+
| C++ Compiler | g++           | g++          | nvc++      | icpx              |
+--------------+---------------+--------------+------------+-------------------+
| Fortran      | gfortran      | gfortran     | nvfortran  | ifx               |
+--------------+---------------+--------------+------------+-------------------+

Acknowledgment Message
----------------------

We ask our users to acknowledge the use of Harpers Ferry in all publications
made possible by this resource. The message in the acknowledgment section
could be as follows:

*Computational resources were provided by the WVU Research Computing Harpers
Ferry HPC cluster, which is funded in part through a NASA Congressionally
Directed Spending award.*

.. _AMD 9754: https://www.amd.com/en/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9754.html