Policies

Acknowledging Your Use of the WVU High-Performance Computing Environment

Maintaining a first-class HPC environment is expensive in terms of both human and material resources. The Research Office requests that you acknowledge your use of the campus clusters in papers or other publications of research that benefited from these campus resources. Such acknowledgments assist us in convincing people in our administration and funding agencies of the value of these resources to our community and help us obtain new funding for their continued maintenance and expansion.

To acknowledge your use of the clusters, we request that you use the following wording:

For Thorny Flat:

The authors acknowledge the computational resources provided by the WVU Research Computing Thorny Flat HPC cluster, partly funded by NSF OAC-1726534.

For Dolly Sods:

The authors acknowledge the computational resources provided by the WVU Research Computing Dolly Sods HPC cluster, which is funded in part by NSF OAC-2117575.

We would also appreciate it if you informed us of publications in which you acknowledge one of the clusters. In return, we will work to give visibility to your work and the computational efforts that made it possible. Contact us at helpdesk@hpc.wvu.edu and share the PDF or DOI of your publication.

Running Jobs on Login Node

Running jobs/tasks on the login node (i.e., the node you see immediately after you log in) is not permitted. When the RC admin team notices this occurring, the processes will be killed immediately to ensure system stability. If you are having trouble submitting jobs through the scheduler, please follow the instructions on Getting Help. We would be happy to guide you through the proper process of submitting jobs to the cluster using Slurm commands.
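For reference, a minimal Slurm batch script might look like the sketch below; the partition name, resource requests, and program are placeholders rather than cluster-specific defaults, so adjust them to match your workload and the partitions available to you:

    #!/bin/bash
    # Minimal Slurm batch script (sketch); the partition, resources, and
    # program below are placeholders.
    #SBATCH --job-name=my_analysis
    #SBATCH --partition=standby        # assumed partition name
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=4
    #SBATCH --time=01:00:00

    # Run the workload on a compute node rather than the login node.
    ./my_program input.dat > output.log

Save the script as, for example, job.sh, submit it with sbatch job.sh, and check its status with squeue -u $USER.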

Data Storage

Users of our HPC clusters typically store data in two locations, referred to generically as $HOME and $SCRATCH. $HOME is usually the folder /users/<username>, and $SCRATCH is /scratch/<username>. Storage limits, called quotas, are imposed on these folders to prevent a single user from exhausting the storage available on the cluster. You can use the quota command to see your quotas and how much storage you are currently using. On Thorny Flat, the quota is 10 GB on the $HOME folder and 20 TB on $SCRATCH. For information about where these directories are located and their storage quotas, please visit the Disk Storage page.
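As a quick sketch of how these pieces fit together (the paths are the generic ones described above):

    # Check your current usage against your quotas.
    quota

    # Keep large or temporary job data in your scratch space.
    cd $SCRATCH
    mkdir -p my_project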

Sharing User Directories

Initially, sharing user directories or scratch space is not permitted; this restriction exists for user-tracking purposes. However, we recognize that research is a collaborative effort, so we are attentive to requests to share resources across user spaces for temporary or fixed time limits to get jobs completed. Please follow the instructions on Getting Help to contact the WVU-RC team with these requests.

Users also have the option of using group storage to share files. A Principal Investigator (PI) may request up to 10 GB of group storage for free. Additional storage may be purchased if desired. For more information, please visit Persistent Group Storage.
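As an illustration only, files placed in group storage can be made readable by your collaborators through ordinary group permissions; the /group/<groupname> path below is a hypothetical example, so use the path WVU-RC assigns to your group:

    # Hypothetical group-storage path; substitute the one assigned to your group.
    cp results.csv /group/<groupname>/
    chmod g+r /group/<groupname>/results.csv   # readable by group members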

Getting Software Installed On the Cluster

Software needed on the cluster for research work can be installed at user request. However, we recommend that software, scripts, and programming libraries first be installed locally in user directories rather than system-wide, because system-wide installations of software and libraries not supported directly by Red Hat Enterprise Linux (RHEL) may break during system-wide updates. If the software or library has broad enough appeal (multiple users or multiple research teams), we can and will assist in installing a system-wide version. If you need assistance installing software or libraries locally in your user directory, please follow the instructions at Getting Help. The WVU-RC team will gladly assist.
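As a sketch of what a local, per-user installation can look like, using an illustrative Python package and an Autotools-based program as examples:

    # Install a Python package into your home directory instead of system-wide.
    pip install --user numpy

    # Build and install a source package under $HOME rather than /usr or /opt.
    ./configure --prefix=$HOME/software/mytool
    make && make install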

User Priority for Job Submission

Different priority defaults are assigned to your job submissions depending on the cluster you submit them on. On Thorny Flat and Dolly Sods, partition priority is set by fair-share scheduling: user priority is assigned based on a combination of job size and how many system resources you have used during the given week, with higher priority given to larger jobs and/or to users who have consumed fewer resources that week. If you would like different priority settings for your research team's queue, please follow the instructions at Getting Help so the WVU-RC team can attend to your request.
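Slurm provides standard commands for inspecting fair-share standing and job priorities; the sketch below assumes the multifactor priority plugin used for fair-share scheduling:

    # Your fair-share usage and normalized share.
    sshare -u $USER

    # The priority factors of your pending jobs.
    sprio -u $USER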

Database Management on Clusters

The current setup of the HPC clusters is for general HPC use: the compute nodes were set up with the idea that data will be transferred to the system, computed on, and then transferred off. Database management systems are not currently supported on the clusters because the compute nodes were never set up to handle the data storage they require. However, if your research team requires a managed database with significant storage needs and a database management system back-end (e.g., MySQL, Oracle), please follow the instructions at Getting Help to contact the WVU-RC team, and we can work out solutions to handle these data requests.

X11 Forwarding - Running Visualization Software

Non-compute-intensive processes such as visualization and quick data analysis can be run on the login node; these include Gnuplot, R, and MATLAB. However, if your visualization job requires processing data to generate plots and figures, it is best to run it through the scheduler in batch mode. Compute-intensive jobs, visualization or not, are not permitted to run on the login node. If you have any questions about the best way to accomplish your computing goal, please follow the instructions on Getting Help through the help desk, and we will provide any assistance needed to fulfill your requirements.
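For example, lightweight interactive visualization over X11 can be done by enabling forwarding at login; the address below is a placeholder for the login node of the cluster you use:

    # Enable X11 forwarding when connecting to the login node.
    ssh -Y <username>@<login-node-address>

    # Quick, non-intensive plotting is acceptable on the login node.
    gnuplot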