Allocations
Overview of CHPC Allocations for the General Environment HPC clusters
NOTE: If you are looking for information about obtaining an allocation on the redwood cluster in CHPC's Protected Environment (PE), please see our PE allocation page.
You may submit allocation requests through the CHPC Portal for allocations on clusters in the General Environment (granite and notchpeak). CHPC is actively migrating its web applications to the CHPC Portal, which uses CILogon for authentication. To log in, click the "Login" button on the CHPC Portal, choose "University of Utah" as the Identity Provider, and click "Log On" on the CILogon page. Please use the legacy system for allocations in the Protected Environment (redwood) and for all quick allocations.
CHPC awards allocations on the general CPU resources of the notchpeak cluster based on allocation request proposals. Allocations will be required for CPU and GPU resources on the granite cluster starting January 1, 2025. An allocation applies to all allocated clusters: an allocation in the General Environment can be used on both notchpeak and granite, and there is no need to apply separately for each cluster. Allocations are awarded to CHPC PIs, and access to the allocation is made available to all members of the PI's group. Jobs from groups with an allocation run at higher priority and can preempt jobs running without an allocation on CHPC-owned (general) resources. Note that the kingspeak and lonepeak general nodes are run in an unallocated manner and are available for use by all CHPC account holders.
In addition, the general GPU resources on notchpeak, kingspeak, and lonepeak are also run in an unallocated manner; for access to these resources, send a request to helpdesk@chpc.utah.edu asking to be given access to the GPU resources. Please note that the GPU resources should only be used for jobs that make use of the GPUs. On the granite cluster, jobs on general nodes with GPUs are charged GPU hours even if they do not use GPUs, so jobs that do not use GPUs should be run on CPU-only nodes.
All allocation requests must be submitted by a CHPC PI or designated delegate. To specify a delegate, the CHPC PI should email helpdesk@chpc.utah.edu with the name and uNID of the user they wish to have allocation delegate rights.
Other resources on the clusters are "owned" by other PIs, and your jobs will not run on those nodes at priority unless you are a member of that group. You may run on these "owner nodes" by changing the SBATCH directives for the account and partition, as described on the Slurm documentation page. Your job can then run on any idle owner nodes at low priority, but it can be preempted (killed) if a member of the owner's group submits a job.
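As a rough sketch, running as a guest on owner nodes only changes the account and partition directives in the job script; the names below are illustrative, and the actual guest account and partition for each cluster are listed on the Slurm documentation page:

    #SBATCH --account=owner-guest         # illustrative guest account name
    #SBATCH --partition=notchpeak-guest   # illustrative guest partition on the target cluster
    #SBATCH --ntasks=16
    #SBATCH --time=08:00:00

Because these jobs can be preempted whenever the owning group submits work, they are best suited to workloads that can checkpoint and restart.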
Allocations are measured in wall clock core hours or GPU hours: 1 hour of wall clock time for each core or GPU allocated, respectively. In other words, the unit of an allocation is the product of the number of resources consumed (cores or GPUs) and the time for which they were consumed. For more details, see our HPC Allocation Policy. Allocations are managed on a calendar-quarter basis, and any allocation remaining at the end of a given quarter is not carried over to the next quarter.
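As a worked example with arbitrary numbers: a job that runs on 64 cores for 10 hours of wall clock time is charged 64 × 10 = 640 core hours, and a job that holds 2 GPUs for 12 hours is charged 2 × 12 = 24 GPU hours, regardless of how heavily those cores or GPUs were actually used.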
There are two kinds of CPU allocation requests:
- Quick allocation request (1 quarter only, up to 30,000 wall clock core hours)
- Normal allocation request (submitted at most quarterly for up to 4 quarters at a time)
Allocations are not cluster-specific; they are shared across the allocated resources. That is, a single allocation is used for both notchpeak and granite.
Quick Allocation Process
Quick allocations are for PIs who are new to having a CHPC allocation. PIs may submit a quick allocation request for up to 30,000 wall clock core hours for the current calendar quarter at any time during the quarter. It is expected that, after gaining experience using our systems with a quick allocation, PIs will follow the normal allocation process (below). Quick allocations are reviewed by senior CHPC staff and awarded at CHPC's discretion.
Normal Allocation Process
You can submit a normal allocation request through the CHPC Portal. Requests for allocations are accepted 4 times per year, according to the following schedule:
- December 1st for allocations for the period of January 1 through March 31
- March 1st for allocations for the period of April 1 through June 30
- June 1st for allocations for the period of July 1 through September 30
- September 1st for allocations for the period of October 1 through December 31
The current maximum award is 300,000 core hours per quarter. Additionally, GPU allocations on granite are currently limited to 1,500 GPU hours per quarter, though this is subject to change. This is currently independent of the GPU type, though we are exploring the possibility of weighting time on GPUs by specifications such as the global memory or throughput.
PIs are asked to submit a request for the four upcoming calendar quarters. Your allocation request will then need to be renewed the following year by submitting another request. You may ask for fewer quarters, but if you expect to be using our systems long term we request that you submit a full year request. If you find your allocation amount needs to be adjusted, you can submit a new request to replace your existing request at any allocation cycle during the year.
Please fill out the allocation request form completely! The committee particularly needs to see your justification for the number of wall clock core hours you have requested. If you are new to the allocation process, feel free to take a look at this Sample Application. It is good to have a track record of using our systems via the quick allocation process and smaller allocations before you request a large allocation. Be sure to list your sources of funding, including granting agency and grant/contract numbers. It is also important to let us know of publications resulting from work supported by CHPC. This information is critical to us, as it helps us continue to support the computational research community. The more you succeed, the more we succeed. Our mission is to help you get your research done!
For more information, see the HPC Allocation Policy.