
Available Hardware Resources

Describes the available hardware resources at the NMASC and the NAASC.

NMASC (nmpost)

The NMASC provides a processing cluster, called the nmpost cluster, to support CASA and AIPS execution, and a Lustre filesystem for data storage.

Cluster

The nmpost cluster consists of 120 nodes.  Nodes 001-060 are available for batch or interactive processing.  They have no local storage; instead, each has a 40Gb/s QDR Infiniband connection to the NMASC Lustre filesystem.  These nodes support automatic EVLA pipeline processing, archive retrievals, batch processing requests, and interactive processing sessions.  Nodes 061-120 are reserved for the VLASS project.  They have local NVMe storage but no connection to the NMASC Lustre filesystem.

The operating system is Red Hat Enterprise Linux 7.  The cluster scheduling software is Slurm or HTCondor; Torque/Moab is deprecated.
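
As an illustration of how a job typically reaches these nodes, the sketch below writes a minimal Slurm batch script and submits it to the batch partition.  It is only a hedged example: the script name, resource figures, working directory, and CASA invocation are placeholders rather than site-recommended values, and it assumes the standard Slurm client tools (sinfo, sbatch) are on your PATH.

```python
#!/usr/bin/env python3
"""Sketch: submit a simple batch job to the nmpost Slurm 'batch' partition.

All file names, resource figures, and the CASA command line are placeholders.
"""
import subprocess
import textwrap

# A throwaway Slurm batch script; the partition name comes from the table below.
job_script = textwrap.dedent("""\
    #!/bin/sh
    #SBATCH --partition=batch
    #SBATCH --nodes=1
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=64G
    #SBATCH --time=24:00:00
    cd /lustre/aoc/path/to/your/project
    casa --nogui -c your_script.py
    """)

with open("casa_job.sh", "w") as fh:
    fh.write(job_script)

# Show which partitions are currently up, then hand the script to Slurm.
subprocess.run(["sinfo", "--summarize"], check=True)
subprocess.run(["sbatch", "casa_job.sh"], check=True)
```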

Nodes | CPU | Sockets | Cores per Socket | Clock | CPU Mark | RAM | OS | Local Storage | GPU(s) | Interconnect | Partitions | Scheduler
nmpost{001..005} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | batch | Slurm
nmpost{006..010} | E5-2630v3 | 2 | 8 | 2.4GHz | 17,041 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | interactive, batch | Slurm
nmpost011 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | batch | HTCondor
nmpost{012..015} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | batch | Torque
nmpost{016..020} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | batch | HTCondor
nmpost{021..024} | E5-2640v4 | 2 | 10 | 2.4GHz | 20,394 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | interactive, batch | Slurm
nmpost025 | E5-2640v4 | 2 | 10 | 2.4GHz | 20,394 | 498GiB | RHEL8 | NA | NA | 40Gb/s QDR Infiniband | rhel8 | Slurm
nmpost{026..032} | E5-2640v4 | 2 | 10 | 2.4GHz | 20,394 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | interactive, batch | Slurm
nmpost033 | E5-2640v4 | 2 | 10 | 2.4GHz | 20,394 | 498GiB | RHEL8 | NA | NA | 40Gb/s QDR Infiniband | rhel8 | HTCondor
nmpost{034..036} | E5-2640v4 | 2 | 10 | 2.4GHz | 20,394 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | interactive, batch | Slurm
nmpost037 | E5-2640v4 | 2 | 10 | 2.4GHz | 20,394 | 498GiB | RHEL8 | NA | Nvidia L4 | 40Gb/s QDR Infiniband | rhel8 | HTCondor
nmpost{038..039} | E5-2640v4 | 2 | 10 | 2.4GHz | 20,394 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | NA | NA
nmpost040 | E5-2640v4 | 2 | 10 | 2.4GHz | 20,394 | 498GiB | RHEL8 | NA | Nvidia L4 | 40Gb/s QDR Infiniband | rhel8 | Slurm
nmpost041 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | NA | NA
nmpost{042..045} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | batch | Slurm
nmpost{046..050} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | interactive, batch | Slurm
nmpost051 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | NA | NA | 40Gb/s QDR Infiniband | rhel8 | Slurm
nmpost{052..056} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | interactive, batch | Slurm
nmpost{057..060} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL7 | NA | NA | 40Gb/s QDR Infiniband | NA | NA
nmpost{061..086} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 750GiB | RHEL7 | 5.8TB NVMe | NA | 40Gb/s QDR Infiniband | vlass | HTCondor
nmpost{087..089} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 750GiB | RHEL8 | 5.8TB NVMe | Tesla T4 | 40Gb/s QDR Infiniband | nvlga | Slurm
nmpost090 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 750GiB | RHEL8 | 5.8TB NVMe | Nvidia L4 | 40Gb/s QDR Infiniband | nvlga | Slurm
nmpost{091..120} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 750GiB | RHEL7 | 3.5TB NVMe | NA | NA | vlass | HTCondor

CPU Marks are relative figures indicating performance.  For example, a node with a CPU Mark of 4000 can theoretically process roughly twice as fast as a node with a CPU Mark of 2000.  CPU Marks are not a measure of FLOPS.

RAM is given in gibibytes (base-2 units) because both Slurm and HTCondor use base-2 units for their memory requests.  We also reserve 4GiB of RAM for the operating system, so the RAM column represents the maximum memory you can request on each node.
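
To make both notes concrete, the short sketch below (plain Python, using figures from the table above) shows how CPU Marks translate into an expected speed ratio and how the RAM column bounds a memory request; the node types and the 256GiB request are arbitrary examples.

```python
# CPU Marks are relative, so the expected speed ratio of two nodes is simply
# the ratio of their CPU Marks (figures taken from the table above).
gold_6136_mark = 33_555    # e.g. nmpost{001..005}
e5_2630v3_mark = 17_041    # e.g. nmpost{006..010}
print(f"Expected speedup: ~{gold_6136_mark / e5_2630v3_mark:.1f}x")  # ~2.0x

# The RAM column already has the 4GiB operating-system reserve subtracted,
# so it is the ceiling for any memory request on that node type.
node_ram_gib = 498    # RAM column for most nmpost nodes
requested_gib = 256   # arbitrary example request
assert requested_gib <= node_ram_gib, "request exceeds what the node offers"
```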

Lustre Filesystem

Lustre is a distributed parallel filesystem commonly used in HPC environments.  Each cluster node sees the same shared filesystem.  The NMASC Lustre filesystem is made up of ten storage servers each with four RAID arrays (40 total arrays) which are each 44TB in size.  The total storage volume is 1.8PB.  Individual cluster nodes can read and write to the Lustre filesystem in excess of 1GByte/sec.  The entire filesystem can sustain roughly 15GB/s aggregate I/O.

The Lustre filesystem appears as /lustre/aoc on all Lustre enabled client computers.
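
If you want to confirm from a client machine that the filesystem is mounted and see roughly how full it is, a standard-library check like the one below is enough; the mount point is the /lustre/aoc path above, and Lustre's own lfs df utility gives a per-target breakdown if more detail is needed.

```python
#!/usr/bin/env python3
"""Sketch: confirm the NMASC Lustre mount and report its overall usage."""
import os
import shutil

LUSTRE_ROOT = "/lustre/aoc"   # mount point on Lustre-enabled clients (see above)

if not os.path.ismount(LUSTRE_ROOT):
    raise SystemExit(f"{LUSTRE_ROOT} is not mounted on this host")

usage = shutil.disk_usage(LUSTRE_ROOT)
tib = 1024 ** 4
print(f"total {usage.total / tib:.1f} TiB, "
      f"used {usage.used / tib:.1f} TiB, "
      f"free {usage.free / tib:.1f} TiB")
```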

Public Workstations

The NMASC has nine workstations for local visitors.  The systems vary in processor, memory and storage since work is expected to be done mostly on the compute cluster, but all have 10Gb/s ethernet connections to the Lustre filesystem.  Instructions for reserving workstations can be found in the Computing section of the Visiting the DSOC web page.

NAASC (cvpost)

The NAASC provides a processing cluster, called the cvpost cluster, to support CASA execution, and a Lustre filesystem for data storage.


Cluster

The cvpost cluster consists of a number of nodes, most of which are available to users for either batch or interactive processing.  Some nodes are reserved for large programs.  None of the nodes have local storage; instead, each node has a 40Gb/s QDR Infiniband connection to the NAASC Lustre filesystem.  The cluster supports automatic ALMA pipeline processing, archive retrievals, batch processing requests, and interactive processing sessions.

The operating system is being upgraded from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.  The cluster scheduling software is Slurm or HTCondor; Torque/Moab is deprecated.
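
For the HTCondor-scheduled nodes, work is described in a submit file rather than a batch script.  The sketch below writes a minimal, hypothetical submit description and hands it to condor_submit; the executable, resource figures, and file names are placeholders rather than NAASC-recommended values, and it assumes the standard HTCondor client tools are on your PATH.

```python
#!/usr/bin/env python3
"""Sketch: submit a job to an HTCondor-scheduled cvpost node.

Every file name and resource figure below is a placeholder.
"""
import subprocess
import textwrap

submit_description = textwrap.dedent("""\
    # Placeholder wrapper script and arguments
    executable      = run_casa.sh
    arguments       = your_script.py
    # Keep the memory request within the node's RAM column
    request_cpus    = 8
    request_memory  = 64 GB
    output          = casa_$(Cluster).out
    error           = casa_$(Cluster).err
    log             = casa_$(Cluster).log
    queue
    """)

with open("casa.submit", "w") as fh:
    fh.write(submit_description)

subprocess.run(["condor_submit", "casa.submit"], check=True)
subprocess.run(["condor_q"], check=True)   # show the job in the local queue
```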

Nodes | CPU | Sockets | Cores per Socket | Clock | CPU Mark | RAM | OS | Interconnect | Partition | Scheduler
cvpost001 | E5-2670 | 2 | 8 | 2.6GHz | 15,695 | 247GiB | RHEL8 | 40Gb/s QDR Infiniband | batch | Slurm
cvpost{002..004} | E5-2640v3 | 2 | 8 | 2.6GHz | 18,094 | 247GiB | RHEL8 | 40Gb/s QDR Infiniband | batch | Slurm
cvpost005 | E5-2670 | 2 | 8 | 2.6GHz | 15,695 | 247GiB | RHEL8 | 40Gb/s QDR Infiniband | batch | Slurm
cvpost{006..007} | E5-2640v3 | 2 | 8 | 2.6GHz | 18,094 | 247GiB | RHEL8 | 40Gb/s QDR Infiniband | batch | Slurm
cvpost{011..013} | E5-2670 | 2 | 8 | 2.6GHz | 15,695 | 247GiB | RHEL8 | 40Gb/s QDR Infiniband | batch | Slurm
cvpost{018..023} | E5-2640v3 | 2 | 8 | 2.6GHz | 18,094 | 247GiB | RHEL8 | 40Gb/s QDR Infiniband | interactive, batch | Slurm
cvpost{027..030} | E5-2630v3 | 2 | 8 | 2.4GHz | 10,271 | 247GiB | RHEL7 | 40Gb/s QDR Infiniband | NA | NA
cvpost{101..108} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | interactive2, batch2 | Slurm
cvpost109 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | NA | NA
cvpost{110..116} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | interactive2, batch2 | Slurm
cvpost117 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | NA | NA
cvpost118 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | interactive2, batch2 | Slurm
cvpost119 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | CVPOST | HTCondor
cvpost{120..122} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | NA | NA
cvpost123 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | interactive2, batch2 | Slurm
cvpost{124..127} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | plwg, plwg-i | Slurm
cvpost128 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | NA | NA
cvpost{129..133} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | plwg, plwg-i | Slurm
cvpost{134..135} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | NA | NA
cvpost{136..137} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 750GiB | RHEL8 | 40Gb/s QDR Infiniband | plwg, plwg-i | Slurm
cvpost{138..139} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | batch2 | Torque
cvpost140 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | NA | NA
cvpost{141..142} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | batch2 | Slurm
cvpost143 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | batch2 | Torque
cvpost144 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | interactive2, batch2 | Slurm
cvpost145 | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL7 | 40Gb/s QDR Infiniband | NA | NA
cvpost{146..147} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | NA | NA
cvpost{148..150} | Gold 6136 | 2 | 12 | 3.0GHz | 33,555 | 498GiB | RHEL8 | 40Gb/s QDR Infiniband | interactive2, batch2 | Slurm

CPU Marks are relative figures indicating performance.  For example, a node with a CPU Mark of 4000 can theoretically process roughly twice as fast as a node with a CPU Mark of 2000.  CPU Marks are not a measure of FLOPS.

RAM is given in gibibytes (base-2 units) because both Slurm and HTCondor use base-2 units for their memory requests.  We also reserve 4GiB of RAM for the operating system, so the RAM column represents the maximum memory you can request on each node.

Lustre Filesystem

Lustre is a distributed parallel filesystem commonly used in HPC environments.  Each cluster node sees the same shared filesystem.  There are two Lustre filesystems available at the NAASC.  First, the NAASC Lustre filesystem is scratch space, used for operational and pipeline functions.  It is made up of six storage servers each with four RAID arrays (24 total arrays), each 64TB in size.  An additional two servers each provide four RAID arrays, each of ~190TB.  The total storage volume is 3.0PB.  Individual nodes can read and write to the Lustre filesystem in excess of 1GByte/sec; the entire filesystem can sustain roughly 10GB/s aggregate I/O.

The Lustre filesystem appears as /lustre/naasc on all Lustre enabled client computers.

A separate, long-term storage system is provided by CV Lustre.  This storage, while intended mostly for staff use, is also leveraged for guest ("observer") accounts.  It has four servers, each providing four RAID arrays of 128TB apiece, and one additional server with 4 x 192TB arrays.  Total capacity is 2.8PB.
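
As a quick sanity check, the quoted capacities follow from the stated array counts and sizes; the arithmetic below simply restates them (treating the ~190TB figure as exact, so the results land slightly above the rounded totals quoted on this page).

```python
# NAASC (scratch) Lustre: 6 servers x 4 arrays x 64TB plus 2 servers x 4 arrays x ~190TB
naasc_tb = 6 * 4 * 64 + 2 * 4 * 190
print(f"NAASC Lustre: ~{naasc_tb / 1000:.2f} PB")  # ~3.06 PB, quoted as 3.0PB

# CV (long-term) Lustre: 4 servers x 4 arrays x 128TB plus 1 server x 4 arrays x 192TB
cv_tb = 4 * 4 * 128 + 1 * 4 * 192
print(f"CV Lustre:    ~{cv_tb / 1000:.2f} PB")     # ~2.82 PB, quoted as 2.8PB
```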


Public Workstations

The NAASC has five workstations for local visitors.  The systems have 8 x Intel E5-1660 Xeon 3.0 GHz processors, 32 GB RAM, 3 TB local disk space (accessible as /home/<hostname>_1 and /home/<hostname>_2) and a 10Gb/s ethernet connection to the Lustre filesystem.
