High End Computing

Introduction

The High End Computing (HEC) Cluster is a centrally run service to support researchers and research students at Lancaster who require high performance computing. This includes workloads with requirements that cannot be met by the IUS or desktop PCs.

The combined facility offers 2200 CPU cores, 11TB of memory, 32TB of high performance filestore and 1PB of medium performance filestore. The service brings together what were previously separately supported services for local high performance computing (HPC) users and the local Particle Physics research group (GridPP).

The cluster operating system is Scientific Linux, with job submission handled by Sun Grid Engine (SGE). The service supports a wide variety of third-party software including numerical packages, libraries and C and Fortran compilers.

How it works

The HEC has three basic components: a login node, where users log in to submit jobs; the compute nodes, which run those jobs; and dedicated file systems, which share user and other files across the cluster.

From the login node, users create a batch job script, written in a format similar to a Unix shell script, which describes the tasks their job(s) are to perform. The batch job is then submitted to the SGE job scheduler, which allocates user jobs to free compute nodes. Job submissions can be supplemented with additional information, such as requests for a specific amount of memory (for large memory jobs) or for multiple slots (in the case of parallel jobs).
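
As an illustration, a minimal batch job script might look like the sketch below. The job name, program name, memory resource name and parallel environment name are assumptions for illustration only; the directives actually available depend on how SGE is configured on the HEC.

    #!/bin/bash
    # Minimal SGE batch job script (illustrative sketch only).
    #$ -N example_job          # job name (hypothetical)
    #$ -cwd                    # run the job from the submission directory
    #$ -l h_vmem=4G            # request 4GB of memory per slot (assumed resource name)
    #$ -pe mpi 8               # request 8 slots in an assumed "mpi" parallel environment

    ./my_program input.dat     # the task the job is to perform (hypothetical program)

The script is submitted from the login node with qsub (for example, qsub myjob.sh) and its progress can then be monitored with qstat.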

Hardware

Login node: The login node is a dual-socket hexa-core Intel Nehalem server with 48GB of memory.

Compute nodes: The compute nodes consist of Viglen HX420Ti chassis, each housing four servers, for a total of 262 compute nodes. Compute nodes are dual-socket quad-core or hexa-core. The standard memory size per compute node is 24GB, with a few nodes offering 96GB in order to support very large memory jobs. A further 18 nodes support GPU computing, each offering two NVIDIA Tesla M2075 cards.

File store: The primary file storage system is a 32TB Panasas Activescale Series 8 Storage Cluster. A series of Viglen HS424i storage nodes acts as a secondary file system, providing medium-performance filestore for the local GridPP initiative.

Software

A number of statistical and numerical packages and libraries are installed in addition to Fortran 90, C and C++ compilers.
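
As a rough illustration (the exact compiler commands, versions and any environment module setup depend on the toolchains installed on the HEC, so the names below are assumptions), a small Fortran or C program could be compiled on the login node along these lines:

    # Illustrative compile commands; gfortran and gcc are assumed compiler
    # names and may differ from the compilers actually installed on the HEC.
    gfortran -O2 -o my_model my_model.f90
    gcc -O2 -o my_analysis my_analysis.c -lm

The resulting executable is then run on the compute nodes by invoking it from a batch job script, as in the example above.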

Key information

  • Available to: Staff, postgraduate researchers
  • Availability:

    24 hours a day, 7 days a week

    Mon - Fri (excl. holidays) 09:00 to 17:00 for account requests
