DiRAC Services support a significant portion of STFC’s science programme, providing simulation and data modelling resources for the UK Frontier Science theory community in particle physics, astroparticle physics, astrophysics, cosmology, solar system and planetary science, and nuclear physics (PPAN; collectively STFC Frontier Science). DiRAC services are optimised for these research communities and operate as a single distributed facility which provides the range of architectures needed to deliver our world-leading science outcomes.

We have three Services: Data Intensive, Memory Intensive and Extreme Scaling, with our machines hosted at four university sites across the UK: Cambridge, Durham, Edinburgh and Leicester.

Information on how to apply for time on our Services can be found here, and details of how our Services map onto our science agenda can be found here. The DiRAC Data Management Plan is available for download here.

For general enquiries please email DiRAC Support or the Project Office.


Data Intensive Service

The Data Intensive Service is jointly hosted by the Universities of Cambridge and Leicester.

Data Intensive@Cambridge

DiRAC has a 13% share of the CSD3 petascale HPC platform (Peta4 & Wilkes2), hosted at Cambridge University.

Peta4
The Peta4 system provides 1.5 petaflops of compute capability:

  • 342-node Intel KNL cluster (C6320p nodes with Intel Xeon Phi 7210 @ 1.30GHz) with 96GB of RAM per node.
  • 768 Skylake nodes, each with 2 x Intel Xeon Skylake 6142 processors, 2.6GHz, 16-core (32 cores per node):
    • 384 nodes with 192 GB memory
    • 384 nodes with 384 GB memory
  • The HPC interconnect is Intel Omni-Path in a 2:1 blocking configuration.
  • The storage consists of 750 TB of disk offering a Lustre parallel filesystem, plus 750 GB of tape.


At 1.697 PFlops, the CSD3 Peta4 CPU/KNL cluster is ranked 75th in the November 2017 Top500 list of the world’s most powerful commercially available computer systems.
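For a rough sense of scale, the sketch below tallies the cores and node memory in the Peta4 configuration listed above. The 64-cores-per-node figure for the KNL partition is an assumption (it is the usual core count for the Xeon Phi 7210 but is not stated above), and expressing the 13% DiRAC share as core-equivalents is purely illustrative, since allocations may be defined in other units such as node-hours.

    # Back-of-envelope totals for the Peta4 configuration listed above.
    # Assumption (not stated in the text): each Xeon Phi 7210 (KNL) node has 64 cores.
    knl_nodes, knl_cores_per_node, knl_mem_gb = 342, 64, 96
    sky_nodes, sky_cores_per_node = 768, 32
    sky_mem_gb = 384 * [192] + 384 * [384]   # half the Skylake nodes have 192 GB, half 384 GB

    total_cores = knl_nodes * knl_cores_per_node + sky_nodes * sky_cores_per_node
    total_mem_tb = (knl_nodes * knl_mem_gb + sum(sky_mem_gb)) / 1000

    print(f"Total cores:       {total_cores}")          # 46464
    print(f"Total node memory: {total_mem_tb:.0f} TB")  # ~254 TB
    # Illustrative only: the 13% share may not map directly onto cores.
    print(f"13% DiRAC share:   ~{0.13 * total_cores:.0f} core-equivalents")

Note that the Top500 ranking quoted above is based on a measured Linpack run rather than on raw hardware totals such as these.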

Wilkes2
The Wilkes2 system provides 1.19 petaflops of compute capability:

  • A cluster of 360 NVIDIA Tesla P100 GPUs, four per node across 90 Dell EMC server nodes, each node with 96GB of memory, connected by Mellanox EDR InfiniBand and providing 1.19 petaflops of computational performance.

For more information, email Cambridge Support.


Data Intensive@Leicester


Data Intensive 2.5x

The DI system has two login nodes, a Mellanox EDR interconnect in a 2:1 blocking configuration and 3PB of Lustre storage.

Main Cluster

  • 136 dual-socket nodes with Intel Xeon Skylake 6140 processors (two FMA AVX512 units, 2.3GHz), 36 cores and 192 GB of RAM per node; 4896 cores in total.

Large-Memory

  • 1 x 6TB server with 144 Intel Xeon 6154 cores @ 3.0GHz base
  • 3 x 1.5TB servers, each with 36 Intel Xeon 6140 cores @ 2.3GHz base

The DI System at Leicester is designed to offer fast, responsive I/O.
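
As a rough guide to how the large-memory servers differ from the main cluster, the sketch below works out memory per core for each node type using only the figures listed above (with 1 TB taken as 1000 GB purely for this comparison).

    # Memory per core for the Leicester Data Intensive node types listed above.
    node_types = {
        "Main cluster node (36 cores, 192 GB)":    (36, 192),
        "Large-memory server (144 cores, 6 TB)":   (144, 6000),
        "Large-memory servers (36 cores, 1.5 TB)": (36, 1500),
    }

    for name, (cores, mem_gb) in node_types.items():
        print(f"{name}: {mem_gb / cores:.1f} GB per core")
    # Main cluster: ~5.3 GB per core; large-memory servers: ~41.7 GB per core

In other words, the large-memory servers provide roughly eight times the memory per core of the standard compute nodes.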

Data Intensive 2 (formerly “Complexity”)
  • 272 Intel Xeon Sandy Bridge nodes with 128 GB of RAM per node and 4352 cores in total (95 Tflop/s), connected via a non-blocking Mellanox FDR interconnect; a quick consistency check of these figures is sketched after this list.

  • This cluster features an innovative switching architecture designed, built and delivered by the University of Leicester and Hewlett Packard.
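
As a quick, illustrative consistency check, the sketch below derives per-node and per-core figures purely from the Data Intensive 2 numbers quoted above; the resulting ~22 Gflop/s per core is in line with what AVX-capable Sandy Bridge-era cores deliver.

    # Per-node and per-core figures implied by the Data Intensive 2 numbers above.
    nodes, total_cores = 272, 4352
    mem_per_node_gb = 128
    peak_tflops = 95

    cores_per_node = total_cores / nodes                  # 16 cores per node
    mem_per_core_gb = mem_per_node_gb / cores_per_node    # 8 GB per core
    gflops_per_core = peak_tflops * 1000 / total_cores    # ~21.8 Gflop/s per core

    print(f"Cores per node:  {cores_per_node:.0f}")
    print(f"Memory per core: {mem_per_core_gb:.0f} GB")
    print(f"Per-core peak:   {gflops_per_core:.1f} Gflop/s")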

The total storage available to both systems is in excess of 1PB.

Further information is available on the web page or by emailing Leicester support.


Memory Intensive Service

The Memory Intensive Service is hosted by the University of Durham at the Institute for Computational Cosmology (ICC).

Memory Intensive 2.5x

  • 2 x 1.5TB login nodes with Intel Xeon Skylake 5120 processors (one FMA AVX512 unit, 2.2GHz, 28 cores per node)

  • 147 compute nodes, each with 768 GB of RAM and 2 x Intel Xeon Skylake 5120 processors at 2.2GHz, offering a total of 4116 cores (the cluster-wide totals these imply are sketched after this list).

  • The system is connected via Mellanox EDR in a 2:1 blocking configuration, with 333TB of fast I/O scratch space and 1PB of data space on Lustre.
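
Taken together, the node specifications above give the headline totals for this cluster; the sketch below tallies them using only the quoted figures (decimal units, for illustration).

    # Aggregate cores and memory implied by the Memory Intensive 2.5x node specs above.
    compute_nodes = 147
    cores_per_node = 28            # 2 x Xeon Skylake 5120 per node, 14 cores per socket
    mem_per_node_gb = 768

    total_cores = compute_nodes * cores_per_node             # 4116, matching the figure above
    total_mem_tb = compute_nodes * mem_per_node_gb / 1000    # ~113 TB across the cluster
    mem_per_core_gb = mem_per_node_gb / cores_per_node       # ~27 GB per core

    print(f"Total cores:     {total_cores}")
    print(f"Total memory:    {total_mem_tb:.0f} TB")
    print(f"Memory per core: {mem_per_core_gb:.1f} GB")

At roughly 27 GB per core, these nodes carry several times the per-core memory of the standard compute nodes on the other DiRAC services described on this page, which is what makes the Service suitable for memory-hungry workloads.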

Memory Intensive 2 (Formerly “Data Centric”)

  • 11,000 cores across the two clusters COSMA5 and COSMA6. Each node has 128GB of memory, and the nodes are connected via a Mellanox FDR10 InfiniBand fabric in a 2:1 blocking configuration.

  • The InfiniBand fabric connects COSMA5 to a GPFS filesystem and COSMA6 to a Lustre filesystem, with I/O performance for both of 10-11GB/s write and 5-6GB/s read.

 

More information on the Memory Intensive 2 system can be found here, and further enquiries about the Memory Intensive Service can be emailed to ICC Support.


Extreme Scaling Service

The Extreme Scaling Service is hosted by the University of Edinburgh. DiRAC Extreme Scaling (also known as Tesseract) is available to industry, commerce and academic researchers. General information on Tesseract, as well as the User Guide, is available here.

  • 844 nodes, each with two Intel Xeon Skylake 4116 processors (12 cores per socket, FMA AVX512, 2.2GHz base, 3.0GHz turbo) and 96GB of RAM; core and memory totals are sketched after this list.

  • 2.4PB of Lustre storage and a hypercube-topology Intel Omni-Path (OPA) interconnect.

  • This system is configured for codes with good to excellent strong scaling and vectorisation, and offers high-performance I/O and interconnect.
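
For a rough sense of scale, the sketch below tallies cores and memory from the node configuration listed above (decimal units, illustrative only).

    # Core and memory totals implied by the Extreme Scaling (Tesseract) node specs above.
    nodes = 844
    sockets_per_node = 2
    cores_per_socket = 12
    mem_per_node_gb = 96

    total_cores = nodes * sockets_per_node * cores_per_socket   # 20256 cores
    total_mem_tb = nodes * mem_per_node_gb / 1000               # ~81 TB

    print(f"Total cores:  {total_cores}")
    print(f"Total memory: {total_mem_tb:.0f} TB")

At about 4 GB per core, the emphasis here is on core count and interconnect performance rather than memory capacity, in line with the strong-scaling focus noted above.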

Further information on the Extreme Scaling Service is available by emailing DiRAC Support.


Our Services Supporting our Science

DiRAC operates within a framework of well-established science cases which have been fully peer reviewed to deliver a transformative research programme aimed at creating novel and improved computing techniques and facilities. We tailor our Services’ architectures towards solving these science problems and, by doing so, help underpin research covering the full remit of STFC’s astronomy, particle, nuclear and accelerator physics Science Challenges. Some brief illustrations of how our Services map onto our Science Agenda can be found below; for more information please email the Project Office.

The Data Intensive Service addresses the problems associated with driving scientific discovery through the analysis of large data sets using a combination of modelling and simulation, e.g. the large-volume data sets from flagship astronomical satellites such as Planck and Gaia, and ground-based facilities such as the Square Kilometre Array (SKA).  One project using the Data Intensive Service is looking at breaking resonances between migrating planets.

The Memory Intensive Service supports detailed and complex simulations related to computational fluid dynamics problems, for example cosmological simulations of galaxy formation and evolution, which require access to very large amounts of memory (more than 300 terabytes) to enable codes to ‘follow’ structures as they form. The innovative design of this Service supports physically detailed simulations which can use an entire DiRAC machine for weeks or months at a time. More on the Virgo project, which uses the Memory Intensive Service, can be found here.

The Extreme Scaling Service supports codes that make full use of multi-petaflop HPC systems. DiRAC works with industry on the design of systems using Lattice QCD in theoretical particle physics as a driver.   This field of physics provides theoretical input on the properties of hadrons to assist with the interpretation of data from experiments such as the Large Hadron Collider. To find out more about one of the Lattice QCD projects using the Extreme Scaling Service see the 2017 Science Highlights page.


The DiRAC Data Management Plan can be found here.