DiRAC Services support a significant portion of STFC’s science programme, providing simulation and data modelling resources for the UK Frontier Science theory community in particle physics, astroparticle physics, astrophysics, cosmology, solar system and planetary science, and nuclear physics (PPAN; collectively, STFC Frontier Science). DiRAC Services are optimised for these research communities and operate as a single distributed facility which provides the range of architectures needed to deliver our world-leading science outcomes.

Based at four university sites (Cambridge, Leicester, Durham and Edinburgh), we host three Services: Data Intensive (at Cambridge and Leicester), Memory Intensive and Extreme Scaling.

Information on how to apply for time on our Services can be found here, and details of how our Services map onto our science agenda can be found here. The DiRAC Data Management Plan is available for download here.

For general enquiries, please email DiRAC Support or the Project Office.


Data Intensive Service

The Data Intensive Service is jointly hosted by the Universities of Cambridge and Leicester.

Data Intensive@Cambridge

DiRAC has a share of the CSD3 petascale HPC platform (Cumulus & Wilkes2), hosted at the University of Cambridge.

Cumulus 

The Cumulus system provides a total of 2.27 petaflops of compute capability, consisting of:

  • 1,152 Skylake nodes, each with 2 x Intel Xeon Skylake 6142 processors (2.6 GHz, 16 cores; 32 cores per node):
      • 768 nodes with 192 GB of memory
      • 384 nodes with 384 GB of memory
  • 342 C6320p nodes forming an Intel KNL cluster (Intel Xeon Phi 7210 @ 1.30 GHz), each with 96 GB of RAM
  • The HPC interconnect is Intel Omni-Path in a 2:1 blocking configuration
  • The DiRAC share is 15,482 Skylake CPUs and 44 KNL nodes

With 2.2714 PFlops, the Cumulus CPU/KNL cluster sits at position 87 in the November 2018 Top500 list of the 500 most powerful commercially available computer systems.
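
For context, a rough theoretical-peak estimate for the Skylake partition alone can be derived from the node counts listed above, assuming the nominal 2.6 GHz clock and 32 double-precision FLOPs per core per cycle (two AVX-512 FMA units on the Xeon 6142):

\[
  1152\ \text{nodes} \times 32\ \text{cores} \times 2.6\ \text{GHz} \times 32\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 3.1\ \text{PFlops}
\]

The 2.2714 PFlops Top500 figure above is the measured HPL (Rmax) result for the combined Skylake/KNL system, which, as is typical, falls below the theoretical hardware peak, not least because sustained AVX-512 code runs below the nominal base clock.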

Wilkes2

The Wilkes2 system provides 1.19 petaflops of compute capability:

  • A 360-GPU NVIDIA cluster: 90 Dell EMC server nodes, each with four NVIDIA Tesla P100 GPUs and 96 GB of memory, connected by Mellanox EDR InfiniBand (a rough peak estimate is given below)
  • The DiRAC share of Wilkes2 is 46 GPUs

  • Storage available to DiRAC consists of 1.53 PB on a Lustre parallel filesystem and 750 GB of tape.
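
Returning to the Wilkes2 figures above: a rough peak estimate, assuming the NVLink (SXM2) variant of the Tesla P100 at about 5.3 TFlops of double-precision peak per card (an assumption about the card variant, which is not stated above), suggests the quoted 1.19 petaflops is a delivered, HPL-style figure rather than the raw hardware peak:

\[
  360\ \text{GPUs} \times 5.3\ \text{TFlops} \approx 1.9\ \text{PFlops peak}
\]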

For more information, email Cambridge Support.


Data Intensive@Leicester


Data Intensive 2.5x

The DI system has two login nodes, a Mellanox EDR interconnect in a 2:1 blocking configuration and 3 PB of Lustre storage.

Main Cluster

  • 408 dual-socket nodes with Intel Xeon Skylake 6140 processors (two AVX-512 FMA units, 2.3 GHz), 36 cores and 192 GB of RAM per node; 14,688 cores and 3.5 PB of storage in total.

Large-Memory

  • 1 x 6 TB server with 144 cores (Intel Xeon 6154 @ 3.0 GHz base)
  • 3 x 3 TB servers, each with 36 cores (Intel Xeon 6140 @ 2.3 GHz base)

The DI System at Leicester is designed to offer fast, responsive I/O.
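
To illustrate the kind of access pattern this is aimed at, below is a minimal MPI-IO sketch in which every rank writes its own block of a single shared file collectively, the pattern a Lustre scratch filesystem is designed to serve at scale. The file name and block size are illustrative placeholders, not DiRAC-specific settings.

    /* Minimal MPI-IO sketch: each rank writes one contiguous block of a
       shared file on a parallel (e.g. Lustre) filesystem. */
    #include <mpi.h>
    #include <stdlib.h>

    #define BLOCK_DOUBLES (1 << 20)   /* 1 Mi doubles = 8 MiB per rank */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Fill a local buffer with a dummy payload. */
        double *buf = malloc(BLOCK_DOUBLES * sizeof(double));
        for (int i = 0; i < BLOCK_DOUBLES; i++)
            buf[i] = (double)rank;

        /* Open one shared file collectively; "snapshot.dat" is a placeholder. */
        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "snapshot.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Collective write: rank r writes its block at offset r * 8 MiB. */
        MPI_Offset offset = (MPI_Offset)rank * BLOCK_DOUBLES * sizeof(double);
        MPI_File_write_at_all(fh, offset, buf, BLOCK_DOUBLES, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }

Compiled with an MPI wrapper compiler (for example mpicc) and launched across many ranks, a pattern like this lets every process stream data to the parallel filesystem simultaneously rather than funnelling it through a single writer.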

Further information is available on the web page or by emailing Leicester support.


Memory Intensive Service

The Memory Intensive Service is hosted by the University of Durham at the Institute for Computational Cosmology (ICC).

Memory Intensive 2.5x

  • 2 x 1.5 TB login nodes with Intel Xeon Skylake 5120 processors (one AVX-512 FMA unit, 2.2 GHz), 28 cores per node

  • 452 compute nodes, each with 512 GB of RAM and 2 x Xeon 5120 (2.2 GHz) processors, offering a total of 12,656 cores.

  • The system is connected via Mellanox EDR in a 2:1 blocking configuration, with 333 TB of fast I/O scratch space and 1.6 PB of data space on Lustre.

Memory Intensive 2 (Formerly “Data Centric”)

  • 9,184 cores in the COSMA6 cluster. The 574 nodes offer 128 GB of memory per node and are connected via a Mellanox FDR10 InfiniBand fabric in a 2:1 blocking configuration. Storage capacity on COSMA6 is 2.5 PB.

  • The InfiniBand fabric connects COSMA6 to its Lustre filesystem, with I/O performance of 10-11 GB/s for writes and 5-6 GB/s for reads.
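
As an illustrative sizing exercise based only on the figures above (not a benchmark published by the service), writing a complete memory image of COSMA6 to Lustre at the quoted write rate would take roughly

\[
  \frac{574 \times 128\ \text{GB}}{10\text{-}11\ \text{GB/s}} \approx 7000\ \text{s} \approx 2\ \text{hours},
\]

which gives a feel for why sustained multi-GB/s I/O matters for simulations of this size.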

More information on the Memory Intensive 2 system can be found here, and further enquiries about the Memory Intensive Service can be emailed to ICC Support.


Extreme Scaling Service

The Extreme Scaling Service is hosted by the University of Edinburgh. DiRAC Extreme Scaling (also known as Tesseract) is available to industry, commerce and academic researchers. General information on Tesseract, as well as the User Guide, is available here.

  • 1,468 compute nodes, each with two 12-core Intel Xeon Silver 4116 (Skylake) processors (AVX-512 FMA, 2.1 GHz base, 3.0 GHz turbo) and 96 GB of RAM

  • 8 GPU compute nodes, each with two 2.1 GHz, 12-core Intel Xeon Silver 4116 (Skylake) processors, 96 GB of memory, and four NVIDIA V100 (Volta) GPU accelerators connected over PCIe
  • 3 PB of Lustre storage and an Intel Omni-Path (OPA) interconnect in a hypercube topology

  • The system is configured for codes that exhibit good to excellent strong scaling and vectorisation, and offers high-performance I/O and a high-performance interconnect (a brief illustration of vectorised code follows below).
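
As a deliberately simple sketch of what a vectorised kernel looks like on these nodes (illustrative only; the routine and compiler flags are examples, not part of the service documentation), the loop below maps onto AVX-512 fused multiply-add instructions when built with something like gcc -O3 -march=skylake-avx512:

    #include <stddef.h>

    /* y[i] = a*x[i] + y[i]: one fused multiply-add per element.  The
       iterations are independent, so the compiler can vectorise the loop
       across 8 doubles per 512-bit register on AVX-512 hardware. */
    void axpy(size_t n, double a, const double *restrict x, double *restrict y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }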

Further information on the Extreme Scaling Service is available by emailing DiRAC Support.


Our Services Supporting our Science

DiRAC operates within a framework of well-established, fully peer-reviewed science cases to deliver a transformative research programme aimed at creating novel and improved computing techniques and facilities. We tailor our Services’ architectures towards solving these science problems, and by doing so help underpin research covering the full remit of STFC’s astronomy, particle, nuclear and accelerator physics Science Challenges. Some brief illustrations of how our Services map onto our Science Agenda can be found below; for more information, please email the Project Office.

The Data Intensive Service addresses the problems associated with driving scientific discovery through the analysis of large data sets using a combination of modelling and simulation, e.g. the large-volume data sets from flagship astronomical satellites such as Planck and Gaia, and ground-based facilities such as the Square Kilometre Array (SKA).  One project using the Data Intensive Service is looking at breaking resonances between migrating planets.

The Memory Intensive Service supports detailed and complex simulations of computational fluid dynamics problems, for example cosmological simulations of galaxy formation and evolution, which require access to very large amounts of memory (more than 300 terabytes) to enable codes to ‘follow’ structures as they form. The innovative design of this Service supports physically detailed simulations which can use an entire DiRAC machine for weeks or months at a time. More on the Virgo project, which uses the Memory Intensive Service, can be found here.
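
As a rough cross-check against the hardware listed in the Memory Intensive section above (a back-of-the-envelope aggregate, not an official capacity figure), the combined RAM of the two Durham clusters is consistent with the ‘more than 300 terabytes’ quoted here:

\[
  452 \times 512\ \text{GB} + 574 \times 128\ \text{GB} \approx 231\ \text{TB} + 73\ \text{TB} \approx 305\ \text{TB}
\]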

The Extreme Scaling Service supports codes that make full use of multi-petaflop HPC systems. DiRAC works with industry on the design of systems using Lattice QCD in theoretical particle physics as a driver.   This field of physics provides theoretical input on the properties of hadrons to assist with the interpretation of data from experiments such as the Large Hadron Collider. To find out more about one of the Lattice QCD projects using the Extreme Scaling Service see the 2017 Science Highlights page.


The DiRAC Data Management Plan can be found here.