MPI Library


Produce a digital repository for the sharing and archiving of benchmarking data for key DiRAC codes.

Summary of work undertaken 

A wiki was created within the DiRAC instance of the Confluence package, currently hosted at the University of Edinburgh. It serves as a long-term repository for benchmarking data from the DiRAC systems and is intended to be updated as and when new benchmark results become available, for example during procurement activities or when new versions of applications are introduced. The repository has space for detailed information and comments from the benchmark runners to highlight special features of a run, and for MPI profiling information from the run.

Initial data from existing benchmark runs has been loaded into the wiki.


Creation of an MPI Library – Final Report

Previous DiRAC Days


09:00  Simon Hands (Swansea): Welcome (pdf)
09:10  Mark Wilkinson (DiRAC): Director’s Report (pdf)
09:30  Debora Sijacki (Cambridge): Simulating the most energetic events in the Universe (pdf)
10:00  Jacek Dobaczewski (York): Computing Atomic Nuclei (pdf)
11:00  David Wilson (TC Dublin): Hadron resonances from lattice QCD (pdf)
11:30  Mark Hannam (Cardiff): How DiRAC helps us measure black holes (pdf)
12:00  Vera Guelpers (Southampton): Lattice QCD calculations for high-precision tests of the Standard Model of particle physics (pdf)
12:30  David Britton (Glasgow) / Peter Clarke (Edinburgh): GridPP (pdf) / IRIS (pdf)
13:00  Lunch (Room 218, Wallace Building)
14:30  Parallel Sessions
17:00  Poster Presentations (Room 218, Wallace Building)


9:00 – 9:30    Welcome and Director’s report (Oak Room)
9:30 – 10:30   Morning session (Oak Room)
10:30 – 11:00  Coffee
11:00 – 13:00  Morning session, cont. (Oak Room)
13:00 – 14:30  Lunch
14:30 – 17:15  Parallel Session 1 (left-hand side of Oak Room); Parallel Session 2 (right-hand side of Oak Room)
17:15 – 18:00  Poster session, prizes and reception (Chestnut and Birch Rooms)



Practicalities, Welcome and Introduction
Adrian Jenkins/Alastair Basden

Introduction to The DIRAC3 Technical Case
Jeremy Yates

The new Data Intensive Service
Paul Calleja

The new Extreme Scaling Service
Antonin Portelli

The new Memory Intensive Service
Alastair Basden

The new Research Software Engineer Service
Andy Turner

The new Data Curation Service
Alastair Basden

The ExCALIBUR Hardware and Enabling Software Programme
Martin Hamilton

Multi-factor authentication on DiRAC resources
Jon Wakelin

Meet the technical team
All teams

Student Cluster Competition
Mark Wilkinson

Innovation Placement outputs
Mark Wilkinson

Suggestions for follow up meetings and projects
Alastair Basden
14:00  Director’s talk

A steeply-inclined trajectory for the Chicxulub impact.
Thomas Davison

B Meson Oscillations
Christine Davies

The external photo-evaporation of planet-forming discs
Tom Haworth

DiRAC enables prediction for matter-anti-matter asymmetry in the Standard Model.
Christopher Sachrajda

Identifying and Quantifying the Role of Magnetic Reconnection in Space Plasma Turbulence.
Jeffersson Andres Agudelo Rueda

Extreme QCD: Quantifying the QCD Phase Diagram
Aleksandr Nikolaev
16:25  Hackathon Presentations
16:50  Poster Competition results
16:55  Closing Talk

DiRAC-3 Launch

Welcome: Jeremy Yates, Chair of DiRAC Technical Directorate

Overview of the new HPC Systems and hardware across all services: Mark Wilkinson, Director

Overview of User Guide: Anushka Sharma, Senior Technical Programme Coordinator

Overview of DiRAC Training: Richard Regan, Training Manager

TauREx on DiRAC-3: Dr. Ahmed Faris Al-Refaie, University College London

A Lattice Field Theory Ecosystem for DiRAC-3: L Del Debbio, University of Edinburgh

Large cosmological runs on the new DiRAC MI “Cosma-8” facility: Matthieu Schaller, Lorentz Institute, on behalf of the Virgo Consortium

The Technical Working Group experience of deploying DiRAC3: Alastair Basden, Technical Manager

The Research Software Engineer Experience: James Richings, Edinburgh and Athena Elafrou, Cambridge

Closing remarks: Simon Hands, DiRAC Community Development Director

Research Image Competition 2022

DiRAC is excited to launch our inaugural research image competition and encourage our past and present users to submit aesthetically inspiring and scientifically interesting imagery which has been generated using the DiRAC facility over the past three years.

The competition is a wonderful opportunity to have your research imagery displayed across the DiRAC platform, on our website, on social media, and in print media (see our 2022 calendar, right) and will help promote interest in your area of scientific study.

There are two categories for submission and the winners of each category will be selected by a panel of experts.

The top image in each category will be awarded a £250 Amazon e-voucher, kindly donated by Q Associates, a Logicalis company.

Details on how to submit can be found below. Submission deadline is 5pm on 14th October.

Prize winners will be notified on 1st November and results announced on our website and social media channels.

DiRAC 2022 Calendar comprising previously submitted images generated on DiRAC.


Theme 1: Particle and Nuclear Physics

Theme 2: Astronomy, Cosmology and Solar & Planetary Science

Any imagery submitted to this competition could be used in future marketing/publicity materials relating to DiRAC in either digital or print and as such, should have visual impact and scientific interest. We will be producing a 2023 DiRAC image calendar, the imagery for which will be drawn from the submissions to this competition and a selection of the top images will be displayed in print at our annual DiRAC Science Day event.

Entry requirements:
  • Images must be generated as a result of research work carried out using the DiRAC facility
  • Images should not be older than three years
  • Digital images must be submitted in one of the following formats: JPEG, TIFF, PNG or PDF
  • All entries must be accompanied by an entry form (see details below)
  • Entries should be accompanied by a short description, of no more than 150 words, giving scientific context to the image
  • Images may be generated specifically for this competition, but should result from research performed within the last 3 years
  • Author names should be included
  • Up to three submissions may be made per person
  • Images should be of a sufficient size and resolution (300 dpi minimum)
  • File sizes should not be larger than 15 MB
  • Competition opens 1st Sept
  • The deadline for submissions is 5pm, Friday 14th October

  • Dr Clare Jenner (Chair), DiRAC Deputy Director
  • Dr Jonathan Allday, Teacher and Science Writer
    • Author of Quarks, Leptons and the Big Bang and Space-time
  • Prof Peter Coles, Maynooth University
    • Peter is an author and runs the In The Dark science blog
  • Prof Lucie Green, University College London
    • Lucie is a Science broadcaster and former presenter of The Sky at Night
  • Prof Tara Shears, University of Liverpool
    • Tara has worked with the Arts Catalyst visual arts project
  • Georgina Ellis, Q Associates
    • Senior Account Manager

DiRAC Image Competition 2022

Queries can be directed to Simon Hands, DiRAC Community Director

Automated Benchmarks (ReFrame)


To utilise ReFrame as a single wrapper for the suite of existing DiRAC and UCL Tier 2 benchmarks, with the aim of providing a single set of benchmarks that can be run as needed following system upgrades.

Summary of work undertaken 

The following were successfully added to ReFrame:

  • The benchmarks for Swift and Grid
  • The benchmarks for CP2K
  • The benchmarks for HPGMG, IMB, and Sombrero (the latter is a mini-app for Swift)

Work on the following benchmarks has progressed, but is not yet complete due to technical challenges:

  • Ramses, Sphng, and Trove
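The essence of wrapping a benchmark in a framework like ReFrame is pairing each run with a sanity check and a performance extraction step. The standalone sketch below illustrates that pattern only; the command and regular expressions are invented for illustration and are not the real DiRAC tests or the ReFrame API.

```python
import re
import subprocess

def run_benchmark(cmd, sanity_pattern, perf_pattern):
    """Run `cmd`, fail if the sanity pattern is absent, return the metric.

    This mimics the check-then-extract pattern a benchmark framework
    automates; the patterns here are purely illustrative.
    """
    out = subprocess.run(cmd, capture_output=True, text=True, shell=True).stdout
    if not re.search(sanity_pattern, out):
        raise RuntimeError(f"sanity check failed for: {cmd}")
    match = re.search(perf_pattern, out)
    return float(match.group(1)) if match else None

# A trivial stand-in 'benchmark' whose output mimics a timing line.
metric = run_benchmark(
    "echo 'run completed in 12.5 s'",
    sanity_pattern=r"run completed",
    perf_pattern=r"in ([0-9.]+) s",
)
print(metric)  # 12.5
```

In a real framework the sanity and performance patterns live alongside the job launch configuration, so the same test definition can be re-run unchanged after a system upgrade.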

Implementation of ReFrame for Benchmarks – Final Report

AI/ML Benchmark



To create a self-contained AI Benchmark/workflow in the domain of synthetic brain imaging.

Summary of work undertaken 

Training epochs were run for three provided model configurations on the UCL AI platform, on both a single GPU and multiple GPUs. Several multi-day runs of ~100 epochs were completed. As the training scripts were configured to run for 100,000 epochs and each epoch takes around an hour or more, ‘full’ runs of the model were not performed.
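A back-of-envelope calculation makes clear why full runs were impractical, using the figures quoted above (100,000 configured epochs at a lower bound of one hour per epoch):

```python
# Estimate total wall time for a 'full' training run, taking the
# report's figures: 100,000 epochs at roughly one hour per epoch.
epochs_configured = 100_000
hours_per_epoch = 1  # lower bound; the report says "an hour or more"
total_hours = epochs_configured * hours_per_epoch
total_years = total_hours / (24 * 365)
print(f"{total_hours} hours is about {total_years:.1f} years of wall time")
```

Even at the optimistic one-hour-per-epoch bound, a full run would take over a decade, so the ~100-epoch multi-day runs were the practical ceiling.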

Python requirements and the package associated with the code were installed on the Cambridge HPC service (following the same setup process documented in the repository README).


A public GitHub repository containing the open-source (GPL v3) release of the research code developed by King’s College London. The repository has a GPL v3 license file included.

The README file in the repository contains full details of how to install the dependencies and the Python package, including platform-dependent requirement specifications with pinned versions for the supported operating system and Python version combinations. There is also documentation on how to run the model training with the example configurations provided.

Development of an Artificial Intelligence-Machine Learning Benchmark – Final Report

Advanced Application & Systems Performance Analysis Tools



To produce an application that monitors workload usage of hardware components.

Summary of work undertaken 

The assets from the Cloud Road-testing for UKRI Workloads work package were extended so that every platform has monitoring giving visibility into how well a workload is using the hardware assigned to it.

The Jupyter Notebook and Linux machine platforms both ran an isolated Prometheus Node Exporter and Grafana stack. A similar stack was run on Slurm, alongside Slurm-specific information such as the jobs currently running.
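The Node Exporter in such a stack exposes host metrics as plain text at a /metrics endpoint, which Prometheus scrapes on a schedule. The sketch below parses a small sample of that text exposition format to show its shape; the sample payload is invented, not output from a DiRAC node.

```python
# Sketch: parse simple sample lines from the Prometheus text exposition
# format, i.e. lines of the form
#   metric_name 0.42
# ignoring the # HELP / # TYPE comment lines. Real payloads also carry
# {label="value"} sets, which this toy parser keeps as part of the key.

def parse_metrics(payload):
    """Return {metric_name: value} for each sample line in `payload`."""
    metrics = {}
    for line in payload.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip HELP/TYPE comments and blank lines
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

# Illustrative payload, loosely modelled on Node Exporter output.
sample = """\
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.42
node_memory_MemFree_bytes 8.1e9
"""
print(parse_metrics(sample))
```

Grafana then sits on top of the Prometheus time series these scrapes accumulate, which is what gives each platform its per-workload hardware-utilisation view.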


OpenStack Cloud Dashboard – Azimuth was modified to link to a Grafana instance that provides insights into the user’s current usage and resource allocations. (There are, however, still some missing links in making a full end-to-end prototype.)

DiRAC-wide Dashboard – there is no working prototype for a DiRAC-wide dashboard, but it was shown architecturally how one could be adopted at each site and then aggregated centrally using the same technologies.

Authorisation Module



To re-engineer an existing authorisation application to be suitable for use within the DiRAC ecosystem.

Summary of work undertaken 

A Data Access Controller was developed, with the intention that each HPC service hosts an instance of it. It is responsible for querying instances of the IG App so that local users can prove they have permission to access a locally stored dataset by virtue of their participation in the project that owns it.

An end-to-end workflow was successfully demonstrated: users were able to register datasets from the Information Governance app and create new shared directories on the local filesystem, and permissions on those directories were automatically updated in response to changes in the IG App.
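The filesystem side of that workflow amounts to adjusting directory permissions whenever the authorisation service reports a membership change. The sketch below shows only that mechanism with POSIX mode bits; the function name and the trigger are hypothetical, and the real Data Access Controller drives this from IG App queries rather than a local call.

```python
import os
import stat
import tempfile

# Sketch of the permission-update step: a dataset directory's group
# read/execute bits are granted or revoked, as would happen in response
# to a change reported by the IG App. `set_dataset_access` is a
# hypothetical helper, not part of the real service.

def set_dataset_access(path, allow_group):
    """Grant or revoke group read/execute on a dataset directory."""
    mode = stat.S_IRWXU  # owner always keeps full access
    if allow_group:
        mode |= stat.S_IRGRP | stat.S_IXGRP
    os.chmod(path, mode)
    return oct(os.stat(path).st_mode & 0o777)

dataset_dir = tempfile.mkdtemp(prefix="project_dataset_")
print(set_dataset_access(dataset_dir, allow_group=True))   # 0o750
print(set_dataset_access(dataset_dir, allow_group=False))  # 0o700
```

A production controller would typically manage access via project groups or ACLs rather than bare mode bits, but the automatic grant/revoke loop is the same.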


Investigation of an Authorisation Module – Final Report




To explore and document the experience of porting some representative codes to a oneAPI programming model.

Summary of work undertaken 

The project documented the experience of porting some representative codes to one or other of two promising programming models: SYCL or OpenMP offload. Both programming models are supported by Intel oneAPI and by other commercial and open-source compilers.

Five candidate codes (OpenQCD, OpenMM, HemeLB, dGpoly3D, and AREPO) were selected, profiled and kernels were ported. (An absolute performance comparison between programming models was not a goal of this work.)

The experience of a group of research software engineers, most of whom were novices in SYCL or OpenMP GPU offload programming, was examined.


The final report from this piece of work is expected in Autumn 2022.




To understand the multiple dimensions of prediction of concepts in social and biomedical science questionnaires.

Summary of work undertaken 

This work package extended the scope of the research tackled in the RCNIC project to:

  • Dive deeper into questions related to the size and quality of the training data and how this affects the performance of the designed ML models.
  • Assess the performance of the trained ML models for automated tagging of question texts with the top-level concept topics (16 in number) from existing thesauri such as European Language Social Science Thesaurus (ELSST) in ‘inference mode’, i.e. with new unseen questionnaires (that were not part of the training and validation set).
  • Investigate new ML models (such as hierarchical approaches) for tagging question texts (and response domains) with the 120 second-level topics from ELSST.
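To make the inference-mode task above concrete, the toy tagger below assigns a top-level topic to an unseen question text. The real project used trained ML models over the 16 ELSST top-level concepts; here a keyword-overlap score over three invented topics stands in, purely to show the input/output shape of tagging.

```python
# Illustrative sketch of 'inference mode' topic tagging. The topics and
# keyword sets below are hypothetical stand-ins, not ELSST concepts, and
# keyword overlap is a deliberate simplification of the trained models.

TOPIC_KEYWORDS = {
    "HEALTH": {"health", "illness", "doctor", "hospital"},
    "EMPLOYMENT": {"job", "work", "employer", "salary"},
    "EDUCATION": {"school", "degree", "qualification", "study"},
}

def tag_question(text, top_k=1):
    """Return up to `top_k` topics whose keywords overlap the question."""
    tokens = set(text.lower().split())
    scores = {t: len(kw & tokens) for t, kw in TOPIC_KEYWORDS.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [t for t in ranked[:top_k] if scores[t] > 0]

print(tag_question("How often do you visit a doctor or hospital?"))
```

A hierarchical model, as investigated for the 120 second-level ELSST topics, would first predict a top-level concept like this and then classify within that concept's subtree.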

Applying machine learning models to social or biomedical science questionnaires – Final Report