2018 Highlights

Some of the Research Highlights from our users in 2018:

2018 list of publications

The weighty issue of quarks

Figure 1. An artist’s impression of an ηc meson containing a charm quark and antiquark (large blue circles) in a strongly interacting background soup of gluons (wavy lines) and quark-antiquark pairs (small circles).

Electrons make circular tracks when they move perpendicular to a magnetic field in a particle detector. From the radius of the circle, knowing the electric charge, we can determine the electron mass. Quarks, on the other hand, are never seen as free particles. Instead, all we see in our particle detectors are the bound states of quarks, called hadrons. We can measure hadron masses, but how do we then determine, or even define, the mass of a quark?
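The track-radius argument can be sketched numerically. Below is a minimal illustration (not an actual detector reconstruction): the momentum follows from p = qBr, and combining it with an independently measured speed recovers the mass. The field, radius and speed values used are hypothetical.

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge in coulombs
C_LIGHT = 2.99792458e8      # speed of light in m/s

def momentum_from_track(field_tesla, radius_m, charge=E_CHARGE):
    """Momentum (kg m/s) of a charged particle on a circular track: p = qBr."""
    return charge * field_tesla * radius_m

def mass_from_track(field_tesla, radius_m, speed_m_s, charge=E_CHARGE):
    """Recover the rest mass from the track radius and an independently
    measured speed, using the relativistic relation p = gamma * m * v."""
    p = momentum_from_track(field_tesla, radius_m, charge)
    gamma = 1.0 / math.sqrt(1.0 - (speed_m_s / C_LIGHT) ** 2)
    return p / (gamma * speed_m_s)
```

For an electron moving at half the speed of light perpendicular to a 1 T field, this gives a track radius of about a millimetre; no such free-particle measurement is available for a quark, which is the point of the paragraph above.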

The answer is that we must define the mass as the parameter that enters the equations of QCD, the theory of the strong interaction that binds quarks into hadrons. We can solve the equations of QCD using the numerical approach of lattice QCD and hence determine the quark mass parameter that we need to put into the theory to give the correct mass for a calibration hadron containing that quark. Since experimentalists can measure some hadron masses very well, we can do this accurately. For example, to fix the charm quark mass, mc, we use the mass of the ηc (pictured in Figure 1).

To be of use to other theorists, however, we must convert the lattice quark mass into that appropriate to QCD defined in continuous space-time. We must do this accurately too, to avoid introducing a sizeable uncertainty into the result.

Figure 2. A comparison of lattice QCD results for mc, given in the conventional continuum QCD scheme. The top result is our new one, and the third our 2014 result, both obtained on DiRAC. The dark grey band is the lattice QCD world average, with a much smaller uncertainty than the previous average (light grey band, marked PDG).

HPQCD developed one of the main techniques used to determine quark masses (the JJ method) and most recently used this, on DiRAC, to determine mc to 1% in 2014 (arXiv:1408.4169). This year we used a completely different technique (called RI-SMOM) to ‘match’ our lattice quark mass to that of continuum QCD, directly on the lattice (arXiv:1805.06225). We were careful to quantify and remove lattice artefacts, which had been ignored in previous calculations. Our new 1% accurate result agrees very well with our 2014 value and also with a third, MRS, method developed by FNAL/MILC/TUM in 2018. This gives a world-average for mc of 1.2753(65) GeV (1.36 times the mass of the proton). The quark masses need to be accurately known so that we can test whether the newly-discovered Higgs boson is behaving as we expect when it decays to hadrons via quark-antiquark pairs of different flavours.
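A world average of this kind is, in essence, an inverse-variance weighted mean of the independent determinations. A minimal sketch (the numbers in the example are illustrative, not the actual lattice inputs; real averages such as FLAG's also inflate the error when results are in tension):

```python
def weighted_average(values, errors):
    """Inverse-variance weighted mean and its combined uncertainty:
    mean = sum(v_i / e_i^2) / sum(1 / e_i^2),  error = sqrt(1 / sum(1 / e_i^2)).
    More precise determinations therefore dominate the average."""
    weights = [1.0 / e ** 2 for e in errors]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, total ** -0.5
```

Combining several ~1%-accurate determinations in this way is what drives the world-average uncertainty below that of any single method.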

Light quark flavour breaking effects in B meson decay constants

In the standard model of particle physics there are six types of quark, whimsically called up u, down d, strange s, charm c, bottom b and top t in order of increasing mass. These bind together to form two classes of hadrons: mesons, consisting of a quark-antiquark pair, and baryons, with three quarks. An example of a baryon is the proton (uud), while examples of mesons are the pion (ud̄), kaon (us̄), B (ub̄) and Bs (sb̄). These mesons can decay to a charged lepton and a neutrino. The decays are parametrised by 'meson decay constants', from which, together with experimentally determined leptonic decay rates, elements of the CKM matrix can be found. These are fundamental parameters of the Standard Model, and their determination can help to isolate possible new physics. Recent B-physics experiments have left us with a number of heavy-flavour physics puzzles, which will be investigated more intensively by Belle II.

Figure 1: Left panel: A ‘fan’ plot for the light quark ud and us decay constants (normalised with the average ud, us decay constant) for one lattice spacing against the pion mass; Right panel: The heavy b quark ub, sb decay constants, again normalised, for several lattice spacings also plotted against the pion mass.

In our Lattice QCD programme we have developed a method based on expanding about a point in the light-quark mass parameter space where the u, d, s quark masses are initially set equal, and then extrapolating to the physical quark masses while keeping the average quark mass constant. The original expansions were for hadron masses, and led to a 'fan' plot in which the different hadron masses radiate from their common point. These expansions were extended to the decay constants of the pion and kaon mesons [1], as shown in the left panel of the figure. We have now further extended this programme to the heavier quark masses, and in particular to the b [2]. In the right panel of the figure we give the equivalent plot for the B and Bs mesons, together with the present (FLAG16) values.

[1] V.G. Bornyakov et al., Phys. Lett. B 767 (2017) 366.

[2] S. Hollitt et al., PoS(LATTICE2018)268, arXiv:1811.06677 [hep-lat].

Gravitational Waves 2018

The Advanced LIGO and Virgo detectors were undergoing an upgrade during 2018, so the year was devoted to completing analyses of data from the first two observing runs (O1 and O2), and preparations for the third observing run (O3), due to start in April 2019.

The chief results from the LIGO and Virgo collaborations were the final estimates of the physical properties of the binary neutron star merger GW170817, including stronger constraints on the neutron star tidal deformability, and therefore on its equation of state and radius; and the release of the first catalogue of observations from O1 and O2, based on a re-analysis of all the data, which had in turn undergone improved characterisation and cleaning. This re-analysis uncovered new signals in the data and doubled the number of observed binary black hole mergers from five to ten. It included the re-classification of the marginal "LIGO-Virgo transient" of October 12, 2015 as a bona fide GW signal and, among the new signals, GW170729, likely the most massive binary yet observed at around 85 solar masses. All of this work used waveform models that were calibrated to numerical-relativity simulations performed on DiRAC.
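For context, the single quantity best constrained by an inspiral signal is the "chirp mass", a fixed combination of the two component masses; the ~85 solar masses quoted for GW170729 is the total mass. A minimal sketch of both quantities (the example masses in the usage note are illustrative, not the measured GW170729 values):

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1*m2)^(3/5) / (m1+m2)^(1/5), in the same units as
    the inputs (e.g. solar masses). This combination controls the
    leading-order frequency evolution of a compact-binary inspiral."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def total_mass(m1, m2):
    """Total binary mass, the quantity quoted for the most massive events."""
    return m1 + m2
```

For example, a hypothetical 50 + 35 solar-mass binary has a total mass of 85 solar masses but a chirp mass of only about 36; converting between the two is where the waveform models enter.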

Figure 2: The eleven gravitational-wave signals observed in the first and second LIGO-Virgo observing runs (O1 and O2).

Modelling work also continued. The first inspiral-merger-ringdown model to include higher signal harmonics (PhenomHM) was published in Physical Review Letters, and is now being used to update the measurements of the properties of GW170729. The bulk of the most extensive and systematic set of simulations of precessing binaries was completed on Cosma6, and a preliminary model of the precession effects through merger and ringdown (the first of its kind) was completed, with publication expected in 2019, for use in analysis of O3 observations.

Exomol: Computational spectroscopy of Exoplanets

Molecular spectra play a critical role in the retrieval of the atmospheres of exoplanets and cool stars. However, the lack of laboratory data on many molecular species severely limits models for objects as diverse as Jupiter, exoplanets and brown dwarfs. The UCL ExoMol team is a world leader in providing molecular line lists for exoplanet and other hot atmospheres. The ExoMol procedure uses a mixture of ab initio calculations and available laboratory data. These line lists form the input for opacity models for cool stars and brown dwarfs, as well as for radiative-transport models of exoplanets. So far ExoMol has collected molecular opacities for more than 50 molecules (130 isotopologues): 30 line lists have been generated using the ExoMol methodology, and the others were taken from external sources or from our work predating the ExoMol project.

Ethylene and acetylene are important absorbers in hot Jupiter exoplanets, brown dwarfs and cool carbon stars. As part of our Hydrocarbons DiRAC project, we have computed new line lists for these molecules, each of which contains over 30 billion transitions. To accomplish this data-intensive task, we used some of the UK's most advanced supercomputers, provided by the DiRAC project. Even larger line lists have been completed: the line list for methyl chloride, for example, contains over 300 billion lines.

Many of the exoplanets discovered thus far are categorised as rocky objects with an atmosphere. Most of these objects are, however, hot, owing to their short orbital periods. Models suggest that water is the dominant species in their atmospheres, and the high temperatures are expected to turn these atmospheres into a (high-pressure) steam bath containing the remains of molten rock. The spectroscopy of these hot rocky objects will be very different from that of cooler objects or hot gas giants. We have reviewed the molecules suggested to be important for the spectroscopy of these objects, together with the current status of the corresponding spectroscopic data, and have started building a comprehensive database of line lists and cross sections applicable to atmospheric models of rocky super-Earths as part of the ExoMol project.

Efficient use of line lists comprising billions of lines is extremely challenging. To address this Big Data challenge, we have developed a new Fortran 2003 program, ExoCross. ExoCross takes line lists as input and returns pressure- and temperature-dependent cross sections, as well as a variety of other derived molecular properties that depend on the underlying spectroscopic data, including state-dependent lifetimes, temperature-dependent cooling functions, and thermodynamic properties such as partition functions and specific heats. It is designed to work with the recently proposed method of super-lines, and is capable of simulating non-LTE spectra using a simple two-temperature approach.
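The thermodynamic quantities derived from a line list rest on simple Boltzmann sums over the molecular states. A schematic Python version of the idea (ExoCross itself is Fortran, and the two-level molecule in the test is hypothetical):

```python
import math

C2 = 1.4387769  # second radiation constant hc/k_B, in cm*K

def partition_function(states, temperature):
    """Q(T) = sum_i g_i * exp(-c2 * E_i / T), for states given as
    (energy, degeneracy) pairs with energies in wavenumbers (cm^-1)."""
    return sum(g * math.exp(-C2 * energy / temperature) for energy, g in states)

def level_population(states, energy, g, temperature):
    """Fractional Boltzmann population of one level at temperature T."""
    return g * math.exp(-C2 * energy / temperature) / partition_function(states, temperature)
```

Temperature-dependent cross sections follow the same pattern: each line's intensity is weighted by the population of its lower state, which is why the partition function must be evaluated accurately at every temperature of interest.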

Supported by the DiRAC high-performance facilities, we organised and ran a workshop, "Digital Exoplanets", in Prague in January 2019 (www.digitalexoplanets.org). The workshop gathered over 50 computational exoplanetary scientists (more than half of them early-career researchers) and focused on the data for atmospheric models of exoplanets and cool stars. The main theme of the workshop was open access to data and computer codes. This was the first workshop of its kind, focusing not only on observations and theory but on 'how to actually run these codes'. During the hands-on sessions, which were the central part of the workshop, participants had the opportunity to learn how to use (install, run, debug, etc.) the main atmospheric and spectroscopic codes. These practical sessions were run on resources provided by DiRAC, with the much-appreciated help of the DiRAC team.

UK MHD Consortium: Solar and Planetary Interiors

One of the great challenges in the study of planetary interiors is to understand the generation of magnetic fields in the interiors of the giant gas planets. Wieland Dietrich and Chris Jones from the University of Leeds have been addressing this problem through high-resolution magnetohydrodynamic models, with the novel feature of incorporating realistic radially-dependent profiles for the electrical conductivity (Icarus 305 (2018) 15-32). This allows the study of a range of models in which the conductivity ranges from Jupiter-like, with only a thin outer non-conducting shell, to Saturn-like, with a thick non-conducting shell. They find Jupiter-like steady dipolar fields, quadrupolar dynamos for profiles between those of Jupiter and Saturn, and dipolar fields again for Saturn-like profiles. The non-axisymmetric components of the Saturn model are, though, more pronounced than in Saturn itself, prompting the suggestion that a stably stratified region – postulated to exist in Saturn, but not yet incorporated in the model – may be of significance.

Figure 3: The figure shows (from left to right) time-averaged meridional sections of the azimuthal flow, meridional stream function, kinetic helicity, radial and azimuthal magnetic fields; the right hand plots are typical snapshots of the radial field and azimuthal flow at the surface. The top row is for the case of uniform electrical conductivity; the bottom row is for a Jupiter-like profile, in which the electrical conductivity drops off sharply at 90% of the planetary radius.

Leicester Highlight: Planet-driven warps in protoplanetary discs

Since their discovery in the 1990s, our knowledge of extra-solar planets has increased at a staggering rate, and we now know that most, perhaps all, stars host planetary systems. These planets form in cold discs of dust and gas around young stars, and in recent years our observations of these protoplanetary discs have also advanced dramatically. We typically treat planet-forming discs as flat, smooth structures, but recent observations have revealed a wealth of sub-structures in these discs, including spiral arms, gaps, misalignments and warps. While we can explain the origin of some of these features, many remain open questions.

One of the best-studied protoplanetary discs is around the nearby star TW Hya, and this almost face-on disc shows evidence of gaps (possibly due to planets) as well as a localised dark region in the outer disc. Observations with the Hubble Space Telescope found that this dark region is moving around the outer disc, but much more quickly than gas orbits at that radius. This strongly suggests that the dark region is actually a shadow cast by something in the inner disc (where the gas orbits faster). It was interpreted as evidence of a disc warp, such that the inner part of the disc is inclined with respect to the outer disc; the relative misalignment between the inner and outer disc then results in a shadow being cast on the outer disc. Such a configuration could arise if a misaligned planet is present in the system, with the planet's orbit tilted away from the mid-plane of the disc. The motion of the inner, tilted disc, and hence of the shadow, would thus be governed by the gravity of this misaligned planet, and it might be possible to infer properties of the planet from the observed shadow.
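The "too fast to be orbiting locally" reasoning follows directly from Kepler's third law. A back-of-the-envelope sketch (the stellar mass and radii in the usage note are illustrative values for a TW Hya-like system, not fitted parameters):

```python
def orbital_period_years(radius_au, stellar_mass_msun):
    """Keplerian orbital period from Kepler's third law,
    P[yr] = sqrt(a[au]^3 / M[Msun])."""
    return (radius_au ** 3 / stellar_mass_msun) ** 0.5
```

Material at ~150 au around a roughly solar-mass (here taken as ~0.8 Msun) star takes over two thousand years per orbit, far too slow for a feature seen to move around the disc in well under a century, whereas a shadow cast from material at a few au naturally moves on a decades-long period.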

Figure 4: 3-D numerical simulation of an inclined planet in a protoplanetary disc, performed on the DiRAC Data Intensive @ Leicester system. Our simulations showed that an inclined giant planet can create an observable warp in the disc. (Figure adapted from Nealon et al. 2018).

Simulating such a configuration is technically challenging, because we need to resolve the out-of-plane interactions between the disc and the planet, as well as include the influence of the outer disc. In practice this requires high-resolution 3-D hydrodynamic simulations covering a large dynamic range. Using DiRAC, we were able to run a suite of simulations to study the interaction between misaligned planets and their parent discs in detail, investigating the effect of different planet masses and inclinations on the subsequent disc structure. For planets massive enough to carve the disc into an inner disc and an outer disc (as seen in Figure 4), we confirmed that it was possible to drive the configuration predicted from the observations of TW Hya. Importantly, the large spatial scales we were able to simulate allowed us to show that this was possible even for small planet inclinations. We subsequently ran detailed radiative-transfer simulations to calculate how our simulated discs would appear to our telescopes, and confirmed that such a configuration is broadly consistent with what is observed in TW Hya. New observations are finding that misalignments of this type are common in planet-forming discs, and in the coming years DiRAC will allow us to understand the dynamics and evolution of these nascent planetary systems in detail.

COSMOS Consortium

The COSMOS consortium (dp002) continues to exploit the DiRAC HPC facilities and has made progress towards ambitious science goals in extreme gravity, inflation and the early universe, and the study of the cosmic microwave sky and large-scale structure (including the Final Data Release for the ESA Planck satellite). Here, we highlight recent breakthroughs in general-relativistic simulations of black holes and inflation using GRChombo, a flexible AMR numerical-relativity code publicly released by COSMOS consortium members in early 2018 (see grchombo.org and github.com/GRChombo).

The groundbreaking discovery of a gravitational-wave signal from a binary black hole merger, announced by LIGO in 2016, has been followed by an exciting series of detections, including the first binary neutron star merger in 2017. With the current LIGO/Virgo O3 science run, with upgraded detectors, detecting multiple events weekly, the new era of gravitational-wave astronomy is well underway.

The GRChombo AMR code, originally developed and now anchored by COSMOS consortium members, is well-placed to exploit these new observations of gravitational waves (GWs). We have continued our long-standing industrial collaboration through the COSMOS Intel Parallel Computing Centre (IPCC) to modernize our research codes in cosmology and general relativity, focusing on the optimization of GRChombo for multi/many-core systems; for example, GRChombo is one of the few codes that can take full advantage of the many-core Xeon Phi systems. In addition, GRChombo continues to grow in functional capabilities, with the addition of a powerful MultiGrid initial conditions solver and the pioneering many-core OSPRay realtime ray tracer and volume renderer with in-situ capabilities (supported by the Intel Viz team).

The advent of GW astronomy has opened a window onto one of the key questions in modern cosmology: the nature of dark matter. COSMOS investigators are taking a leading role in this search. One of the prime candidates for dark matter is an ultralight field of the kind sometimes called "axion-like particles" or ALPs. In recent papers (notably arXiv:1609.04724 and arXiv:1806.09367), we have investigated the formation of highly compact "axion dark stars" from these light fields, and the potential GW signals created by their collisions with neutron stars and black holes (arXiv:1808.04668, shown in the figure below), and with other axion stars (arXiv:1802.06733). In arXiv:1904.12783, we showed that these fields can also interact directly with black holes to grow "scalar hair", leading to a possibly detectable dephasing of the GW signals from BH binary mergers.

Figure: A volume rendering of the bosonic energy density (blue) and the BH conformal factor (grey).

Beyond GW astronomy, COSMOS consortium investigators are also leading the numerical study of fundamental questions in physics. For example, inflation is now the paradigmatic theory of the early universe, providing a dynamical mechanism to explain why the universe is homogeneous over cosmological scales and a source of the seeds of large-scale structure. Nevertheless, can inflation begin in the first place if the initial conditions are not homogeneous? In recent work (e.g. arXiv:1712.07352), we classified the models of inflation that are not robust to inhomogeneous initial conditions. Intriguingly, we found that low-scale inflation – preferred by many string-theory phenomenological models – is not robust, failing to inflate in the presence of large-scale but subdominant inhomogeneities.

In addition, we continue to take the lead in investigating the nature of gravity in higher dimensions. Following our work showing the failure of weak cosmic censorship in 5D due to black ring instabilities, we continue to study the end states of higher-dimensional rotating (Myers-Perry) black holes. We find that, similarly to the black ring systems, these black holes exhibit the Gregory-Laflamme instability, leading to a fractal instability whose ultimate end state is likely to exhibit naked singularities in an asymptotically flat spacetime (shown above). Given these results in extra dimensions, it is clear that, if cosmic censorship is indeed true, then it must be a property specific to the three-dimensional space that we live in; without this essential property, Einstein's theory of general relativity could lose its predictive power.

Modelling Planetary Collisions with the SWIFT code

We have produced the highest-resolution simulations of the giant impact that is thought to have knocked Uranus' spin axis into the plane of the solar system. The two-orders-of-magnitude increase in resolution was enabled by our use of the SWIFT hydrodynamics and gravity code on the DiRAC facility in Durham. We incorporated equations of state for various materials into SWIFT, which had primarily been designed for cosmological applications. The high-resolution simulation of this planetary giant impact shows features that are entirely absent from previous smaller calculations. For instance, after the initial impact the subsequent sloshing of the system can be sufficiently vigorous to unbind some of the icy mantle from the proto-Uranus – an effect that was not seen in earlier million-particle simulations. Determining what material was placed into orbit as a result of this impact is crucial for deciding whether the currently observed Uranian satellites could have formed from this debris; again, the higher-resolution simulations now allow us to make reliable predictions for this. These represent the first scientific results with the SWIFT code, and demonstrate how it will open up new ways to constrain models of planetary giant impacts relevant to most bodies in the solar system.

A mid-collision snapshot of a grazing impact with 10⁸ SPH particles coloured by their material and internal energy, showing some of the detailed evolution and mixing that can now be resolved. In the left panel, light and dark grey show the target's ice and rock material, respectively, and purple and brown show the same for the impactor. Light blue is the target's H-He atmosphere.

This calculation used a Dell 940 system with four Intel Skylake processors (112 cores in total). The system was provided by Intel and Dell as a proof of concept as part of the Dell/Intel and DiRAC@Durham industrial engagement, at a list price of approximately £100k.

The FABLE simulations of galaxies, groups and clusters

Clusters of galaxies collapse from the densest regions of the early Universe to become the largest objects in existence by the present day. Their unparalleled sizes make them powerful tools to probe the growth of large-scale structure and further our understanding of cosmology. At the same time, clusters represent unique astrophysical playgrounds in which galaxies interact with each other and with the intracluster gas, which itself is churned up by the powerful energy output of supermassive black holes. This process, known as AGN feedback, stems from black holes at the centres of galaxies swallowing gas, converting it into outflows of energy and impacting not only their galaxy, but also the surrounding intracluster medium millions of light-years away.

The inclusion of AGN feedback in cosmological hydrodynamical simulations of galaxy formation has proven to be a key ingredient for producing realistic galaxies in a representative region of the universe. Projects such as the Illustris simulation have met with great success in this area and thus have made great strides in our understanding of galaxy formation and evolution, as well as guiding observations. Yet whilst this model matches a wide range of observed galaxy properties, it struggles to reproduce other important properties of galaxy clusters and groups such as the contents of the intracluster medium.

Figure 1: The main panel shows the gas distribution of one of the largest FABLE clusters, revealing an array of interconnecting filaments and infalling galaxies and groups. Panel 1 shows an observed spiral galaxy with an enormous tail of gas shining in X-rays, and panel 2 shows a similar view for one of our simulated galaxies; this gas has been stripped from the galaxy as it falls into the dense cluster environment. Panel 3 highlights a smaller group of galaxies of various shapes and sizes, while panel 4 shows the structure of AREPO's moving hydrodynamical mesh through a slice of the central galaxy.

With the FABLE project we have built upon the success of Illustris by improving the models for feedback from AGN and stars so that we can simultaneously produce realistic galaxies, groups and clusters, thereby completing our picture of the Universe across this vast range of scales. Consisting of high-resolution simulations of more than thirty groups and clusters performed with the state-of-the-art moving-mesh code AREPO, FABLE reproduces a number of important observables such as the build-up of stars in galaxies and the thermodynamic properties of the intracluster medium.

FABLE further reproduces a range of integrated cluster properties and their interdependencies as represented via the oft-studied cluster scaling relations. These relations are vital for the cosmological interpretation of observed cluster data, the availability of which is set to be greatly expanded by numerous ongoing and future surveys, for example with observations of the Sunyaev-Zel’dovich effect via SPT-3G and ACTpol or in X-rays with the eROSITA and Athena observatories. Yet as larger cluster surveys beat down the statistical uncertainties, we are increasingly limited by our incomplete understanding of cluster physics and its impact on the cluster scaling relations, which are required to link cluster observables to their fundamental masses.
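Cluster scaling relations of this kind are usually characterised as power laws fitted in log space. A minimal sketch of such a fit (the synthetic data in the test are illustrative, not FABLE output):

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = A * x^alpha, done linearly in log space,
    the standard way cluster scaling relations (e.g. mass-temperature)
    are characterised. Returns (A, alpha)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = float(len(lx))
    mx, my = sum(lx) / n, sum(ly) / n
    alpha = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) \
        / sum((a - mx) ** 2 for a in lx)
    return math.exp(my - alpha * mx), alpha
```

Comparing the fitted slope and normalisation between simulated and observed cluster samples, and between redshift bins, is how departures from the simple self-similar expectation are quantified.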

Figure 2: The evolution of the FABLE cluster shown in the first figure, starting from just 2.2 billion years after the Big Bang (left) through to the onset, 5.7 billion years later, of a major merger between two smaller clusters (right) that will eventually form one of the largest possible gravitationally-bound objects in the simulated (and real) universe.

Cosmological hydrodynamical simulations are in a unique position to quantify these effects and thereby facilitate the use of clusters as precise cosmological probes. Indeed, careful comparison of our results with observed scaling relations hints towards a non-negligible bias in observational studies that attempt to measure the masses of clusters from their X-ray emission under simplifying assumptions about the clusters’ dynamical state.

Furthermore, the increased size and depth of future surveys means that many new clusters will lie at vast distances from which their detectable emission may have taken many billions of years to reach us. This means it is vital to understand how the cluster scaling relations change over time so that they can be applied reliably to the most distant objects in upcoming datasets. While current observations are severely limited in their ability to constrain such evolution, simulations allow us to study the scaling relations for a well-defined sample of objects over the whole history of the Universe. Upcoming results from FABLE demonstrate a significant departure of the evolution of the relations from the simple model typically assumed in observational studies, which has important implications for interpreting the incoming wealth of new cluster data.

Fast and energetic AGN-driven outflows in simulated dwarf galaxies

Dwarf galaxies, the least massive galaxies in the Universe, are the smallest building blocks of the cosmic structure-formation process. They are interesting cosmological probes, as a number of apparent discrepancies emerge when observations of dwarf galaxies are compared with the predictions of structure-formation models: for example, the large disparity between the number of observed dwarfs and the predicted number of dark matter haloes that could host them (the missing satellite problem), or the lack of observed massive dwarf galaxies (the too-big-to-fail problem). Some have suggested that these issues could be resolved by changing our models of dark matter, which provides the skeleton for cosmic structure formation. Others argue that the focus should be on improving our models of the luminous, 'baryonic' matter that we can observe directly.

One of the most important baryonic processes, reionization, stems from the radiation produced by the first galaxies in the Universe. This radiation shuts down star formation in the low-mass dwarf galaxies, rendering them undetectable. For the more massive dwarf galaxies it has been suggested that violent stellar explosions – supernovae (SNe) – could shut down star formation, but this remains controversial: most simulations indicate that another ingredient is needed for the theoretical models to explain the sparsity of massive dwarf galaxies in our Universe.

Figure 1: Edge-on projections of all the simulation runs at t = 300 Myr. The first row shows surface densities, the second row shows temperatures, the third row shows vertical velocities, and the fourth row shows metallicities. The runs with low star formation efficiency are depicted on the left hand side, and the high star formation efficiency runs on the right hand side. With only SN feedback, the outflows are slow, warm, and high-density. With additional AGN feedback, the outflows have an additional hot, low-density component moving at high velocities.

Recently, it has been proposed that active galactic nuclei (AGN) – actively accreting massive black holes – might be this missing ingredient. Previously, theorists did not include AGN in their simulations of dwarf galaxies, as AGN had only been observed in more massive galaxies. However, systematic analysis of optical surveys has revealed a population of dwarf galaxies hosting AGN, since confirmed by X-ray follow-up observations. All of these AGN are extremely luminous relative to their black hole mass and are likely only the tip of the iceberg. There has also been first evidence of direct AGN feedback from the MaNGA survey, which identified six dwarf galaxies that appear to have an AGN preventing ongoing star formation. These observational results are very exciting, as they suggest that AGN may be able to solve the too-big-to-fail problem.

It is therefore timely to study the physical properties of dwarf galaxies, and in particular whether the presence of an AGN can affect their evolution. Using the moving-mesh code AREPO, we have investigated different models of AGN activity, ranging from simple energy-driven spherical winds to collimated, mass-loaded, bipolar outflows, in high-resolution simulations of isolated dwarf galaxies hosting an active black hole. Our simulations also include a novel implementation of star formation and mechanical SN feedback.

Figure 2: Comparison between the line-of-sight velocity maps for stars and ionized gas for a galaxy without observed AGN activity (left panel) and a galaxy with AGN activity (right panel). With AGN activity, the ionized gas is visibly offset from the stellar component.

Figure 1 shows visualisations of our simulation suite: edge-on projections of surface density, temperature, vertical velocity and metallicity at t = 300 Myr. The control runs with neither SN nor AGN feedback ('NoFeedback' runs) do not produce any outflows. With added SN feedback, we obtain relatively cold and slow-moving outflows. When we also include AGN activity, the outflows are much faster as well as much more energetic, reaching temperatures of up to 10⁹ K.

These results provide tantalizing evidence that the observed correlation between large kinematic offsets and AGN activity may be due to high-velocity AGN-driven outflows (see Figure 2 for an example of the different line-of-sight velocity maps observed with MaNGA). In our simulations, the same correlation arises because only the AGN-driven gas outflows are fast enough to alter the velocity patterns set by the rotational motion of the galaxy. The relatively slow-moving supernova-driven outflows on the other hand do not offset the rotational motion of the gas so that it stays aligned with the stars (see Figure 3, left panel). This then results in a significant difference between the kinematic position angles (PAs) of stars and gas for dwarf galaxies hosting AGN (see Figure 3, right panel).
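The kinematic misalignment used in such MaNGA-style comparisons is simply the difference between the stellar and gas position angles, folded into the range 0-180 degrees. A minimal sketch (the 30-degree threshold is a commonly used cut in the literature, quoted here as an assumption, not a value taken from this study):

```python
def kinematic_pa_offset(pa_stars_deg, pa_gas_deg):
    """Misalignment between stellar and gas kinematic position angles,
    folded into the range [0, 180] degrees."""
    delta = abs(pa_stars_deg - pa_gas_deg) % 360.0
    return min(delta, 360.0 - delta)

def is_kinematically_offset(pa_stars_deg, pa_gas_deg, threshold_deg=30.0):
    """Flag a galaxy as kinematically offset; the default threshold is an
    assumed, commonly used cut, not one defined by this project."""
    return kinematic_pa_offset(pa_stars_deg, pa_gas_deg) >= threshold_deg
```

Applied to the simulated velocity maps, only the AGN runs push the gas position angle far enough from the stellar one to cross such a threshold.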

Figure 3: Left panel: Comparison between the 2D velocity maps of the stars and gas in the simulations at t = 150 Myr. The colour coding of the pixels indicates the surface density. The spatial coverage of the primary MaNGA sample of 1.5 R_eff is indicated by a white circle. With added AGN feedback, the central circular motion is significantly altered by the outflows so that the gas would be seen as kinematically offset. With SN feedback alone, the circular motion remains dominant and we do not observe this feature. Right panel: Comparison between the line-of-sight velocity maps for stars and ionized gas at t = 150 Myr. Only gas and stars within 1.5 R_eff are included (in analogy to MaNGA), and the spatial resolution is matched to the MaNGA resolution at the mean redshift of the primary sample. For the AGN runs, the ionized gas is visibly offset from the stellar component, just like in the observed MaNGA galaxies.

We also investigated the effects of these AGN-driven outflows on star formation and found that AGN activity has a small but systematic effect on the central star formation rates (SFRs) for all set-ups explored, while substantial effects on the global SFR are only obtained with strong SNe and a sustained high-luminosity AGN with an isotropic wind. Our findings from the isolated galaxy set-up therefore indicate that AGN are unlikely to directly affect global dwarf SFRs. However, in realistic cosmological environments inflows are known to be important, especially for high-redshift dwarfs. It is hence possible that AGN-boosted outflows may prevent some of this cosmic 'pristine' gas reaching the dwarfs in the first place, providing a mechanism for indirect star formation regulation. This possibility will be addressed in a follow-up study using cosmological zoom-in simulations.

Horizon: Studying the origin and amplification of magnetic fields in galaxies

Magnetic fields permeate the Universe from cosmological to molecular cloud scales, but neither their origin nor the mechanism by which they grow in amplitude is pinned down. We have used DiRAC to make headway on this on two fronts. Firstly, in Martin-Alvarez et al. 2018, MNRAS, 479, 3348, we sought to understand whether hierarchical galaxy growth could amplify extremely weak (~10⁻¹⁹ Gauss) primordial magnetic fields to micro-Gauss levels in present-day galaxies. For this purpose, we ran a suite of constrained transport magnetohydrodynamical adaptive mesh refinement cosmological zoom simulations with different stellar feedback prescriptions. We found that cosmological accretion at high redshift excited a galactic turbulent dynamo, which amplified the magnetic field even in the absence of stellar feedback. Our convergence tests are the first in the literature to demonstrate the capture of the elusive turbulent dynamo in cosmological simulations of galaxies.
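The scale of the required amplification can be illustrated with a back-of-the-envelope e-folding count. The seed and present-day field strengths come from the text above; the eddy turnover time is an assumed, purely illustrative number, not a value from the simulations:

```python
import math

# Field strengths in Gauss: the seed value is from the text, the
# micro-Gauss target is a typical present-day galactic field strength.
B_seed = 1e-19
B_today = 1e-6

# A kinematic turbulent dynamo grows exponentially, B(t) = B_seed * exp(t / tau),
# so the number of e-folds needed is simply the log of the amplification factor.
n_efolds = math.log(B_today / B_seed)
print(f"amplification factor: {B_today / B_seed:.1e}")
print(f"e-folds required:     {n_efolds:.1f}")

# If one eddy turnover gives roughly one e-fold and a turnover takes
# ~10 Myr (an assumed, illustrative value), the growth phase lasts about:
tau_myr = 10.0
print(f"approximate growth time: {n_efolds * tau_myr:.0f} Myr")
```

Thirteen orders of magnitude of amplification thus correspond to only ~30 e-folds, which is why an efficient dynamo can plausibly bridge the gap within a galaxy's lifetime.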

Figure: The magnetic energy density at redshift 4 in a (150 kpc)³ piece of a cosmological volume. Using a new tracer algorithm (Katz et al. 2019), we have color-coded the magnetic energy density by origin, where green corresponds to primordial magnetic fields, red denotes fields injected by supernovae and blue indicates the interaction of the two. Primordial magnetic fields dominate the volume of the simulation, whereas supernovae-injected magnetic fields are associated with halos (Katz et al. 2019), where they may be amplified by a turbulent dynamo (Martin-Alvarez et al. 2018).

Secondly, in Katz et al. 2019, MNRAS, 484, 2620, we presented an algorithm to trace magnetic fields produced by different sources, ranging from primordial fields generated for example by inflation to magnetic fields injected by supernovae and black holes. The figure above displays the magnetic energy density, color-coded by origin (green corresponds to primordial magnetic fields, red denotes fields injected by supernovae, and blue indicates the interaction of the two), in one of our simulation outputs of a cosmological volume 150 kpc on a side at redshift 4. With this algorithm, we will be able to methodically explore the contributions of different sources to the magnetization of the Universe.

Turbulence, Shocks and Dissipation in Space Plasmas

In situ observations from solar and heliospheric spacecraft missions measure power spectra of the solar wind plasma and electromagnetic field fluctuations which show a power-law behaviour over several decades in frequency. This is interpreted as the manifestation of a turbulent cascade of energy from large fluid scales to small kinetic scales, characteristic of the motion of the ions and the electrons. Direct numerical simulations of turbulent plasmas are not only important for interpreting the nature of the solar wind fluctuations at large scales, but also a fundamental tool to understand how energy is channelled to protons and electrons. In particular, the hybrid particle-in-cell model, which treats the ions as particles and the electrons as a charge-neutralizing massless fluid, has proved to be the most appropriate and efficient method to follow the turbulent cascade down to sub-ion scales, retaining ion kinetic effects [1].

Figure 1. Left panel: Pseudocolor plot of the power of the perpendicular (with respect to the mean field) magnetic fluctuations when turbulence is fully developed in a 2D 4096×4096 simulation. Middle panel: The same as in the left panel, but for a 3D 512×512×512 simulation. Right panel: Omnidirectional 1D power spectra of magnetic field B, electric field E, ion bulk velocity u, parallel magnetic field Bz, and ion density n, from the 3D simulation. Power laws with different slopes represent theoretical predictions and/or observational results, as a reference. In the insets, a direct comparison of the 2D and 3D simulations shows remarkable agreement.

We have used DiRAC resources to perform a collection of state-of-the-art two-dimensional (2D, Fig. 1 left panel) and three-dimensional (3D, Fig. 1 middle) hybrid simulations of plasma turbulence, investigating the processes responsible for coronal heating and solar wind heating and acceleration. We have analysed, among other things, the spectral properties (Fig. 1, right), the spectral anisotropy [2], the intermittency properties, and the nature of fluctuations at kinetic scales. We also investigated the interplay between plasma turbulence and the proton firehose instabilities in a slowly expanding plasma, performing the first 3D hybrid simulation employing the expanding-box model, to take into account the solar wind expansion [3].
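The omnidirectional 1D spectra of the kind shown in Fig. 1 (right) can be sketched in a few lines: take the FFT power of a field, shell-average it over wavenumber magnitude, and fit a slope over an intermediate "inertial" range. The synthetic field and all parameters below are illustrative only, not taken from the hybrid simulations:

```python
import numpy as np

def omnidirectional_spectrum(field, n_bins=64):
    """Shell-average the 2D FFT power of `field` into a 1D spectrum P(k)."""
    n = field.shape[0]
    fk = np.fft.fft2(field) / field.size
    power = np.abs(fk) ** 2
    kx = np.fft.fftfreq(n) * n                      # integer wavenumbers
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kmag = np.sqrt(KX**2 + KY**2)
    bins = np.linspace(0.5, n // 2, n_bins + 1)
    k_centers = 0.5 * (bins[1:] + bins[:-1])
    spectrum, _ = np.histogram(kmag, bins=bins, weights=power)
    counts, _ = np.histogram(kmag, bins=bins)
    # Multiply by 2*pi*k to get the conventional omnidirectional normalization.
    spectrum = np.where(counts > 0, spectrum / np.maximum(counts, 1), 0.0)
    return k_centers, spectrum * 2 * np.pi * k_centers

# Build a synthetic isotropic field whose 1D spectrum should go as k^(-5/3).
rng = np.random.default_rng(1)
n = 256
kx = np.fft.fftfreq(n) * n
KX, KY = np.meshgrid(kx, kx, indexing="ij")
kmag = np.sqrt(KX**2 + KY**2)
kmag[0, 0] = 1.0                                    # avoid division by zero at DC
amp = kmag ** (-(5.0 / 3.0 + 1.0) / 2.0)            # 2D amplitude for a -5/3 1D slope
phases = np.exp(2j * np.pi * rng.random((n, n)))
field = np.real(np.fft.ifft2(amp * phases))

k, P = omnidirectional_spectrum(field)
mask = (k > 4) & (k < 40)                           # fit over an intermediate range
slope = np.polyfit(np.log(k[mask]), np.log(P[mask]), 1)[0]
print(f"fitted spectral slope: {slope:.2f}")
```

The same shell-averaging idea extends directly to 3D fields; in practice one also compensates the spectra (e.g. multiplying by k^{5/3}) to make departures from a power law visible.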

Our numerical results are being used to interpret in situ observations in the Earth’s magnetosheath and in the solar wind from the Magnetospheric Multiscale (MMS) mission [4], as well as to predict upcoming results from Parker Solar Probe (PSP) and Solar Orbiter.

More recently, we have produced high-resolution, high-cadence 2D and 3D data exploring the conditions encountered by PSP on its first perihelion. Analysis of the results is ongoing and will provide a wealth of information on many different aspects of plasma turbulence, to be compared directly with present and upcoming spacecraft observations.

[1] Franci et al., Solar Wind Turbulent Cascade from MHD to Sub-ion Scales: Large-size 3D Hybrid Particle-in-cell Simulations, ApJ 853(1):26, 2018
[2] Landi et al., Spectral anisotropies and intermittency of plasma turbulence at ion kinetic scales, submitted to ApJL, 2019
[3] Hellinger et al., Turbulence vs. fire hose instabilities: 3-D hybrid expanding box simulations, submitted to ApJ, 2019
[4] Franci et al., Predicting and interpreting data of solar wind turbulence with numerical modelling: hybrid simulations vs. MMS observations, in preparation, 2019

Longest ever simulation of a planet in a protoplanetary disc

Ragusa et al. (MNRAS, 474, 460; 2018) analysed the longest ever simulation of a planet in a protoplanetary disc, using the FARGO-3D code (Benítez-Llambay & Masset 2016) to follow a planet's evolution over a record 300,000 orbits. During this time the planet's eccentricity (blue in figure below) underwent a complex evolutionary variation as it exchanged angular momentum with the disc. The simulations demonstrate that a relatively small (factor of three) change in the disc mass has a major effect on the final eccentricity of the planet; the complex evolutionary pattern can be understood in terms of a superposition of precessing eccentric modes of the planet + disc system. These findings are relevant to explaining the significant eccentricity of the first hot Jupiter observed in a protoplanetary disc.
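The "superposition of precessing eccentric modes" picture can be illustrated with a toy secular model in which the planet's complex eccentricity is the sum of two uniformly precessing modes. The amplitudes and precession frequencies below are assumed values chosen for illustration, not fitted to the simulation:

```python
import numpy as np

# Toy Laplace-Lagrange-style model: the planet's complex eccentricity
# z(t) = e * exp(i * pericentre angle) as a sum of two precessing modes.
e1, e2 = 0.06, 0.04        # mode amplitudes (assumed, illustrative)
g1, g2 = 2e-4, 5e-5        # precession frequencies in rad/orbit (assumed)

t = np.linspace(0.0, 3e5, 5000)        # time in units of planetary orbits
z = e1 * np.exp(1j * g1 * t) + e2 * np.exp(1j * g2 * t)
ecc = np.abs(z)

# The scalar eccentricity beats between |e1 - e2| and e1 + e2 with
# period 2*pi / |g1 - g2| orbits, producing slow, complex-looking variation.
beat_period = 2 * np.pi / abs(g1 - g2)
print(f"eccentricity range: {ecc.min():.3f} to {ecc.max():.3f}")
print(f"beat period: {beat_period:.0f} orbits")
```

With mode frequencies this low, the beat period is tens of thousands of orbits, which is why a simulation spanning hundreds of thousands of orbits is needed to see the full secular cycle.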

Figure 1: Eccentricity e as a function of time for the light (left-hand panel) and massive (right-hand panel) cases. The blue curve shows the planet eccentricity, the green curve the disc eccentricity at R = 4.7 in the light case and at R = 5 (azimuthal averages) in the massive one, while the red curve is a global measurement of the disc eccentricity derived from the angular momentum deficit (AMD).

Rosotti & Juhasz (MNRAS 474, L32; 2018) proposed a novel way to probe the vertical temperature profile in protoplanetary discs by measuring the pitch angle of spiral structure in discs in both the infrared and submm. Their hydrodynamic simulations of a planet-induced spiral in a disc, using the PLUTO code (Mignone et al. 2007), were post-processed with the radiative transfer code RADMC-3D and demonstrated that spiral structure is significantly more tightly wound in the submm than in the infrared. This finding is directly linked to the fact that submm emission traces the cold mid-plane regions of the disc. This work means that multi-wavelength observations of spiral structure in real discs can be used as a 'dynamical thermometer' of disc mid-plane regions.

Winter et al (MNRAS 475, 552; 2018) presented long-timescale smoothed particle hydrodynamics simulations, using the GANDALF code (co-developed in house: Hubber et al 2018; MNRAS 473, 1603), of a remarkable young stellar system whose widely spaced components are associated with extremely extended far-infrared emission. These simulations demonstrated that the morphology of the emission and the kinematics of the stellar components can be explained through a dynamical history involving a previous star-disc encounter. This successful reproduction of an observed system provides some of the first evidence that young stars indeed form dynamically complex few-body systems, as suggested by star formation simulations.

Galactic Archaeology in the Gaia Era

On 25 April 2018, ESA's Gaia mission made its second data release (Gaia DR2). Inspired by our N-body simulations of a Milky Way-like disk in this and our previous projects, Kawata et al. (2014, MNRAS, 443, 2757) and Kawata et al. (2018b, MNRAS, 473, 867), we studied the rotational and vertical velocity structure of the Galactic disk as a function of Galactocentric radius, across the radial range 5 < Rgal < 12 kpc, using the Gaia DR2 data. We find many diagonal ridge features (Fig. 1). We have also found radial wave-like oscillations of the peak of the vertical distribution in the Gaia DR2 data.

Figure 1: Normalized distribution of the rotation velocity for the selected Gaia DR2 stars with the RVS radial velocity data (upper) and our all selected Gaia DR2 stars (lower) as a function of the Galactocentric radius. The vertical dashed lines show the position of the Scutum, Sagittarius, Local, and Perseus spiral arms from left to right, which are calculated from Reid et al. (2014, ApJ, 783, 130). The vertical solid line is the assumed solar radius. The white lines highlight the identified ridge features.

We noticed that the diagonal ridge features found in Gaia DR2 look qualitatively similar to what we see in our N-body/SPH simulations (an example is shown in Fig. 2). The simulation shows transient spiral arms, and we considered that these transient, winding spiral arms act as perturbers in the disk to make the ridge structures. We followed up this idea in Hunt, Hong, Bovy, Kawata & Grand (2019, MNRAS, 481, 3794), and using toy models of transient and winding spiral arms, we showed that the rotation velocity ridge features can be explained by them.

Figure 2: The rotation velocity vs. the galactocentric radius for the star particles in N-body/SPH simulation from Kawata et al. (2014). The particles are selected from a similar region of the galactic disk to where the Gaia DR2 data are analysed in the left panel.

2017 Highlights

Some of the Research highlights from our users in 2017:

2017 list of publications


New Models give insight into the heart of the Rosette Nebula

New research, led by the University of Leeds, offers an explanation for the discrepancy between the size and age of the Rosette Nebula's central cavity and that of its central stars.

The Rosette Nebula (Credit Nick Wright, Keele University)

The Rosette Nebula in the Milky Way Galaxy, roughly 5,000 light-years from Earth, is known for its rose-like shape and distinctive hole at its centre. The nebula is an interstellar cloud of dust, hydrogen, helium and other ionised gases with several massive stars found in a cluster at its heart.

Stellar winds and ionising radiation from these massive stars affect the shape of the giant molecular cloud. But the size and age of the cavity observed in the centre of the Rosette Nebula is too small when compared to the age of its central stars — something that has puzzled astronomers for decades.

Through computer simulations run in part on DiRAC resources, astronomers at Leeds and at Keele University have found that the Nebula likely formed in a thin, sheet-like molecular cloud rather than in a spherical or thick disc-like one, as some photographs may suggest. A thin disc-like structure of the cloud, focusing the stellar winds away from the cloud's centre, would account for the comparatively small size of the central cavity.

Study lead author, Dr Christopher Wareing, from the School of Physics and Astronomy said: “The massive stars that make up the Rosette Nebula’s central cluster are a few million years old and halfway through their lifecycle. For the length of time their stellar winds would have been flowing, you would expect a central cavity up to ten times bigger.”

A projection of the simulated molecular cloud and nebula along a specific line of sight, and a slice through the simulation of the Rosette Nebula. (Credit: C. J. Wareing et al.)

“We simulated the stellar wind feedback and formation of the nebula in various molecular cloud models including a clumpy sphere, a thick filamentary disc and a thin disc, all created from the same low density initial atomic cloud.

“It was the thin disc that reproduced the physical appearance – cavity size, shape and magnetic field alignment — of the Nebula, at an age compatible with the central stars and their wind strengths.”

More information on this result can be found here, the STFC Press Release can be found here, and Alt-metrics for the paper, indicating it has had a very high level of attention, can be found here.


Gravitational Waves 2017

2017 saw a series of impressive discoveries during the LIGO-Virgo second observing run. Early in January we made our third binary-black-hole observation (GW170104). Another black-hole signal arrived later in June (GW170608).

In August came two impressive observations. The first followed the inclusion of the French-Italian Virgo detector in the observing run: on August 14th came the first binary-black-hole observation made with three detectors. A third detector made it possible to accurately triangulate the position of the source on the sky, a dramatic improvement over previous observations.

Only three days later, on August 17th, came the first observation of gravitational waves from two neutron stars spiralling together. This observation was astounding in several ways. The source was only 40 Mpc away, ten times closer than the breakthrough binary-black-hole merger observed in 2015; this proximity led to the strongest signal we have yet observed. The signal made it possible to provide the strongest support yet for Einstein's general relativity, an independent measurement of the Hubble constant, and the first strong-field constraints on the equation of state of neutron stars.
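The distance argument is easy to make quantitative: gravitational-wave strain amplitude falls off as 1/distance, so a source roughly ten times closer yields roughly ten times the strain, all else being equal. The 2015 distance below is an approximate published value used purely for illustration (the intrinsic amplitude also depends strongly on the source masses):

```python
# Strain amplitude h scales as 1/d, so the gain from proximity alone is
# just the ratio of distances. Values are approximate and illustrative.
d_2015_mpc = 410.0   # approximate distance of the 2015 binary-black-hole merger
d_2017_mpc = 40.0    # distance of the binary-neutron-star merger (from the text)

amplitude_ratio = d_2015_mpc / d_2017_mpc
print(f"strain gain from distance alone: {amplitude_ratio:.1f}x")
```

In practice the neutron-star system's much lower masses reduce its intrinsic amplitude, but the tenfold proximity still made it the loudest signal observed to that date.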

Beyond that, the results of the merger – a gamma-ray burst and afterglow – were observed by many EM telescope teams. This marks the beginning of multi-messenger astronomy, and the implications for astronomy and astrophysics are still being understood.

Fig. 1: Time-frequency map of two neutron stars spiralling together.

Integral to all of these GW measurements were the theoretical models of binary-black-hole signals that were developed with the aid of numerical simulations performed on DiRAC. In the weeks following the first binary-neutron-star observation, these models were extended to include neutron-star tidal terms. In preparation for the next observing run, due to begin in early 2019, they have also been extended to an approximate model of subdominant harmonics, and this model is now being more precisely tuned to numerical simulations.



Shining Light on the Structure of Hadrons

Understanding matter at its deepest level is complicated by the fact that the fundamental constituents known as quarks are hidden from us. Instead of being available for experimental scrutiny, as electrons are, quarks are always bound together into particles called hadrons. We have to infer the properties of quarks, and those of the strong interaction that binds them, from experimental studies of hadrons. A window into the hadron is provided by striking it with a photon, the quantum carrier of the electromagnetic interaction. The photon can penetrate the hadron to interact with the quarks inside, but the bound-state nature of the hadron is revealed by the fact that the struck quark must redistribute the photon's momentum between its fellow constituents for the hadron to change direction, as momentum conservation requires (see Fig. 1). The hadron's electromagnetic form factor, F(Q²), parameterises this bound-state behavior as a function of Q², the squared momentum transferred from the photon. New experiments at Jefferson Lab in the USA will shortly be taking data to determine the electromagnetic form factors of hadrons called pi and K mesons up to much larger values of Q² than has been possible before. The aim is to reach a regime in Q² where the strong interaction becomes relatively weak and F(Q²) is expected to show simple behavior predicted by QCD, the theory of the strong interaction.

Fig 1. When a pi meson interacts with a photon (wavy blue line), momentum must be redistributed via gluon exchange (curly blue line).

However, it is not at all clear where this regime begins. Hadron electromagnetic form factors can also be calculated using the numerical techniques of lattice QCD. The HPQCD collaboration has recently been able to reach a value of Q² of 7 GeV² for the determination of F(Q²) for a ‘pseudopion’, a particle like the pi meson but made of strange quarks instead of up and down quarks.

Fig. 2. Lattice QCD results from arXiv:1701.04250 compared to a simple pole model at low Q² and leading-order perturbative QCD at high Q².

Using strange quarks improves numerical speed and statistical accuracy, while still allowing a comparison with high-Q² expectations. We find Q²F(Q²) to be flat in Q², but in clear disagreement with the simple high-Q² result from perturbative QCD (see Fig. 2).
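The comparison in Fig. 2 can be caricatured with a minimal sketch: a vector-dominance-style monopole form factor against the flat leading-order pQCD asymptote Q²F(Q²) ≈ 16π·αs·f². All parameter values below are assumed round numbers for illustration, not the paper's fits:

```python
import math

# Illustrative parameters (assumptions, not fitted values):
m_pole = 1.02   # GeV; pole mass, taken ~ the phi-meson mass for an s-sbar state
f_ps = 0.18     # GeV; decay constant of the strange 'pseudopion' (assumed)
alpha_s = 0.3   # strong coupling at a few GeV^2 (assumed, roughly)

def F_monopole(q2):
    """Vector-dominance-style monopole form factor, normalised so F(0) = 1."""
    return 1.0 / (1.0 + q2 / m_pole**2)

def q2F_pqcd(q2):
    """Leading-order pQCD asymptote for Q^2 F(Q^2), flat up to alpha_s running."""
    return 16.0 * math.pi * alpha_s * f_ps**2

for q2 in (1.0, 4.0, 7.0):
    print(f"Q^2 = {q2:>4.1f} GeV^2: Q^2*F_pole = {q2 * F_monopole(q2):.3f}, "
          f"pQCD asymptote = {q2F_pqcd(q2):.3f}")
```

Even with these rough numbers, Q²F(Q²) from the monopole flattens at a value well above the pQCD asymptote at Q² of a few GeV², mirroring the qualitative tension the lattice results exhibit.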

Calculations of F(Q²) for pi and K mesons are underway to make direct predictions for the JLab experiments. Darwin, and now CSD3, at Cambridge have proved ideal for this work since we can store intermediate quark propagators for re-use at multiple Q² values.


Stellar Structure and Nucleosynthesis

Movie showing the evolution of the velocity magnitude in a 1024³ resolution run of a carbon-burning shell (video credit: Dr Raphael Hirschi). The movie shows the turbulence in the central convective zone and gravity waves in the stable regions above and below. It also nicely shows the Kelvin-Helmholtz (shear) instability developing at the top boundary due to the fast horizontal motion of the convective flow as it is deflected and decelerated by the boundary. Results from Cristini et al. 2016, in prep. A very high resolution movie of the C-shell is also available.


COSMOS Consortium: Flashes of light on dark matter


Figure 1: The “cosmic web”, image credit Matteo Viel.

Simulations of the dense cosmic forest in intergalactic space illuminated by distant quasars have revealed new insights into the properties of dark matter, recently published in Physical Review Letters. The study and its results further support the theory of Cold Dark Matter, which is composed of particles that move very slowly, while, for the first time, they highlight an incompatibility with another model, Fuzzy Dark Matter (FDM), in which the dark matter particles have larger velocities.

In order to study these dark matter properties, COSMOS Consortium investigators analysed the interaction of the “cosmic web” – a network of filaments made up of gas and dark matter present throughout the Universe – with the light coming from very distant quasars and galaxies. Photons interacting with the hydrogen of the cosmic filaments create many absorption lines, called the “Lyman-alpha forest”. The atomic interaction of photons with the hydrogen in the cosmic filaments is used to study the properties of the cosmos, and of dark matter, at enormous distances. On the basis of the results obtained, the investigators were able to infer some of the characteristics of the particles that compose the dark matter. In particular, the evidence showed for the first time that the particle mass required by the FDM model is not consistent with the Lyman-alpha forest observed by the Keck telescope (Hawaii, US) and the Very Large Telescope (European Southern Observatory, Chile). The data instead support the scenario envisaged by the cold dark matter model.

GRChombo goes public!

In February 2018, GRChombo, a new COSMOS Consortium code for numerical general relativity simulations targeted at theoretical physics applications, was made open source. It is developed primarily by a consortium of physicists from Argonne National Laboratory, Cambridge University, Göttingen University, King’s College London and Queen Mary University of London, in close collaboration with the COSMOS Intel Parallel Computing Centre at DAMTP Cambridge. GRChombo is maintained by a collaboration of numerical relativists and cosmologists with a wide range of research interests, from early universe cosmology to astrophysics and mathematical general relativity. Further information can be found at www.grchombo.org, including details of how to obtain and use the code. The GRChombo team hopes that this will be a useful resource for other researchers wishing to investigate the effects of strong gravity in fundamental physics. The next release of GRChombo (version 1.1) will shortly incorporate in-situ or “on the fly” visualization capabilities for AMR grids, which were developed as part of the Software-Defined Visualization collaboration involving Intel, TACC and COSMOS IPCC (see sdvis.org).



A new model of black hole growth and AGN feedback: an analytic model that explains the phenomenology seen in the EAGLE simulations as well as the observed colour bimodality in the galaxy population. In galaxies less massive than 3×10¹⁰ M⊙, young stars and supernovae drive a high-entropy outflow. Above a halo mass of 10¹² M⊙ the outflow ceases to be buoyant, and star formation is unable to prevent the build-up of gas in the central regions, triggering a strongly non-linear response from the black hole. Its accretion rate rises rapidly, disrupting the incoming supply of cool gas and starving the galaxy of fuel. The host galaxy makes a transition to the red sequence.

Large scale distribution of the hot, X-ray emitting gas (purple) and stars (yellow) in one of the C-EAGLE cluster regions. The filamentary structure surrounding the cluster halo can clearly be seen. The image is approximately 10 Mpc across. Credit: David Barnes and the Virgo Consortium.

A new theory for the nature of the galaxies that reionised the Universe, based on the EAGLE simulations, in which bursts of star formation can create channels through which ionising photons escape. It was shown that carbon-enhanced metal poor (CEMP) stars are the siblings of the stars that reionised the Universe. Today, these may be found in the intracluster light in groups and clusters.

Construction and analysis of synthetic HI rotation curves of dwarf galaxies in the APOSTLE cosmological simulations of Local Group analogues. Using the same techniques employed by observers (e.g. tilted ring modelling), for each individual galaxy a large diversity of rotation curves is obtained, depending on the orientation of the line of sight. These variations arise from non-circular motions in the gas. As a result, the presence of a core in the dark matter distribution is sometimes incorrectly inferred when there is actually a central cusp, as predicted in the simplest Λ-CDM models.

Completion of the C-EAGLE set of 30 galaxy cluster resimulations using the EAGLE code (including examples with self-interacting dark matter). Their X-ray properties, such as the spectroscopic temperature, soft-band luminosity and metal content, and their Sunyaev-Zel’dovich properties, are in agreement with the observed relations.

The present-day cosmic large scale structure on a scale of 25 Mpc. Circles indicate dark matter haloes resimulated in E-MOSAICS; the images on the right are zooms showing the distribution of gas, stars and globular clusters in one such galaxy. The latter are coloured to indicate whether they formed in-situ within the galaxy or were brought in by accreted satellites. The images at the bottom show the evolution of the galaxy from the epoch of reionisation at z=10 to the present day.

A new implementation of radiation feedback on galactic scales, whereby star formation and the effects of the stellar radiation field on the evolution of the gas are calculated self-consistently.

The first simulations that self-consistently follow the formation and evolution of star clusters and their host galaxy in a full cosmological setting. These “E-MOSAICS” simulations graft the MOSAICS semi-analytic model of star cluster formation and evolution into the EAGLE simulations (see Figure 1).

Simulations of galaxy formation using the EAGLE code in models where the dark matter consists of (warm) sterile neutrinos. Large regions of the sterile neutrino parameter space can be ruled out because not enough satellite galaxies are produced.

An explanation for the mass discrepancy-radial acceleration relation (MDAR) of disc galaxies in the context of Λ-CDM and its verification in the EAGLE and APOSTLE hydrodynamics simulations, demonstrating that the MDAR does not require MOND.

Figure 1: Projections around one of our haloes at z~6.4, showing (top) gas column density, (middle) minimum temperature, and (bottom) hydrogen ionization fraction, showing clear self-shielding.

Using artificial neural networks to constrain the halo baryon fraction during reionization. Radiative feedback from stars and galaxies has been proposed as a potential solution to many of the tensions with simplistic galaxy formation models based on Λ-CDM, such as the faint end of the UV luminosity function. The total energy budget of radiation could exceed that of galactic winds and supernovae combined, which has driven the development of sophisticated algorithms that evolve both the radiation field and the hydrodynamical response of gas simultaneously, in a cosmological context. In Sullivan, Iliev and Dixon (2018) we probed self-feedback on galactic scales using the adaptive mesh refinement, radiative transfer, hydrodynamics, and N-body code Ramses-RT. Unlike previous studies which assume a homogeneous UV background, we self-consistently evolved both the radiation field and gas to constrain the halo baryon fraction during cosmic reionization. We demonstrated that the characteristic halo mass with mean baryon fraction half the cosmic mean, Mc(z), shows very little variation as a function of mass-weighted ionization fraction. Furthermore, we found that the inclusion of metal cooling and the ability to resolve scales small enough for self-shielding to become efficient leads to a significant drop in Mc when compared to recent studies. Finally, we developed an Artificial Neural Network that is capable of predicting the baryon fraction of haloes based on recent tidal interactions, gas temperature, and mass-weighted ionization fraction. Such a model can be applied to any reionization history, and trivially incorporated into semi-analytical models of galaxy formation.
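As a sketch of the idea (not the paper's architecture or training data), a small feed-forward network can be trained by plain gradient descent to regress a synthetic "baryon fraction" from three standardized halo properties; everything here, from the network size to the fake data, is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features: [tidal force, ionization fraction, temperature] (standardized)
X = rng.normal(size=(512, 3))
# Synthetic target: a smooth nonlinear "baryon fraction" in [0.3, 0.7] (made up)
y = (0.5 + 0.2 * np.tanh(X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]))[:, None]

# One hidden tanh layer; weights initialised at random.
W1 = rng.normal(scale=0.5, size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.05
losses = []
for step in range(500):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared-error loss
    g_pred = 2 * err / len(X)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)      # tanh'(x) = 1 - tanh(x)^2
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    # Gradient-descent update
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"initial loss: {losses[0]:.4f}, final loss: {losses[-1]:.4f}")
```

Once trained on simulation haloes, such a regressor can be evaluated for any reionization history, which is what makes it attractive for incorporation into semi-analytical models.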

Figure 2: Halo baryon fraction, in units of the cosmic mean. The colour bars show (top to bottom) the strongest tidal force exerted over the last dynamical time, mass-weighted hydrogen ionization fraction, mass-weighted gas temperature (in units of the virial temperature) and the virial ratio (ratio of kinetic to potential energies). The dashed lines show the mean baryon fraction in bins of logarithmic halo mass, where the error bars correspond to the standard deviation. We use the existing halo catalogue data, as presented in the left-hand column, for our network predictions. The ANN is trained using only our fiducial simulation, and is then tested against our other models, showing excellent agreement.


Breaking resonances between migrating planets

One of the major breakthroughs in astronomy in the last decade has been the discovery that extra-solar planets are ubiquitous. We now have a variety of ways of detecting exoplanets, and our telescopes have found thousands of planetary systems. Perhaps the most surprising discovery was that most known extra-solar planetary systems look nothing like the Solar System: we see giant planets orbiting their stars in just a few days; planets on highly eccentric orbits; and even systems where multiple rocky or icy planets are packed into very closely-spaced orbits. The latter type of system, discovered by the Kepler mission, is particularly interesting, as these planets orbit too close to their host stars to have formed in their current locations. The conventional picture of planet formation suggests that such planets formed much further from their stars, and then “migrated” inwards due to gravitational interactions with their parent protoplanetary gas disc. Planets of different mass migrate at different rates, however, and in multi-planet systems this process invariably leads to neighbouring planets becoming trapped in so-called mean-motion resonances. These resonances are special locations in planetary systems, where adjacent planets’ orbital periods are integer multiples of one another (e.g., a 2:1 resonance occurs when the outer planet has exactly twice the period of the inner planet). When in such a resonance the planets feel one another’s gravity much more strongly than they otherwise would. Planets which become “trapped” in these locations are therefore expected to remain in resonance indefinitely, but this is at odds with what we see in real systems: most exoplanets in multiple systems are not in resonance with their neighbours. This stark discrepancy between theory and reality has led astronomers to question the very basics of our understanding of how planetary systems form.
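The location of a mean-motion resonance follows directly from Kepler's third law, P² ∝ a³, which fixes the semi-major axis ratio for any period ratio. A small sketch, with a purely hypothetical inner planet location:

```python
# Kepler's third law: P^2 is proportional to a^3, so for a given orbital
# period ratio the semi-major axis ratio is (P_out/P_in)^(2/3).
def resonance_axis_ratio(p_outer_over_p_inner):
    """Semi-major axis ratio a_out/a_in for a given orbital period ratio."""
    return p_outer_over_p_inner ** (2.0 / 3.0)

# Hypothetical example: an inner planet at 0.1 au (illustrative value only).
a_inner_au = 0.1
for ratio, name in [(2.0, "2:1"), (1.5, "3:2")]:
    a_out = a_inner_au * resonance_axis_ratio(ratio)
    print(f"{name} resonance: a_out = {a_out:.4f} au "
          f"(axis ratio {resonance_axis_ratio(ratio):.3f})")
```

The 2:1 resonance thus sits at an orbital separation only ~59% wider than the inner orbit, which is why migrating planets so readily encounter it.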

Figure 1: Numerical simulation of a pair of “super-Earth” planets migrating through a protoplanetary disc, performed on the DiRAC-2 Data Analytic cluster. The simulation followed the planets’ migration into, and eventually out of, the 2:1 mean-motion resonance, showing that such resonances can be readily broken. (Figure adapted from Hands & Alexander 2018.)

In the last few years astronomers have suggested a wide variety of ways around this problem, but as yet no solution has gained acceptance. The Theoretical Astrophysics Group in Leicester has recently used high-resolution numerical simulations to study how planets in multiple systems migrate through their parent protoplanetary discs. The computational power provided by DiRAC allowed us to follow the evolution of multi-planet systems for many thousands of orbits, capturing the planet-disc and planet-planet interactions in exquisite detail (see Figure 1). These simulations show that under certain conditions (conditions which are prevalent in many planet-forming discs) the gravity between the disc and the planet can overcome the gravity between the planets, breaking the system out of the resonance. This mechanism appears to operate efficiently, with planets escaping resonance after as little as a thousand years. This potentially allows multi-planet systems to migrate into their observed tightly-packed orbits without becoming “trapped” in resonances, and offers an elegant answer to the question of how to assemble the tightly-packed planetary systems observed by Kepler.


Extreme QCD: Quantifying the QCD Phase Diagram

Our collaboration uses the DiRAC supercomputers to simulate the interaction of quarks, the fundamental particles which make up protons, neutrons and other hadrons. The force which holds quarks together is Quantum ChromoDynamics, “QCD”. We are particularly interested in the behaviour of QCD as the temperature increases to billions and even trillions of degrees Celsius. These conditions existed in the first moments after the Big Bang, and are recreated (on a much smaller scale!) in heavy-ion collision experiments at CERN (near Geneva) and the Brookhaven laboratory (in New York State).

The intriguing thing about QCD at these temperatures is that it undergoes a substantial change in nature. At low temperatures QCD is an extremely strong, attractive force, so it is effectively impossible to prise quarks apart; above the “confining” temperature, Tc, it is much weaker, the quarks become virtually free, and the hadrons they once formed “melt”.

We study this effect by calculating the masses of protons and other hadrons and those of their “parity partners”, which are like their mirror-image siblings. At low temperatures, hadrons have lower masses than their partners. Above Tc, because QCD acquires more symmetry, hadrons and their partners may become equal in mass.

Figure 1: Plot of the “R”-value as function of temperature.

Our results are summarised in the plot, where we show the “R” value, which measures how close the masses of the partners are: R=1 corresponds to very different masses, and R=0 means that they are identical. For a variety of hadrons, including the proton (N), delta (Δ) and omega (Ω), we see that as the temperature increases above Tc, R approaches zero, implying that the parity partners have nearly equal masses. This confirms a long-held but previously unconfirmed suspicion regarding the nature of QCD. Drilling deeper into the data, it is clear from the figure that the Ω partners are the least close in mass. This can be understood because the constituent quarks in the Ω are “strange” quarks, which are significantly heavier than the virtually massless “up” and “down” quarks inside the N and Δ. This extra mass means that the symmetry mechanism mentioned above does not work as well.
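A common way to build such a measure (the exact definition used in the lattice analysis may differ in detail, e.g. it may be constructed directly from correlation functions) is a ratio of the positive- and negative-parity partner masses, m₊ and m₋:

```latex
R = \frac{m_- - m_+}{m_- + m_+}
```

With this convention R vanishes when the partners are degenerate and approaches 1 when the negative-parity state is much heavier, matching the behaviour described in the text.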


Hadron structure

Hadrons fall into two classes: baryons, such as the familiar proton, and mesons, such as the pion. Deep inelastic scattering provided the first direct evidence that baryons really are made up of much smaller constituents, “partons”, i.e. quarks and gluons. Investigating the structure of hadrons has become a vital component of many experiments at (for example) JLAB and the LHC.

Figure 1: First results of the method leading to a reconstruction of a parton distribution function (left). Ratio of the electric and magnetic form factors as function of momentum transfer (right).

In [1] we have proposed an alternative method of measuring the parton distribution in hadrons. Parton distribution functions tell us how the momentum of a nucleon is shared out between its partons. The traditional lattice method of measuring parton distributions proceeds through the operator product expansion. In that approach lattice operators are constructed which correspond to moments of the distribution – each moment needs a different operator, with its individual renormalisation constant. Our new approach directly involves simulating scattering on hadrons – in essence we are repeating a deep inelastic scattering experiment on the lattice. First results of the method leading to a reconstruction of a parton distribution function (left panel) are shown above.
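For reference, the moments referred to above are the integrals of the parton distribution q(x) weighted by powers of the momentum fraction x:

```latex
\langle x^n \rangle = \int_0^1 \mathrm{d}x \; x^n \, q(x)
```

Each such moment corresponds to a distinct lattice operator with its own renormalisation constant, which is why the traditional approach becomes increasingly cumbersome at higher n.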

Other investigations are designed to determine the distribution of electromagnetic currents in hadrons due to the presence of quarks. These can be parameterised by two functions (electric and magnetic form factors). For most of the second half of the 20th century, it was believed that they both depended on momentum in the same way, so that their ratio was constant. However, recent experiments have shown that at higher momentum transfer the ratio decreases and perhaps passes through zero. To investigate this theoretically is difficult (due to the large momentum transfer). We have developed a method [2] that allows this range to be considerably extended, approaching the region where the sign might change, as shown in the right panel.

Bibliography: [1] A. J. Chambers et al., Phys. Rev. Lett. 118 (2017) 242001. [2] A. J. Chambers et al., Phys. Rev. D96 (2017) 114509.


Lattice QCD tests Standard Model at high precision

Figure 1: Fermilab g-2 Experiment (Photo: Wikipedia/Glukicov)

We have recently studied the properties of the electron’s heavier sibling, the muon. Its anomalous magnetic moment, aμ, quantifies how vacuum fluctuations contribute when it interacts with an external magnetic field. With an accuracy of order 1 ppm, aμ is one of the most precisely determined quantities in both experimental and theoretical particle physics. Since its value is sensitive to contributions from as-yet-unknown new physics, the persistent 3−4σ tension between experiment and theory is generating a lot of interest within the particle physics community. Is this really a first sign of new and yet unknown physics? Two new experiments, at Fermilab (USA, see Figure 1) and J-PARC (Japan), are trying to address this question. Both efforts are expected to reduce the uncertainty on the experimental value by a factor of around four, down to 140 ppb. This puts much pressure on the theory community to match this precision – only the combination of theory and experiment will be able to answer this important question.
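For reference, the anomalous magnetic moment is the fractional deviation of the muon's gyromagnetic ratio g_μ from the value g = 2 predicted by the Dirac equation:

```latex
a_\mu = \frac{g_\mu - 2}{2}
```

Vacuum fluctuations shift g_μ slightly above 2, and any undiscovered particles would contribute an additional shift; this is why aμ is such a sensitive probe of new physics.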

Figure 2: Summary of results for the HVP [4] – RBC/UKQCD 2018 are the results discussed in the text – note the tension with the “no new physics” data point. Green points are other lattice determinations; magenta points are phenomenological determinations.

On the theory side the error budget is dominated by hadronic uncertainties, in particular uncertainties on predictions for vacuum polarisation effects due to the strong interaction. DiRAC has enabled the first-ever computation of these effects from first principles, i.e. from the Standard Model Lagrangian, by allowing simulations of lattice Quantum Chromodynamics that include the effects of Quantum Electrodynamics. In a series of papers [1-4], researchers from the Universities of Edinburgh and Southampton, together with their collaborators in the US, have developed the theoretical understanding and the numerical and algorithmic techniques needed to complete this ambitious, state-of-the-art programme. In lattice Quantum Chromodynamics one performs the calculation by constructing a discrete four-dimensional space-time grid (the lattice) on which one numerically solves the QCD equations of motion. Such lattice QCD simulations are the only known method of addressing these questions without relying on ad hoc assumptions. This type of computation crucially relies on access to the world’s fastest parallel supercomputers, such as DiRAC’s Extreme Scaling system.

There is still some way to go before a full theoretical computation matches experimental precision. The DiRAC researchers have, however, developed a method that gets very close to the required precision by combining complementary experimental data with their numerical simulations. It will be exciting to compare with the new experimental results expected by the end of 2018!

Bibliography: [1] Blum et al., Calculation of the hadronic vacuum polarization disconnected contribution to the muon anomalous magnetic moment, Phys. Rev. Lett. 116 (2016) 232002. [2] RBC/UKQCD Collaboration, Blum et al., Lattice calculation of the leading strange quark-connected contribution to the muon g−2, JHEP 1604 (2016) 063. [3] Boyle et al., Isospin breaking corrections to meson masses and the hadronic vacuum polarization: a comparative study, JHEP 1709 (2017) 153. [4] Blum et al., Calculation of the hadronic vacuum polarization contribution to the muon anomalous magnetic moment, arXiv:1801.07224.


Black Hole highlight
Modelling black hole spins and jets in the new era of gravitational wave astronomy

Active galactic nuclei (AGN), with powerful nuclear emission that can outshine the entire host galaxy, represent some of the most extreme physical environments in the Universe. At the heart of these systems reside rapidly growing supermassive black holes (SMBH). They act as efficient gravity engines releasing vast amounts of energy through accretion, over most of cosmic time. It is believed that these energy outbursts may change not only the properties of gas in the vicinity of the AGN, but transform the whole host galaxy, expelling material all the way into intergalactic space.

Figure 1: Column density rendering of a self-gravitating gaseous disc surrounding an equal-mass, 10^6 solar-mass SMBH binary. Gas streams excited by the time-varying gravitational potential of the binary develop from the cavity edge to the SMBHs. These streams provide fresh material for accretion and affect the binary dynamics through gravitational torques, extracting angular momentum and reducing the SMBH separation. Simulations have been performed on DiRAC supercomputers at the Universities of Cambridge, Leicester and Durham.

There are, nonetheless, many outstanding questions about the connection between SMBHs and their host galaxies, and numerical simulations have a major role in advancing our knowledge and providing testable theories that future observations may confirm or rule out. For example, it is well established that accretion discs play a fundamental role in modulating mass accretion onto SMBHs and driving their spin evolution, with profound implications for relativistic jets and gravitational waves from coalescing SMBH binaries. Their contribution is, however, customarily neglected in large-scale galaxy formation simulations because of its inherent complexity.

Researchers at the Institute of Astronomy (IoA) and the Kavli Institute for Cosmology (KICC) at the University of Cambridge have taken a major step towards solving this issue by developing a novel numerical model within the moving-mesh code AREPO, which self-consistently describes mass feeding through an accretion disc, governing the evolution of both the mass and spin of a SMBH. In a recently submitted paper (Fiacconi, Sijacki, Pringle, MNRAS submitted, arXiv:1712.00023), they showed that SMBHs in spiral galaxies, like our own Galaxy, are expected to be rapidly spinning, while SMBHs in more massive elliptical galaxies likely have lower spins. IoA/KICC researchers are now exploring the pairing of SMBH spins in binaries (see Fig. 1), which will represent the main cosmological source of gravitational waves for the LISA space observatory (launch 2034).

SMBH spins are thought to be closely related to the jet phenomena which are often observed to inflate giant lobes of relativistic plasma in galaxy clusters. Understanding the propagation and interaction of jets with their surroundings by means of hydrodynamical simulations is challenging because of the large dynamic range involved, spanning over 8 orders of magnitude in spatial scales. Building on an earlier implementation of a super-Lagrangian refinement scheme that allows unprecedented spatial resolution close to SMBHs (Curtis & Sijacki, 2015, MNRAS, 454, 3445), IoA/KICC researchers have developed a new jet model for AREPO capable of injecting realistic, highly beamed jets (see Fig. 2; Bourne & Sijacki, 2017, MNRAS, 472, 4707). They found that the inflation of lobes can heat the intra-cluster medium (ICM) through a number of processes, although jets seem to be unable to drive significant turbulence within the ICM, in agreement with the gas kinematics observed by the Hitomi satellite in the centre of the Perseus cluster.

Figure 2: Simulated AGN-driven jet propagating through the ICM. Each panel shows a different gas property in a slice through the jet. Also shown in the middle panel is the Voronoi cell structure, highlighting the wide dynamic range afforded by the super-Lagrangian refinement scheme: smallest cell size is ~10pc while the size of the computational domain is 20Mpc. Simulations have been performed on DiRAC supercomputers at the Universities of Cambridge, Leicester and Durham.

The next immediate aim is to combine the new models of accretion and jet feedback, as jet power and direction are expected to be directly related to the BH spin. With the upcoming Hitomi replacement, X-ray Astronomy Recovery Mission (XARM, launch 2021), and future Advanced Telescope for High Energy Astrophysics (Athena, launch 2028) on the horizon, IoA/KICC researchers’ simulations on DiRAC supercomputers will provide testable predictions and help interpret these new observations.


Exeter: The formation of massive stars

Figure 1: Density cut through the simulation. The colour scale shows density (scaled between 10^−19 and 10^−14 g cm^−3). The arrows show the direction of the velocity field, and their colour the speed (between 0 and 100 km/s).

Stars form as molecular clouds gravitationally collapse and fragment. The formation of the most massive stars (generally understood to be those with masses greater than twenty times the mass of the Sun) is complicated by the presence of the intense protostellar radiation field, which produces outward forces which may hinder accretion and even lead to outflows. Incorporating the effects of the radiation field on the gas is a complex problem, involving a combination of microphysical effects (dust heating, scattering, ionisation) that require a polychromatic treatment of a radiation field.

Figure 2: The line profile (upper left), integrated intensity map (upper right in Jy km/s/pixel), moment 1 map (lower left) and position velocity diagram (lower right in mJy/pixel), for the CH3CN J = 13 → 12, K = 3 line.

At Exeter we have developed a novel radiation hydrodynamic (RHD) code that treats the problem using a Monte Carlo method, splitting the radiation field into many millions of photon packets that propagate through the computational domain, undergoing absorptions, scatterings and re-emissions. We have run this code on DiRAC’s Complexity system to simulate the growth of a massive star. We find that the star forms via stochastic disc accretion and produces fast, radiation-driven bipolar cavities. The evolution of the envelope infall rate and the accretion rate on to the protostar are broadly consistent with observational constraints. After 35 kyr the star has a mass of 25 solar masses and is surrounded by a 7 solar-mass Keplerian disc of 1500 au radius (see Figure 1). Once again these results are consistent with those from recent high-resolution studies of discs around forming massive stars.
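The quoted disc parameters imply orbital speeds of a few km/s at the disc edge. As a quick back-of-the-envelope check (a sketch using standard physical constants, not the RHD code itself), the Keplerian speed v = √(GM/r) gives:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

M_star = 25 * M_SUN    # protostellar mass quoted in the text
r_disc = 1500 * AU     # disc radius quoted in the text

# Keplerian orbital speed at the outer edge of the disc
v_kep = math.sqrt(G * M_star / r_disc)
print(f"v_Kep ~ {v_kep / 1e3:.1f} km/s")
```

The result, roughly 4 km/s at the outer edge, is the velocity scale against which the simulated disc rotation can be checked.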

We are able to construct so-called synthetic observations from our RHD models. For example, molecular line simulations of a methyl cyanide (CH3CN) transition compare well with observations in terms of surface brightness and line width. These results indicate that it should be possible to reliably extract the protostellar mass from such observations (see Figure 2).


Radiative Transfer Simulations for Type Ia Supernovae

Figure 1: Comparison between synthetic nebular spectra of W7 calculated with ARTIS (coloured by emitting ion) and the model of Fransson & Jerkstrand (2015; black line).

Between tens and hundreds of days after explosion, the material ejected from Type Ia supernovae becomes optically thin at most wavelengths. At this time, the observed spectra are dominated by emission from forbidden radiative transitions of singly- and doubly-ionised Fe, Co, and Ni. With almost all of the 56Ni having decayed prior to the nebular phase, most of the energy injection is from the decay of 56Co. The decays of 56Co release gamma rays and positrons, which Compton- and Coulomb-scatter off thermal electrons, resulting in a population of high-energy (~MeV) non-thermal electrons. These non-thermal electrons are especially important for the ionisation balance, as the rate of non-thermal ionisation is generally higher than photoionisation for the most important ions at this time.
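The decay chain driving the energy injection, 56Ni → 56Co → 56Fe, can be illustrated with the standard Bateman solution for the 56Co abundance (half-lives of 6.08 and 77.2 days; a minimal sketch for illustration, not part of ARTIS):

```python
import math

T_NI = 6.08   # 56Ni half-life, days
T_CO = 77.2   # 56Co half-life, days
L_NI = math.log(2) / T_NI   # decay constants, per day
L_CO = math.log(2) / T_CO

def n_co(t):
    """56Co abundance at time t (days) from the Bateman equation, N_Ni(0) = 1."""
    return L_NI / (L_NI - L_CO) * (math.exp(-L_CO * t) - math.exp(-L_NI * t))

# By ~100 days essentially all the 56Ni has decayed and 56Co dominates
# the energy injection, consistent with the nebular-phase discussion above.
for t in (10, 100, 300):
    print(f"t = {t:4d} d: N_Co = {n_co(t):.3f}")
```

The 56Co population peaks after a few weeks and then declines on its own 77-day half-life, which sets the late-time decline rate of the light curve.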

Figure 2: Comparison between normalised ARTIS synthetic spectra for W7 and sub-Chandrasekhar models (normal and high metallicity), and the observed nebular spectrum of SN2011fe from Mazzali et al. (2015).

The ARTIS code (Sim 2007, Kromer & Sim 2009) is a multi-dimensional radiative transfer code that uses the Monte Carlo method with indivisible energy packets (Lucy 2002) to compute spectra for modern supernova explosion simulations. To extend the range of validity to late times, we have (2016/17) developed a new non-LTE population/ionisation solver that includes non-thermal ionisation, excitation, and heating processes, and a comprehensive new atomic database with forbidden transitions and modern photoionisation cross sections. This allows us, for the first time, to calculate spectra out to one year post-explosion (early studies with ARTIS were typically limited to less than one month). Extensive testing has verified that our code can reproduce previous one-dimensional work (see Figure 1).

Taking advantage of DiRAC HPC resources, we have calculated nebular spectra for a range of explosion scenarios, including both Chandrasekhar- and sub-Chandrasekhar-mass explosions (Nomoto et al. 1984; Iwamoto et al. 1999; Sim et al. 2010). Results substantiate the potential of the [Ni II] 7378 feature to distinguish between explosion scenarios (Shingles et al.).


Understanding our Milky Way

The most important new result in 2017 was our demonstration (Debattista et al. 2017) that all the trends observed in the Milky Way’s bulge are produced by the evolution of the bar. These trends include the X-shape of the metal-rich bulge stars (absent in the metal-poor stars); the vertical metallicity gradient; the old age of bulge stars; the weak bar of the oldest stars compared with the strong bar in slightly younger stars; and the varying kinematics of stars with metallicity. In the past they have all been interpreted as evidence that the bulge formed partly in situ and partly by accretion. Our simulations show that all these trends result purely from secular evolution of the bar itself (i.e. the bulge is part of the bar), with no hint of an accreted component other than the stellar halo, which contributes only about 1% of the Milky Way’s total stellar mass. Our results resolve 20 years of conflicting interpretations of the origin of the Milky Way’s bulge and represent an important step in understanding it. We explained the physics of this evolution via a new concept, kinematic fractionation: the morphological separation of stellar populations on the basis of their kinematics at the time of bar formation. The simulations predict strongly X-shaped metallicity maps in external edge-on boxy/peanut-shaped bulges, a prediction which has since been confirmed using MUSE observations (Gonzalez et al. 2017), establishing a novel link between the Milky Way and galaxies in general.

Figure 1: Metallicity map.


Protoplanetary disks

Figure 1: The CO emissivity profile of the extensive disc around the young star IM Lupi, demonstrating that the observed profile is best fit by a photoevaporative flow driven by a low ambient ultraviolet radiation field.

We have completed a study (Rosotti et al. 2017) involving the longest integration of a planet plus protoplanetary disc system ever conducted (totalling 3 × 10^5 planetary orbits), with a view to understanding whether discs can drive significant planetary eccentricity. This is a fundamental issue for interpreting data on exoplanetary eccentricity. We have shown that there is a complicated exchange of eccentricity between the disc and the planet on these timescales, which can be understood in terms of two secular modes that are subject to long-term damping and pumping. While there is still much to be understood about the processes responsible for mode damping and pumping, our study has shown that eccentricity growth cannot be determined from the sort of short-timescale (< 1000 orbit) simulations that have been conducted previously. These GPU-accelerated calculations pave the way for a more comprehensive exploration of long-term eccentricity driving of planets in discs.

In Haworth et al. 2017 we presented the first calculations demonstrating that discs may lose significant mass from their outer regions through photoevaporation by ambient far-ultraviolet radiation, even in regions where the ambient radiation levels are very low. This study used a novel code, TORUS-3DPDR, which solves for radiative transfer in dust, hydrodynamics and photochemistry within the outflowing wind, and demonstrates that the extended CO emission observed over many hundreds of au from the disc of IM Lupi is well explained by a photoevaporative wind.


Excited Charmonia and Charmonium Resonances from Lattice QCD

Figure 1: Examples of the results: left is the spectrum in the isospin-1 hidden-charm channel with J^P = 1^+ (J is spin, P is parity); right is the isospin-0 doubly-charmed channel with J^P = 1^+. Boxes are the computed masses and one-sigma statistical uncertainties; horizontal lines show the meson-meson energy levels expected if there were no meson-meson interactions, with an adjacent number indicating the degeneracy where it is larger than one (lattice QCD calculations are performed in a finite volume, so the spectrum of meson-meson energy levels is discrete). The number of energy levels in the computed spectrum is equal to the number of expected non-interacting meson-meson levels, and they all lie close to the non-interacting levels. This suggests that there are only weak meson-meson interactions and no strong indication of a bound state or narrow resonance in these channels.

Experiments have observed a number of puzzling charmonium-like states in recent years. Of particular interest are states with ‘exotic flavour’, i.e. those with flavour quantum numbers which cannot arise from solely a quark-antiquark pair and so signal physics beyond a conventional meson. One proposed explanation for such states is that they are tetraquarks, compact states containing two quarks and two antiquarks. In Cheung et al [JHEP 1711 (2017) 033, arXiv:1709.01417] we developed methods to investigate the existence of tetraquarks using first-principles lattice QCD computations. The spectrum of states was calculated in a selection of exotic-flavour channels (isospin-1 hidden-charm and doubly-charmed channels), including a number where tetraquarks are expected in phenomenological models. There were no strong indications for any bound states or narrow resonances in the channels studied and so no sign of tetraquark candidates.
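For context, the non-interacting meson-meson levels shown as horizontal lines in Figure 1 follow from the quantised momenta allowed on a periodic spatial lattice of extent L:

```latex
E_{\mathrm{non\text{-}int.}} = \sqrt{m_1^2 + |\vec{p}\,|^2} + \sqrt{m_2^2 + |\vec{p}\,|^2},
\qquad \vec{p} = \frac{2\pi}{L}\,\vec{n}, \quad \vec{n} \in \mathbb{Z}^3
```

Shifts of the computed levels away from these values signal meson-meson interactions; the small shifts found here are what rules out strong tetraquark candidates in the channels studied.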


Thousands of exoplanets have been discovered in recent years; most of them are gas giants, and hundreds appear to be rocky. Many of these exoplanets have very short orbital periods, and hence hot atmospheres. The high temperatures are expected to turn these atmospheres into high-pressure steam, with very different spectroscopic signatures from those of cooler objects, which influences the interpretation of atmospheric observations. At present that interpretation is severely hampered by the lack of corresponding spectroscopic data. The massive number of absorbers in the atmospheres of exoplanets also affects the cooling, and therefore the evolution, of young hot objects; comprehensive molecular data are also crucial for modelling these processes.

Figure 1: A giant planet that orbits close to its sun (Illustration: NASA/ESA/G. Bacon (STScI)).

Our group, ExoMol, specialises in and leads the field of producing theoretical spectral data (so-called line lists) for molecules of atmospheric, astrophysical and industrial importance, applicable to high-temperature applications. The ExoMol database already contains over 300 billion transitions for more than 50 molecular species. To accomplish this data-intensive task, we use some of the UK’s most advanced supercomputers, provided by the Distributed Research utilising Advanced Computing (DiRAC) project and run by the University of Cambridge. Our calculations require tens of millions of CPU hours, hundreds of thousands of GPU hours, and up to 6 TB of RAM, processing power only accessible to us through the DiRAC project. We are in a unique position to provide line lists for more complex molecules (such as large hydrocarbons), comprising up to 10^12 lines.

Methane is a major absorber in hot Jupiter exoplanets, brown dwarfs and cool carbon stars. There is a huge demand for a reliable hot methane line list, and several groups are working towards this objective. As part of our project we have generated a new 34-billion-line methane line list (known as 34to10), which extends our original 10to10 line list to higher temperatures (up to 2000 K). Even larger line lists are now being completed (for silane, ethylene, acetylene, propene, methyl chloride, etc.).

Billions of transitions make direct (line-by-line) application in radiative transfer calculations impractical. To mitigate this big-data issue we have developed a new, hybrid line list format based on the idea of a temperature-dependent absorption continuum. The new hybrid scheme is now being applied to larger hydrocarbons, with line lists containing hundreds of billions of transitions.
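To illustrate why line strengths must be recomputed with temperature, the standard intensity-scaling relation (HITRAN-style convention) can be sketched as below. The second radiation constant follows the usual convention, but the toy partition function and example numbers are purely illustrative, not ExoMol data:

```python
import math

C2 = 1.4388  # second radiation constant hc/k, in cm K

def scale_intensity(s_ref, nu, e_lower, t, t_ref, q):
    """Scale a line intensity s_ref from reference temperature t_ref to t.

    nu: transition wavenumber (cm^-1); e_lower: lower-state energy (cm^-1);
    q(t): partition function (a toy model is used below).
    """
    boltz = math.exp(-C2 * e_lower / t) / math.exp(-C2 * e_lower / t_ref)
    stim = (1 - math.exp(-C2 * nu / t)) / (1 - math.exp(-C2 * nu / t_ref))
    return s_ref * (q(t_ref) / q(t)) * boltz * stim

# Toy partition function ~ T^1.5 (rough classical-rotor-like scaling, illustrative)
q = lambda t: t ** 1.5

# A "hot band" (high lower-state energy) strengthens dramatically at 2000 K:
print(scale_intensity(1e-24, 3000.0, 2000.0, 2000.0, 296.0, q))
```

Lines that are negligibly weak at room temperature can grow by orders of magnitude at 2000 K, which is why hot line lists contain so many more relevant transitions than room-temperature compilations.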


Galactic Archaeology in the Gaia Era

Impacts of a flaring star-forming disc and stellar radial mixing on the vertical metallicity gradient. We ran an N-body simulation of the evolution of a barred disk galaxy similar in size to the Milky Way, using our original tree N-body code, GCD+. We first assumed that the thin disk has a constant scale-height, zd, independent of radius (the “constant zd model”). We then assigned the metallicity of the thin-disk particles to follow the radial metallicity gradient [Fe/H] = 0.5 − 0.08R, with a dispersion of 0.05 dex and no vertical metallicity gradient, at t = 0. The left two panels of Fig. 1 show the initial (t = 0) and final (t = 8 Gyr) vertical metallicity distribution for the thin-disk component of this model (the constant zd model). Interestingly, after 8 Gyr of evolution the vertical metallicity gradient becomes positive, which is inconsistent with the negative vertical metallicity gradients observed in the mono-age populations of the thin disk (e.g. Ciucă, Kawata et al. 2018, MNRAS, 475, 1203).

Figure 1: Vertical metallicity distributions of the thin-disk component for the constant scale-height (zd) model (left two panels) and the flaring model (right two panels), in the radial range 6 < R < 10 kpc, at t = 0 and 8 Gyr. The white dots and error bars at t = 8 Gyr show the mean and dispersion of [Fe/H] at different heights, |z|, and the solid line shows the best-fit straight line. The dotted line shows the vertical metallicity gradient assumed at R = 8 kpc at t = 0.

We showed that a flaring thin disk is one possible remedy for this issue. We constructed a “flaring model” with a second thin-disk component whose vertical scale-height increases with radius. This flaring thin-disk component mimics a mono-age population born in a flaring star-forming region. We assigned metallicity to the flaring-disk particles following the same radial metallicity gradient as in our constant zd model. The right two panels of Fig. 1 show the initial (t = 0) and final (t = 8 Gyr) vertical metallicity distribution for the flaring thin-disk component of this model. In this case, the metal-poor stars initially in the outer disk become dominant at large vertical heights at all radii, which can drive a negative vertical metallicity gradient. Therefore, if mono-age populations of the Milky Way’s thin-disk stars formed within a flaring star-forming disk, each mono-age thin-disk population can have a negative vertical metallicity gradient. Older populations then have steeper negative vertical metallicity gradients, because they have had more time to experience radial mixing. This can explain the steeper negative vertical metallicity gradients observed in the older populations of the thin disk with the combined data of Gaia DR1 and RAVE DR5 (Ciucă, Kawata et al. 2018).
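The initial metallicity assignment described above can be sketched in a few lines (an illustrative re-implementation of the stated prescription, not the actual GCD+ code):

```python
import random

def assign_feh(r_kpc, rng=random.Random(42)):
    """Initial [Fe/H] for a star particle at galactocentric radius r_kpc (kpc):
    radial gradient [Fe/H] = 0.5 - 0.08 R with 0.05 dex Gaussian scatter and
    no vertical gradient, as in the constant-zd and flaring models above."""
    return 0.5 - 0.08 * r_kpc + rng.gauss(0.0, 0.05)

# Mean [Fe/H] near the solar radius (R ~ 8 kpc) is 0.5 - 0.64 = -0.14
print(assign_feh(8.0))
```

Because no vertical gradient is imposed at t = 0, any vertical gradient seen at t = 8 Gyr must be generated dynamically, which is the point of the comparison in Fig. 1.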


Linking the nuclear interaction to the structure of the heavy elements

Figure 1: Successful comparison between the experimental charge distribution of 36S and our simulations, along with the predicted charge profile of 34Si.

Ab initio calculation of the potential bubble nucleus 34Si. The possibility that an unconventional depletion (referred to as a “bubble”) occurs in the centre of the charge density distribution of certain nuclei, due to a purely quantum mechanical effect, has attracted theoretical and experimental attention in recent years. Based on a mean-field rationale, a correlation between the occurrence of such a semi-bubble and an anomalously weak splitting between low angular-momentum spin-orbit partners has further been conjectured. Energy density functional and valence-space shell model calculations have been performed to identify and characterise the best candidates, among which 34Si appears as a particularly interesting case. While the experimental determination of the charge density distribution of the unstable 34Si is currently out of reach, we studied the potential bubble structure of 34Si using the state-of-the-art ab initio self-consistent Green’s function many-body method, with HPC resources at CEA Saclay (France) and the DiRAC Complexity cluster (UK). Demanding that the experimental charge density distribution and root-mean-square radius of 36S be well reproduced, along with the 34Si and 36S binding energies, leaves only one specific model of the Hamiltonian as a serious candidate for this prediction. On this basis, the occurrence of a bubble structure in the charge distribution of 34Si, whose fingerprint should be visible in an electron scattering experiment, is convincingly established.


Understanding the origin of strong galactic outflows

Understanding the origin of strong galactic outflows is one of the key issues in galaxy formation theory. However, owing to our incomplete picture of stellar feedback processes and the lack of numerical resolution, much remains uncertain about how gas is blown away from star-forming regions in galaxies. Using high-resolution radiation-hydrodynamic simulations of a gas-rich isolated disc, we investigated the impact of radiation feedback and supernova explosions on star formation histories and gas outflow rates. In particular, we introduced radiation pressure from resonantly scattered Lyman α photons for the first time, and found that it efficiently destroys giant molecular clouds in the early stages of bursty star formation episodes in a metal-poor dwarf galaxy. As a result, supernovae explode in low-density environments, carrying away gas at a rate comparable to the star formation rate. We also demonstrated that this early feedback self-regulates the gas outflow rate by moderating the bursty star formation history, as opposed to some recent claims that multi-scattering photons can drive extremely massive outflows. As Lyman α photons scatter very efficiently in a dust-poor medium, this process may also have played a crucial role in suppressing the formation of extremely metal-poor globular clusters in the early Universe.

Figure 1: Radiation-hydrodynamic simulations of a metal-poor dwarf galaxy. The left panels display the projected distributions of gas (top) and stellar density (bottom). The right panels show that the majority of Lyman α radiation is generated (top) and scattered (bottom) inside star-forming regions. Lyman α feedback suppresses the formation of a massive stellar bulge by blowing gas out from the galaxy centre. In order to cover a wide dynamic range, from several hundred kpc down to a few pc, the simulations were carried out on the high-performance computing facility DiRAC, employing ~10 million adaptively refined grid cells.


2013 Highlights

Black Hole Highlight:  The environment of bright QSOs at z ~ 6


Figure 1. The distribution of massive black holes in different patches of the 900 million year old Universe according to the recent simulations carried out at the IoA/KICC. Black holes are both more massive and more numerous as the overdensity increases.

Observations of nearby galaxies and high redshift quasars suggest that black holes are present in the majority of galaxies. The SDSS, UKIDSS and VISTA surveys have revealed numerous high-redshift QSOs harbouring black holes with masses in excess of a billion solar masses at a time when the Universe was less than one billion years old (about 10% of its current age). The fact that such massive black holes have managed to grow by this time provides a significant challenge for theoretical models of the formation and growth of supermassive black holes.
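The scale of this challenge can be illustrated with the standard Eddington-limited growth argument; the seed mass and radiative efficiency used below are illustrative assumptions, not values taken from the simulations described here.

```python
import math

# Eddington-limited growth: M(t) = M_seed * exp(t / t_e), where the
# e-folding (Salpeter) time t_e is ~45 Myr for ~10% radiative efficiency.
t_efold_myr = 45.0   # Salpeter e-folding time in Myr (assumed efficiency)
m_seed = 100.0       # stellar-mass seed black hole in solar masses (assumed)
m_target = 1.0e9     # billion-solar-mass quasar black hole

n_efolds = math.log(m_target / m_seed)
t_needed_myr = n_efolds * t_efold_myr
print(f"{n_efolds:.1f} e-foldings, ~{t_needed_myr:.0f} Myr of continuous "
      "Eddington-limited accretion")
```

Under these assumptions roughly 725 Myr of uninterrupted Eddington-limited accretion is needed, uncomfortably close to the ~900 Myr available, which is why the bright z ~ 6 quasars constrain growth models so strongly.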

Researchers at the IoA/KICC have performed a suite of state-of-the-art numerical simulations which self-consistently follow the growth of black holes from very early times to the present day. By simulating a large volume of the Universe, they investigated the growth of massive black holes in a variety of cosmological environments. The images shown in Figure 1 illustrate the distribution of gas in different patches of the 900 million year old Universe. Black holes are far more numerous and massive in the most overdense regions of the Universe. The biggest black circles in the bottom row correspond to black holes with masses close to one billion solar masses. Only very massive black holes located in highly overdense regions were found to give rise to peak luminosities in agreement with observational estimates for the brightest, high redshift quasars, as shown in Figure 2. This finding indicates that these quasars should reside in some of the rarest dark matter halos at the time. Members of the IoA/KICC also used these simulations to study so-called AGN feedback, the interaction between massive black holes and the gas in their surroundings. They found that the energy released due to accretion onto the most massive black holes gives rise to powerful outflows that push gas at speeds exceeding a thousand km/s. The comparison of simulated quasar outflows with observations will provide an essential tool to pin down the physical mechanisms underlying the interaction of massive black holes and their host galaxies.

COSMOS Highlight:  Planck Satellite Science – Searches for Non-Gaussianity

The ESA Planck satellite, launched in May 2009, provides an unprecedented high resolution survey of the temperature of the cosmic microwave background (CMB) radiation. COSMOS and HPCS@DiRAC resources were vital for the science exploitation efforts in several key Planck papers. This effort has led to new estimates of cosmological parameters – shown in Figure 1. Planck has also crossed important qualitative thresholds, making gravitational lensing and the study of non-Gaussian statistics truly quantitative subject areas for the first time.

Using the unique capabilities of the COSMOS system, consortium members have used Planck data to undertake the most stringent tests of the inflationary paradigm to date by studying the prediction that primordial fluctuations should have Gaussian statistics (most results are in the ground-breaking Planck Non-Gaussianity paper (dp002.03)). The three-point correlator (“triangles on the sky”) or bispectrum was evaluated to high precision for the first time (Figure 1); it is a 3D tetrahedral object depending on three triangle multipoles l1, l2, l3. Searches were undertaken for the standard forms of local, equilateral and orthogonal bispectra, described by the ‘nonlinearity’ parameter fNL.

These limits, up to four times stronger than WMAP, significantly constrain broad classes of alternative inflationary models. Despite the stringent constraints on scale-invariant models, the Planck bispectrum reconstructions exhibit large non-Gaussian signals (Figure 1), inspiring further investigation of other non-Gaussian models using separable modal methods developed on COSMOS. The two most promising classes of models appear to be those with excited initial states (non-Bunch Davies vacua) and those exhibiting periodic features with apparent high significance. Intensive non-Gaussian analysis of the Planck data is ongoing for a broad range of primordial models, as well as investigating second-order late-time effects such as ISW-lensing.


Figure 1. CMB bispectrum reconstruction obtained from the Planck data employing Fourier modes (left) and higher resolution polynomial modes (right); several isosurfaces are shown both positive (red) and negative (blue). The predicted ISW-lensing signal can be seen along the tetrahedral edges (for the first time) and apparent ‘oscillatory’ features in the interior.

ECOGAL Highlight:  Galactic scale models of star formation

Galactic scale models of star formation focus on resolving how large-scale gas flows in spiral galaxies form the dense gas clouds where star formation occurs on scales 10,000 times smaller. The rate at which stars form in galaxies appears to be linked to the global as well as local properties. Using DiRAC, we have performed the largest-scale numerical simulations that can resolve the dense regions where stars form, and hence directly study the physics that drives star formation. We are using these simulations to understand how star formation is initiated, and what determines the resulting properties including why it has such a low efficiency (~1%).


Figure 1. The dense gas formed from a spiral shock is viewed perpendicular to the plane of the galaxy.

There also exist ‘special’ regions, such as the centre of our Galaxy, where star formation is more exotic and primarily forms only very massive stars. Using realistic models for the Galaxy’s gravity, we are studying the dynamics of gas clouds and how they produce the observed structures and star formation events, including the young stars that orbit the super-massive black hole.

In order to construct a full theory of star formation, we need to also include additional physics of magnetic fields and the radiative and kinetic feedback processes from young stars. Constructing these self-consistent models, akin to modelling a full galactic ecology, requires the ability to model 100 to 1000 million individual elements in the galaxy and can only be performed with the HPC resources provided through DiRAC.


Figure 2. A gas cloud moving on an eccentric orbit in the Galactic Centre is tidally sheared to form a nearly complete stream.

EXETER Highlight:  The formation of a star and its magnetized outflows in radiation magneto-hydrodynamic simulations

As a gravitationally unstable molecular cloud core collapses to form a star, it passes through several distinct phases. Initially it collapses almost isothermally as energy is easily radiated away. However, eventually the density rises to the point where radio and infrared radiation cannot freely escape from the gas, and the temperature begins to increase. This substantially increases the pressure, resulting in the formation of a pressure supported gaseous object with a typical size of ~10 AU (known as the first core). This object increases in mass as it accretes surrounding gas and, as it does so, its central temperature continues to rise. When its central temperature reaches ~2000 K, the molecular hydrogen from which it is primarily composed dissociates into atomic hydrogen. This is an endothermic process which results in a second collapse occurring inside the first core. This second collapse continues until the hydrogen is completely atomic, whereupon a second, or stellar, core is formed with a size similar to that of our Sun. This object then accretes to its final mass to become a star. If reasonably strong magnetic fields are present initially, outflows from both the first core and the stellar core can be launched at speeds of several km/s. These outflows can carry some of the mass, angular momentum, and energy away from the forming star.
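The isothermal-then-adiabatic behaviour described above is often captured with a simple barotropic approximation; the critical density and adiabatic index below are generic illustrative values commonly used in such approximations, not the ones employed in these simulations.

```python
# Barotropic sketch of the collapse phases: isothermal at low density,
# then adiabatic heating once the gas becomes optically thick.
# rho_crit1 and gamma are illustrative, assumed values.
rho_crit1 = 1e-13   # g/cm^3: end of the isothermal phase (assumed)
gamma = 7.0 / 5.0   # adiabatic index for molecular hydrogen

def temperature(rho, T0=10.0):
    """Gas temperature (K) as a function of density (g/cm^3)."""
    if rho < rho_crit1:
        return T0                                   # isothermal collapse
    return T0 * (rho / rho_crit1) ** (gamma - 1.0)  # adiabatic heating

for rho in (1e-16, 1e-13, 1e-10, 1e-8):
    print(f"rho = {rho:.0e} g/cm^3 -> T = {temperature(rho):8.1f} K")
```

With these numbers the gas stays at ~10 K until the first core forms and then heats steadily, reaching the ~2000 K dissociation threshold only at much higher densities, which is the trigger for the second collapse described in the text.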


Figure 1. Visualizations of the magnetic field direction in the outflows from the first cores (top) and stellar cores (bottom) in three calculations with different initial magnetic field strengths (magnetic field strength increases from left to right). Published in Bate, Tricco & Price (2013).

The DiRAC facility, Complexity, has been used to perform the first fluid dynamical simulations that include both magnetic fields and radiation transport to follow this whole process. Such calculations are essential for predicting what may be observed in high-resolution observations that will be made using the Atacama Large Millimeter/submillimeter Array (ALMA) over the next few years, and for interpreting these observations. In the Figure, we show visualizations of the magnetic field structure in the outflows. Animations of the calculations are at: http://www.astro.ex.ac.uk/people/mbate/Animations/stellarcore.html and the full dataset from the calculations is on the Open Research Exeter archive at http://hdl.handle.net/10871/13883.



Figure 1. Absorption of methane at T = 1500 K – new theoretical spectrum (10to10) compared to the experiment (HITRAN12).

The aim of the ExoMol project is to compute comprehensive spectroscopic line lists for hot molecules thought likely to occur in the atmospheres of exoplanets, brown dwarfs and cool stars. The list of such molecules is quite long but only those with four or more atoms require the use of DiRAC resources. Our calculations have been split between two computers located at Cambridge: Darwin, which was used to diagonalise small matrices and to compute transition intensities, and COSMOS, which was essential for diagonalising larger matrices. We encountered some performance issues with COSMOS when performing large matrix diagonalisations. This problem was referred to SGI, who have very recently supplied us with a new diagonaliser for COSMOS. Our initial tests with the new diagonaliser are very encouraging and suggest that our performance issues should largely be resolved. However, we have yet to use it in production runs, and the results discussed below did not employ it.

Work using DiRAC during 2013 focussed on four main molecules: methane (CH4), phosphine (PH3), formaldehyde (H2CO) and sulphur trioxide (SO3). Methane is a major absorber in hot Jupiter exoplanets, brown dwarfs and cool carbon stars. There is a huge demand for a reliable hot methane line list and there are several groups working towards this objective. This project was therefore given priority. Our main result is that we have generated a new methane line list, called 10to10, which contains just under 10 billion transitions. This line list is complete for wavelengths longer than 1 μm and temperatures up to 1500 K. It is by some distance the most comprehensive line list available for methane (see Fig. 1). It is currently being actively used by about a dozen groups worldwide to model methane in a variety of astronomical objects (and by one group for studies of the Earth’s atmosphere). Fig. 2 shows the spectrum of the brown dwarf 2MASS J0559-1404 compared to our simulations.

Our tests on this line list show that, for photometric studies of the K, H and J bands, previously available line lists (a) agree well with 10to10 at 300 K for the K and H bands but significantly underestimate J-band absorption due to a lack of experimental data in this region; and (b) seriously underestimate absorption by methane in all bands at temperatures above 1000 K. We have also completed initial, room temperature line lists for PH3 and SO3, and a full line list for hot H2CO.

HORIZON Highlight

Dark matter is one of the strangest puzzles in physics: the universe seems to be composed of far more stuff than we can see directly. The evidence is overwhelming, yet indirect. Over the past few decades, astronomers have been able to measure the tug of gravitation in a huge range of environments – from galaxies, through clusters of hundreds of galaxies, to the early universe as seen in the cosmic microwave background – and we keep coming to the same conclusion, that the tug is surprisingly strong. To explain that, one needs around six times more matter than is seen directly. The missing five sixths is known as dark matter.
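The "five sixths" follows from simple arithmetic on the matter density parameters; a minimal sketch, assuming round illustrative values for the total and ordinary (baryonic) matter densities that are not quoted in the text:

```python
# Toy arithmetic for the dark matter fraction, assuming illustrative
# density parameters (not taken from this article):
omega_m = 0.31   # total matter density parameter (assumed)
omega_b = 0.05   # ordinary (baryonic) matter density parameter (assumed)

dark_fraction = (omega_m - omega_b) / omega_m
ratio = omega_m / omega_b  # total matter vs. directly seen matter

print(f"dark matter fraction of all matter: {dark_fraction:.2f}")
print(f"total matter is ~{ratio:.1f}x the ordinary matter")
```

With these inputs the dark fraction comes out close to 5/6 and the total-to-visible ratio close to six, matching the figures quoted above.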

But attempts to detect particles of this matter have given no clear result. That doesn’t mean dark matter doesn’t exist, but it does mean we can’t be certain what it’s made from. If we can’t manufacture or find particles of it here on Earth, the only way to make progress is to keep studying its behaviour in the sky.


Figure 1.

As our ability to infer the presence of dark matter in the night sky has improved, we can now measure how the invisible mass is distributed through individual galaxies. In the meantime, increasingly powerful computers have allowed the expected distribution to be predicted (based on assumptions about the particle’s nature). It has proved exceptionally difficult to get these two methods to agree – perhaps pointing to an exotic dark matter particle which behaves in unexpected ways; or, alternatively, that the entire dark matter idea is on the wrong track.

Using the Dirac-2 Complexity cluster, the Horizon collaboration has been studying this problem from a different perspective. Perhaps the difficulty in making theoretical ideas agree with the real Universe stems from an incomplete physical picture of what goes on inside galaxies. We know that vast amounts of energy are available – young stars pump out light and occasionally explode in supernovae. If some of this energy is transferred into the dark matter, the distribution can be radically reshaped.

We established almost two years ago that there were viable physical mechanisms for transporting stellar energy through to the dark matter distribution (dp016.6). Since then, we have been studying with higher resolution simulations how sensitive this mechanism is to fine details of how the gas behaves (e.g. dp016.7) – always the hardest thing to get right in galaxy simulations, because there are so many factors (including resolution, the cooling rates and the coupling of stellar populations to the surrounding gas). Our latest work, (Pontzen et al 2013, MNRAS submitted), shows that even if one gets many of these details intentionally wrong, the dark matter reshaping remains possible.

This is an encouraging sign that despite remaining uncertainty, we can soon determine whether data from the real universe support – or perhaps refute – the simplest dark matter models.

Leicester Highlight: Tearing up the Disk: How Black Holes Accrete

Almost all galaxies have supermassive black holes in their centres. These holes grow because gas from the galaxy falls into them, a process which also makes the holes spin. In general the gas spirals slowly in to the hole in a disk, initially lying in a plane which is at an angle to its spin equator. But near to the hole, powerful forces drag it into this plane. This means that the gas flows through a warped disk, whose plane changes as it moves in. Until recently it was thought that this process occurred fairly smoothly, with a smooth warp gently straightening the disk plane. However work by Ogilvie in 1999 suggested that in some cases the warp could be much more abrupt, because the forces holding the disk together actually weaken in a warp. The disk would then break into a pair of disks, the outer one remaining in the original misaligned plane and the inner one aligned to the black hole spin.


Figure 1. Simulation of disk tearing. Gas spirals towards a supermassive black hole in a plane which is highly inclined to the black hole spin. Close to the hole the gas is forced to orbit in the equatorial plane, and this breaks the disk. The inner disk tumbles in space, and eventually is rotating partially in the opposite direction to the outer disk. Where the two disks touch their gas collides. This removes the rotation that supports it against the black hole gravity, so it falls to a new orbit much closer to the hole. But there it also encounters gas moving in the opposed direction, so the process repeats. This produces rapid gas infall on the black hole, making it grow rapidly, and producing an extremely luminous source of radiation at the center of the host galaxy.

Only detailed work with a powerful supercomputer can check this possibility. Our work with DiRAC2 shows for the first time that it actually occurs in many realistic cases. The spin forces near the black hole break the central regions of tilted disks around spinning black holes into a set of distinct planes with only tenuous flows connecting them. These component disks then precess – their planes tumble in space – independently of each other. In most cases the continued precession of these disks eventually sets up partially counterrotating gas flows, so that gas in one disk is moving almost oppositely to gas in a nearby disk. As these opposed flows interact, this drives rapid infall towards the black hole. The process can repeat itself as disks form and interact even closer to the black hole. This produces a cascade of tumbling and precessing disks closer and closer to the black hole. This is important because rapid gas infall makes the black hole grow rapidly. This in turn makes the infalling gas extremely hot and bright, accounting for the most luminous objects in the entire Universe.

Simulations of the Circumgalactic Medium


Figure 1. Environment of galaxy haloes.


Figure 2. Halo mass distribution.

We are using cosmological numerical simulations to predict the expected physical properties of the Circumgalactic Medium (CGM) in the environment of massive star-forming galaxies in the context of a ΛCDM cosmology. Our primary goal is to establish the gaseous flow pattern around the galaxies and the spatial extent to which winds impact on the CGM through comparison with the observations. Since we wish to compare with observations, as a first step in this pilot project we need to identify the sites of galaxies in the simulation. We do this by identifying their dark matter haloes. This is not straightforward; it is akin to identifying distinct clouds in the sky. An illustration of the complex regions in which galaxy haloes reside is shown in Figure 1. Different halo identification methods produce different halo catalogs, and different simulation numerical methods produce different haloes. We compare the haloes found using two different algorithms applied to results from two simulation methods to seek a range of halo masses for which the methods agree. Figure 2 shows agreement between these methods for haloes more massive than ten billion solar masses, corresponding to galaxy-size haloes. The flow fields around the haloes are currently being explored.
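One commonly used halo identification approach (not necessarily one of the two algorithms compared here) is friends-of-friends, which links particles closer than a chosen linking length into groups. The toy sketch below, with invented particle data, shows the core idea; production halo finders add gravitational unbinding, periodic boundaries and subhalo handling.

```python
import numpy as np
from scipy.spatial import cKDTree

def fof_haloes(positions, linking_length):
    """Toy friends-of-friends (FoF) halo finder: particles closer than
    the linking length join the same group (connected components)."""
    tree = cKDTree(positions)
    pairs = tree.query_pairs(linking_length)
    # union-find over the linked pairs
    parent = list(range(len(positions)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in pairs:
        parent[find(i)] = find(j)
    groups = {}
    for i in range(len(positions)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two well-separated clumps of invented 'particles'.
rng = np.random.default_rng(0)
clump1 = rng.normal(0.0, 0.1, size=(50, 3))
clump2 = rng.normal(5.0, 0.1, size=(50, 3))
haloes = fof_haloes(np.vstack([clump1, clump2]), linking_length=0.5)
print(sum(1 for h in haloes if len(h) >= 20), "haloes above 20 particles")
```

The ambiguity mentioned above enters through choices such as the linking length: varying it merges or splits marginal structures, which is one reason different halo finders disagree at low masses.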


We have examined the evolution of astrophysical disc models for which the temperature decreases as a function of radius, leading to an angular velocity profile that varies with both radius and height. This work demonstrates for the first time that growth of the vertical shear instability in discs leads to a sustained turbulent flow whose associated Reynolds stress leads to outward angular momentum transport. The results may have application to the outer regions of proto-planetary discs, influencing mass accretion and the formation of planets.


Figure 1. Contours showing the growth of vertical velocity perturbations in a disk due to the development of the vertical shear instability.

We have examined the migration of low-mass planets embedded in magnetized discs with a layered structure consisting of turbulent regions near the surface, and a non-turbulent “dead zone” near the disc midplane. A planet migrates because it experiences a net torque with two contributions: a Lindblad torque that drives inward migration; a corotation torque that slows or reverses migration. Our results show for the first time that the corotation torque in a dead zone is ineffective, implying that low-mass protoplanets will migrate rapidly as they grow to become super-Earths or Neptune-like planets. A paper describing these results is in preparation.


Figure 2. The left panel shows a 10 Earth mass planet embedded in the dead zone of a disc. The right panel shows the log of the accretion stress as a function of radius and height. Values near the midplane correspond to α ~ 0.0001, and near the surface to α ~ 0.1.

Constraining the nature of dark matter using the COCO simulations

The standard cosmological model, the “Lambda cold dark matter” model (LCDM), is backed up by an impressive array of data that cover a huge range of scales, from the entire observable universe, probed by measurements of temperature anisotropies in the microwave background radiation, to the scales of galaxy clusters and individual bright galaxies, sampled by large galaxy surveys. On smaller scales than this, there is no strong evidence to support the standard model. Yet, it is on such scales that the nature of the dark matter is most clearly manifest. In the standard model, the dark matter consists of cold particles, such as the lightest supersymmetric particles. There are, however, models of particle physics that predict lighter particles, such as sterile neutrinos, that would behave as warm (WDM), rather than cold (CDM), dark matter. If the dark matter is warm, free streaming in the early universe would have erased primordial fluctuations below mass scales corresponding to dwarf galaxies. The abundance and properties of dwarf galaxies could then encode information about the nature of the dark matter.


Figure 1. The image above is a slice through the simulation volume, with the intergalactic gas colour coded from blue to green to red with increasing temperature. The inset images show details in a small region of the simulation centered on a spiral galaxy seen face-on.

The Figure below shows two simulations from a Virgo Consortium project called “Copernicus Complexio” (COCO). These show the structure that grows in a region of diameter approximately 150 million light years in model universes in which the dark matter is cold (left) or warm (right). The WDM model is chosen to have as large a free streaming length as is allowed by observations of gas clouds seen in the early universe (the so-called “Lyman-alpha forest” in distant quasar sight-lines). There are about a hundred haloes in each volume with masses similar to that of the dark halo around the Milky Way galaxy.

Each of these is resolved with about 10 million particles making it possible for the first time to obtain a good statistical sample of well resolved subhaloes. On the scales apparent in this figure there is very little difference between the two models. However, on small scales there are large differences. In CDM tens of thousands of subhalos are visible; in WDM only a few tens are. In principle this difference should be detectable in observational studies.
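The small-scale suppression that separates the two models is often described by a WDM transfer function multiplying the CDM power spectrum. A sketch using the fitting form of Bode et al. (2001); the break scale below is purely illustrative, not the value used for COCO:

```python
# WDM suppression of the linear power spectrum, using the fitting form
# T(k) = (1 + (a*k)**(2*nu))**(-5/nu) of Bode et al. (2001).
# The break scale a_break depends on the WDM particle mass; the value
# here is an illustrative assumption.
nu = 1.2
a_break = 0.05  # Mpc/h (assumed)

def wdm_transfer(k):
    """Ratio of WDM to CDM transfer functions at wavenumber k (h/Mpc)."""
    return (1.0 + (a_break * k) ** (2 * nu)) ** (-5.0 / nu)

for k in (0.1, 1.0, 10.0, 100.0):
    print(f"k = {k:6.1f} h/Mpc  ->  P_WDM/P_CDM = {wdm_transfer(k)**2:.3e}")
```

On the large scales shown in the figure (small k) the ratio is essentially one, which is why the two panels look identical there; at dwarf-galaxy wavenumbers the power, and hence the subhalo abundance, is strongly suppressed.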

The project made extensive use of the DiRAC-2 data centric facility because the simulations, with 13 billion particles each, require a machine that has both an exceptionally large memory per core and a large total memory.

CLOVER:  Charming Physics

Strongly interacting particles (hadrons) come in two classes: mesons made of quarks and antiquarks and baryons consisting of three quarks. Quarks come in six varieties or flavours: up, down, strange, charm (u, d, s, c) and much heavier bottom and top quarks (not considered here).

Symmetries between the different quark flavours mean that particles group together in families called multiplets. The proton and neutron are part of a multiplet with 20 members, shown in the left figure. Particles with no charm quark are in the bottom layer, followed by a layer of particles with one charm quark and then with two charm quarks. At present there is very little experimental knowledge about the doubly-charmed particles in the top layer.


Figure 1. The octet multiplet (left) and a compilation of recent lattice determinations of baryon mass splittings (right).

We want to understand how particle masses are related to their position in the multiplet, and the masses of the quarks they contain. In computer simulations we are not limited to the quark masses nature has given us – we can see what would happen if the quarks had quite other masses. As an example, in the second figure we show how the masses of the particles in the lower slice of the multiplet change as we move between a situation with all quark masses equal (the crossing point of the fan) and the physical point (at the left edge of the graph).

While the main force between the quarks and antiquarks comes from QCD there is also a contribution from the electromagnetic force, (quantum electrodynamics, QED), which is usually left out of lattice calculations. We are also doing calculations with both forces included, to see what the effects of QED are. We can see clearly that mesons in which the quark and antiquark have the same sign of electrical charge become heavier compared with mesons where the quark and antiquark have opposite charges, as you might expect from the fact that opposite charges attract, and like charges repel.

Simulating two forces needs more computing power than just looking at one force, so DiRAC II is important in making all aspects of this project possible.

HOT QCD:  Extreme QCD: Towards Quantitative Understanding of the QCD Phase Diagram


Figure 1. Plot of the temperature dependence of the conductivity of matter across the transition temperature.

Matter is usually composed of protons and neutrons, which consist of quarks bound tightly together by the Strong Interaction of Particle Physics. However, at incredibly high temperatures of a few trillion Celsius, quarks become free and a new and poorly understood “Quark-Gluon Plasma” (QGP) phase is created. While the QGP is presumed to have existed soon after the Big Bang, it has also been produced in experiments where heavy nuclei (such as gold) are collided in particle accelerators at virtually the speed of light. This has been done at the Large Hadron Collider at CERN and at the Brookhaven National Laboratory on Long Island, USA.

Because each nucleus is incredibly small (100 billion of them side-by-side would span a distance of 1 mm) the region of QGP created is correspondingly small. Due to this tiny size, it is impossible to place detectors inside this region, and, in any case, they wouldn’t work because they’d instantly melt into the plasma phase! The plasma “fireball” also expands and cools incredibly rapidly, so it quickly returns to the normal state of matter where quarks are tightly bound.

So to understand the QGP, physicists have to rely on observations of particles ejected from the fireball region out into the “normal” phase of matter and into their detectors and then “reverse engineer” the physics of the QGP. To make the connection from the detected particles back to the fireball, it is therefore essential to understand the QGP’s “transport” properties, i.e. how it expands and flows as a bulk material.

One such property is the electrical conductivity, which is the quantity this project has calculated. This requires supercomputers, such as those provided by the DiRAC Consortium, in order to simulate the many degrees of freedom of the Strong Interaction. Our results, shown in the figure, represent the first time anyone has calculated the temperature dependence of the conductivity of matter across the transition temperature. In this figure, the transition temperature corresponds to around 180 MeV.
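The two temperature statements in this section are consistent: converting the ~180 MeV transition temperature to kelvin via T[K] = T[MeV]/k_B recovers the "few trillion Celsius" quoted earlier.

```python
# Convert the QCD transition temperature from natural units (MeV) to kelvin.
k_B = 8.617e-11   # Boltzmann constant in MeV per kelvin
T_mev = 180.0     # approximate transition temperature from the text
T_kelvin = T_mev / k_B
print(f"{T_mev:.0f} MeV is about {T_kelvin:.1e} K")
```

This works out to roughly 2 trillion kelvin, and at such temperatures the distinction between kelvin and Celsius is negligible.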

HPQCD:  Light is right – working with up and down quarks at physical masses


Figure 1. A meson annihilates to a lepton and antineutrino (via a virtual W boson).

The world of quarks and gluons, currently the most fundamental building blocks of matter known, is hidden from us. Quarks are never seen as free particles; instead we see their bound states, known as hadrons, in experiments such as those at CERN’s Large Hadron Collider. Accurate calculations with the theory that describes quark and gluon interactions, Quantum Chromodynamics (QCD), are critical to connect the world of quarks to that of hadrons in a quantitative way.


Figure 2. Our results for the decay constants of K and π mesons as a function of the up/down quark mass. The result shown at 0.036 on the x-axis corresponds to the real-world masses.

Calculations are made tractable through a numerical technique known as lattice QCD, but they are computationally very challenging. The numerical cost increases as the quark mass falls and so one issue has been that of handling up and down quarks. These have very small masses (a few percent of that of the proton) in the real world. In the past calculations have been done with several values of heavier masses and then an extrapolation to the physical point is performed.
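The "extrapolation to the physical point" described above can be sketched as a fit over results at several heavier-than-physical masses. The numbers below are invented for illustration, and real analyses use chiral perturbation theory forms rather than a plain straight line:

```python
import numpy as np

# Toy chiral extrapolation: fit decay-constant results computed at
# heavier-than-physical quark masses, then evaluate at the physical mass.
# All data values here are invented for illustration.
m_q  = np.array([0.10, 0.15, 0.20, 0.25])      # simulated quark masses (toy units)
f_pi = np.array([0.135, 0.142, 0.149, 0.156])  # measured decay constants (toy data)

slope, intercept = np.polyfit(m_q, f_pi, 1)    # linear fit (illustrative form)
m_physical = 0.036                             # physical point, as in Fig. 2
f_extrapolated = intercept + slope * m_physical
print(f"extrapolated decay constant: {f_extrapolated:.4f}")
```

Every such extrapolation carries a systematic uncertainty from the assumed fit form, which is precisely what simulating directly at the physical masses, as described next, eliminates.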

With the computing resources of DiRAC phase II (the Data Analytic Cluster in Cambridge), we are now able to perform calculations with up and down quarks having their physically light masses and this means that more accurate results can be obtained. Our formalism is numerically very efficient and has particularly small discretisation errors.

Key results that we have obtained this way during 2013 are those of the decay constants of the π, K, B and Bs mesons. The decay constant parameterizes the probability of the quark and antiquark in the meson being at the same point and annihilating, for example to a W boson of the weak force, as seen in Fig. 1.

Our results for the decay constants (Fig. 2) are the world’s most accurate. They have enabled us to test the CKM matrix of couplings between quarks and the W boson with unprecedented accuracy. We have also improved the prediction of the rate from known physics of the very rare process, Bs → μ+μ−, recently seen by the LHCb experiment. The sensitivity of this process to new physics makes a future comparison of the experimental and theoretical rates potentially very exciting.

UKQCD BSM:  Beyond the Standard Model- is there anything more than the Higgs?

The Standard Model of particle interactions describes the interactions of all the constituents of matter to an impressive degree of accuracy. One of the successes of this model is the unification of the electromagnetic and the weak interactions into a new sector called the Electroweak sector. In this model, the Electroweak sector is characterised by a breaking of the SU(2) × U(1) gauge group, which explains why the photon is massless while the W and Z bosons (the mediators of the weak force) have a mass. The electroweak breaking is due to the widely known and notoriously elusive Higgs sector, which describes the interactions of a new particle, the Higgs boson. In addition to giving mass to the mediators of the weak force, the Higgs boson provides mass for ordinary fermionic matter (leptons and quarks). However this elegant model is believed to be valid only up to a certain energy scale, above which new physics is bound to manifest itself.


Figure 1. The standard model of particle physics (left) and technicolor particles (right).

Many models have been proposed over the years to explain the theory at the scales of new physics. Technicolor is the framework according to which Electroweak symmetry breaking is due to the breaking of the chiral symmetry in a new strong interaction. The model proposes a different answer to the origin of all particle masses, by means of a new mechanism to generate mass for the leptons. These ideas are inspired by a similar mechanism that is already at work in the theory of the strong interactions, Quantum Chromodynamics (QCD). A fundamental requirement for any beyond the Standard Model theory is that the framework does not spoil any lower energy prediction, i.e. that it is compatible with current observation. This is a severe constraint, which in Technicolor is implemented by the mechanism of walking, i.e. the slow running of the gauge coupling in an intermediate range of energies. This happens for near-conformal gauge theories. The question then becomes: is there a near-conformal gauge theory that can account for the observed Electroweak symmetry breaking?

The resources offered by the DiRAC consortium allowed us to perform Monte Carlo simulations of a theory that has been conjectured to be a candidate realisation of the Technicolor framework. The model, called Minimal Walking Technicolor, is an SU(2) gauge theory coupled to two adjoint Dirac fermion flavours. We showed that (near-)conformality can be rephrased as a mass spectrum with constant mass ratios between the particles as the constituent fermion mass goes to zero, and we observed this feature numerically.
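The conformality test described above can be sketched as follows: compute mass ratios at several fermion masses and fit for any residual dependence on the fermion mass. The numbers below are invented for illustration; they are not simulation data.

```python
import numpy as np

# Toy version of the conformality test: in a (near-)conformal spectrum
# all masses scale the same way with the fermion mass m, so their
# ratios stay constant as m -> 0. All data here are invented.
m = np.array([0.10, 0.08, 0.06, 0.04, 0.02])   # fermion masses
M_A = 2.0 * m**0.6       # two states with a common scaling exponent
M_B = 3.1 * m**0.6

ratio = M_B / M_A
c1, c0 = np.polyfit(m, ratio, 1)   # fit ratio(m) = c0 + c1*m
print(c0, c1)   # conformal scaling: c0 ~ 3.1/2.0 = 1.55, slope c1 ~ 0
```

In a confining, QCD-like theory the two states would scale differently with m and the fitted slope would not vanish; a vanishing slope is the signature of (near-)conformality.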



Figure 1. This figure displays the new result for the neutral kaon mixing amplitude BK in green. This new result was calculated in the continuum limit and directly at physical quark masses. The line and the curve represent two fits to previous data points which were used to extrapolate to the physical quark masses. With our new algorithm and greater machine power we have been able to both eliminate the systematic extrapolation error and simultaneously reduce the statistical error on this important result.

We have carried out simulations of lattice QCD including up, down and strange quark loops using a chiral fermion discretization which is unique in fully preserving the meaning of left-handed spin orientation. The W boson only interacts with left-handed fermions, and so preserving this symmetry is deeply important to the calculation of the complex weak matrix elements required to support experiments such as those at the Large Hadron Collider.

One important example of these is our determination of the two-pion decay amplitudes of the kaon. This is the process in which the asymmetry between matter and antimatter was discovered. Our calculations, which won the 2012 International Ken Wilson Lattice Award, involve mixing between many operators in the effective weak Hamiltonian. The calculation is only tractable with chiral fermions, but gives rise to a wholly new constraint on the difference between matter and antimatter in the standard model. Such constraints enable experiments to search for the effects of undiscovered physics beyond the standard model. Our calculation was updated in journal papers, one of which observed in detail a numerical cancellation that explains a long-standing puzzle about the likelihood of decay into different pion charges known as the ΔI=1/2 rule – a case of numerical simulation leading to deeper understanding.

This progress over the last year has only been possible thanks to the powerful DiRAC BlueGene/Q supercomputer in Edinburgh, which our scientists helped IBM to develop, allowing us to simulate chiral fermions at the quark masses seen in nature and with two spacings of the space-time grid (including all effects up to energy scales of 1.75 and 2.3 GeV). This has allowed us to predict continuum physics with the complete elimination of systematic errors arising from mass extrapolation. The calculation was shared between UKQCD and our US-Japan international collaborators in the Riken-Brookhaven-Columbia collaboration, and also made use of BlueGene/Q systems at Argonne and Brookhaven National Laboratories in the US.

Our calculations on neutral kaon mixing are also relevant to CP violation. We presented both the precise continuum results for the standard model process at physical quark masses and for processes that only arise beyond the standard model.

Nuclear Physics


Figure 1. 40Ti one-neutron removal spectral strength distribution associated with Jπ = 1/2+.

It has proven extremely difficult to describe nuclei directly from a set of nuclear forces, apart from the very lightest elements. In the last decade there has been great progress, supported by computational advances, in various methods to tackle more complex nuclei. Many of these are specialised to systems with even numbers of neutrons and protons, or to closed-shell nuclei, where many calculations simplify. Alternative phenomenological approaches, such as the nuclear shell model, can typically only work within a single major shell.

The complications in the problem arise from the structure of the nuclear force, which is noncentral and spin-dependent, as well as from the superfluidity of atomic nuclei. One thus needs to work in large model spaces, take care of the superfluid Cooper pairs, and work with the complex operator structure of the nuclear force. One natural formalism that does all of this is based on the Gor’kov (Green’s function) approach to superfluid systems. Barbieri et al. have recently introduced an approach based on the Gor’kov scheme that works for open-shell nuclei, away from shell closures. A crucial issue for ab initio approaches concerns the ability to perform numerical calculations in increasingly large model spaces, with the aims of thoroughly checking the convergence and of steadily extending the reach to heavier systems.

A long-standing problem with self-consistent calculations of one-body propagators in finite systems concerns the rapid increase in the number of poles generated at each iterative step. This fast growth is expected, as the Lehmann representation of one-body Green’s functions develops a continuous cut along the real energy axis in connection with unbound states. This cut is discretised by a growing number of discrete energy states as the size of the model space is increased. In practical calculations one needs to limit the number of discretised poles, so we use Lanczos algorithms to project the energy denominators onto smaller Krylov spaces.
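The pole-compression step can be illustrated with a minimal Lanczos sketch: a large set of discretised poles (eigenvalues of a big matrix) is replaced by the eigenvalues of a small tridiagonal projection onto a Krylov space. The matrix and starting vector below are toy stand-ins, not a nuclear-structure propagator.

```python
import numpy as np

# Minimal sketch of pole compression with the Lanczos algorithm.
rng = np.random.default_rng(0)
n, k = 500, 30                        # many poles -> k effective poles

energies = np.sort(rng.uniform(-50.0, 50.0, n))   # toy pole energies
A = np.diag(energies)
v = rng.normal(size=n)                # pivot vector, e.g. a source term

def lanczos(A, v0, k):
    """Run k Lanczos steps; return the k x k tridiagonal projection of A."""
    alphas, betas = [], []
    q_prev = np.zeros_like(v0)
    q = v0 / np.linalg.norm(v0)
    beta = 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev     # apply A, subtract previous direction
        alpha = q @ w
        w = w - alpha * q             # orthogonalise against current vector
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / beta
    return (np.diag(alphas)
            + np.diag(betas[:-1], 1)
            + np.diag(betas[:-1], -1))

ritz = np.linalg.eigvalsh(lanczos(A, v, k))   # compressed pole set
# The extremal poles are reproduced well by only k = 30 states:
print(ritz[0], energies[0], ritz[-1], energies[-1])
```

The k Ritz values stand in for the full set of n poles, which is the essence of keeping the iterative propagator calculation tractable as the model space grows.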

As a result of such calculations we can compute spectral strengths in nuclei that are not easily accessible by other theoretical means. As an example, we show below the spectral strength in 40Ti, obtained in model spaces of up to 13 major shells.



2014 Highlights

2014 was another productive year for our PPAN theory community researchers and some of the Science Challenges they tackled are highlighted below.

2014 list of publications


UKMHD Consortium:
The origins of magnetism in the quiet sun


Figure 1. A volume rendering of the magnetic field distribution in a simulation of a region of quiet Sun.

In the outer layers of the solar interior, vigorous convective motions interact with complex magnetic field structures. Regions of quiet Sun (which, by definition, contain no large-scale magnetic features such as sunspots) are characterized by a spatially intermittent distribution of small-scale magnetic fields. Explaining the origin and evolution of quiet Sun magnetic fields is one of the most important challenges in modern solar physics.

It is believed that a significant fraction of quiet Sun magnetism is generated in the surface layers by a convectively-driven dynamo. To simulate this dynamo process, it is necessary to solve the governing equations of magnetohydrodynamics (MHD) for a compressible, electrically-conducting gas in three spatial dimensions and time. This is a computationally challenging problem that can only be tackled by using massively parallel simulations on the DiRAC High Performance Computing facilities.


Figure 2. The temperature distribution (left) and vertical magnetic field distribution (right) from a simulation of a region of quiet Sun.

Using simulations of this type, we have confirmed that a dynamo mechanism could indeed account for the observations. In the absence of magnetic fields, the simulated convective flows are solar-like in many respects, with a “granular” pattern of broad warm upflows, surrounded by a network of cooler, narrow downflows. Furthermore, as suggested by the observations, these granules are organized on a larger (“mesogranular”) scale. When a seed magnetic field is introduced into this system, the convective motions drive a dynamo with a complex, intermittent field distribution that is comparable to that observed in the quiet Sun, with the most persistent magnetic structures accumulating preferentially at the mesogranular boundaries.


Kinetic Plasma Turbulence Highlight

Plasma turbulence is believed to be responsible for the fact that the solar wind – the supersonic expansion of the Sun’s corona – retains a high temperature throughout the heliosphere, even when it is expected to cool due to expansion. The solar wind may be heated by releasing magnetic energy to accelerate particles at thin magnetic discontinuities, or current sheets. In some models these current sheets are formed at the same time as the solar wind leaves the Sun, or they may form spontaneously as part of the turbulence seen in the solar wind.


Figure 1. Isosurfaces of current sheet density showing rippling due to ion drift-kink instability (right), and (left) virtual spacecraft data for magnetic field for the crossing of the current sheet shown in the top panel. (From Gingell et al., 2014, arXiv: 1411.4422.)

Using DiRAC we have carried out self-consistent, particle kinetic plasma simulations of the evolution of thin current sheets in a three-dimensional geometry. We find that there are two important instabilities which control energy release: the ion tearing instability, which releases energy via magnetic reconnection, and the drift-kink instability which, as its name implies, produces a rippled current sheet. The drift-kink instability can reduce the rate of energy release and lead to thicker current sheets which are less liable to reconnect via the tearing instability. The competition between these two instabilities can only be studied with three-dimensional simulations, and these require large memory and processor numbers which would not be possible without DiRAC resources.

Our results indicate that the importance of thin current sheets to solar wind heating might be less than predicted by two-dimensional simulations which only capture the ion tearing instability. Energy release at current sheets could be important if they are actively produced at a high enough rate by large scale turbulence. We have also created virtual spacecraft data which can be used to test our simulations against actual data from spacecraft.


Merging Supermassive Black Holes


Figure 1. Numerical simulation of a gas cloud accreting on to a SMBH binary, performed on the DiRAC-2 Complexity cluster. The simulation followed the evolution of a turbulent, self-gravitating cloud of gas as it falls towards the black holes and forms an accretion disk around the binary.

Almost all galaxies have supermassive black holes (SMBHs) in their centres. These cosmic giants are millions to billions of times more massive than the Sun, and play a key role in shaping the formation and evolution of galaxies. Galaxies grow through collisions and mergers of smaller galaxies, and when pairs of galaxies merge their SMBHs slowly sink towards the centre of the merged galaxy. As they get closer together the “friction” dragging them inwards (actually gravitational encounters with stars) becomes progressively less and less efficient, and this process ultimately stalls when the SMBHs reach separations of around a parsec. However, no SMBH binaries are seen at these separations (despite many years of searching), suggesting that the process(es) that drive SMBH mergers are much more efficient than theory suggests. This long-standing discrepancy between theory and observations is known as the “last parsec problem”.

The most popular solution to this problem is to appeal to a gaseous accretion disc around the SMBH binary. These discs are commonly observed around single SMBHs (where they appear as quasars), and they allow SMBHs to grow by funnelling material down towards the event horizon. However, it is well known that if the binary and disc are aligned (i.e., they orbit in the same plane and rotate in the same direction), then the disc cannot shrink the binary’s orbit quickly enough. The Theoretical Astrophysics Group in Leicester has recently pioneered the idea that misaligned accretion discs may be the key mechanism driving SMBH mergers. The interaction between an accretion disc and a misaligned SMBH binary is very complicated, and can only be studied using large computer simulations. The power of DiRAC-2 allowed us to build detailed simulations of how accretion discs form and evolve around SMBH binaries (see Figure 1, adapted from Dunhill et al. 2014). These are the first models to study the disc formation process self-consistently (previous studies have simply assumed that a disc is present), and to look in detail at how the gravity of the disc shapes the evolution of the binary. As with previous studies, we find that prograde discs (i.e., discs which rotate in the same direction as the binary) do not shrink the binary’s orbit efficiently. However, when discs form in retrograde configurations (i.e., with the gas orbiting in the opposite direction to the binary) their interaction with the binary is much more efficient. Our simulations demonstrated that repeated retrograde accretion events can drive mergers between SMBH binaries on short time-scales, and this process offers an elegant potential solution to the last parsec problem.


UKQCD Beyond Standard Model

Novel strong interactions might provide an explanation for the mechanism of electroweak symmetry breaking. In order for a strong dynamics to be able to explain the electroweak breaking, the theory must be near the onset of the conformal window and possess an anomalous dimension of the chiral condensate of order one. Furthermore, following the experimental observation of a light Higgs boson and no unexpected new particles, a crucial requirement for a Beyond the Standard Model (BSM) theory is the existence of a light scalar in the spectrum.

The a priori interesting theories can be divided into two classes: infrared conformal theories, which are inside the conformal window and are scale-invariant in the limit of infinite distance, and walking theories, showing confinement at large distance, but behaving as infrared conformal theories for a wide range of intermediate scales. Theories in both categories can be used to construct BSM models without violating the existing constraints from precision measurements.


Figure 1. Selected spectral states of the theory in lattice units.

Our investigations of the SU(2) model with one adjoint fermion show that this theory displays crucial dynamical features attributed to phenomenologically viable models of electroweak symmetry breaking (see Figure 1, showing the spectrum from arXiv:1311.4155).

Indeed, simulations performed at values of the couplings in the region connected with the continuum indicate that the system is close to the edge of the conformal window, with an anomalous dimension close to one and a scalar particle much lighter than the other particles in the spectrum.

At the present stage this is the only system showing such intriguing features. Moreover, our calculation provides the first evidence that concrete models with the non-perturbative features mandated by experimental constraints do exist.



Searching for new physics in the magnetic moment of the muon


Figure 1. The interaction between a muon and a photon is complicated by the production of virtual particles.

A vacuum is never completely empty space but teems with particles that are created fleetingly by quantum fluctuations in energy and then disappear. The heavier the particle, the less time its brief existence can last. In that time, however, the particle can interact with the much lighter particles in our everyday world and thereby reveal its existence through tiny discrepancies in the properties of these particles from those expected in the Standard Model. The magnetic moment of the muon shows such a discrepancy, a tantalizing 25±9 parts in 10^10. To pin this down more accurately a new experiment, E989 at Fermilab near Chicago, will reduce the experimental uncertainty by a factor of 4. An improved theoretical uncertainty from the Standard Model to match this needs lattice QCD calculations. This year HPQCD has developed a new method using DiRAC that will enable the needed accuracy to be achieved.


Figure 2. Our results for the strange quark HVP contribution to aμ as a function of lattice spacing.

The muon is a heavier cousin of the electron with the same electric charge and spin, so that it carries a magnetic moment. The ‘gyromagnetic ratio’, g, measures the ratio of magnetic moment to spin (in units of e/2m). Naively, g=2 but this ignores the interactions of the muon with the cloud of virtual particles discussed above. The anomalous magnetic moment, aμ = (g – 2)/2, can be measured very accurately from the difference between the circulation and spin precession frequencies as muons fly round a ring transverse to a magnetic field.
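The definitions above can be put into a small worked example. The quantity actually measured in the storage ring is the anomaly frequency ωa = aμ(e/m)B, the amount by which the spin precession outpaces the circulation. The g value and field strength below are illustrative inputs, not official experimental numbers.

```python
import math

# Worked example of the muon g-2 definitions in the text.
e = 1.602176634e-19       # elementary charge, C
m_mu = 1.883531627e-28    # muon mass, kg
B = 1.45                  # magnetic field, T (typical storage-ring value)

g = 2.00233184            # gyromagnetic ratio (illustrative value)
a_mu = (g - 2) / 2        # anomalous magnetic moment, a_mu = (g-2)/2

# Spin precession minus circulation: the anomaly angular frequency.
omega_a = a_mu * e * B / m_mu          # rad/s
print(a_mu, omega_a / (2 * math.pi))   # ~1.17e-3 and ~2.3e5 Hz
```

Because ωa is directly proportional to aμ, counting precession cycles in a well-known field gives aμ without ever needing g itself to high absolute precision.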

The biggest effect in the Standard Model result for aμ comes from Quantum Electrodynamics where very high order calculations can be done. The largest theoretical uncertainty is from Quantum Chromodynamics and the process of Fig. 1 in which a muon radiates a photon that briefly becomes a quark-antiquark pair. This ‘hadronic vacuum polarisation’ (HVP) contribution needs to be calculated to better than 0.5% in lattice QCD to reduce the theoretical uncertainty. Our new method (arXiv:1403.1778, Phys. Rev D89:114501) makes this possible, having achieved 1% for the first time for the strange quark HVP contribution. Our method has low systematic errors and DiRAC enabled high statistics on realistic gluon field configurations. The up/down quark calculation is now underway. Errors will be reduced by a further factor of 3 as we increase statistics by a factor of 10 in 2015.



To find out what remote planets orbiting other stars are made of, astronomers analyse the way in which their atmospheres absorb starlight of different colours and compare it to a model, or ‘spectrum’, to identify different molecules.

In collaboration with the University of New South Wales we have developed a new spectrum for ‘hot’ methane which can be used to detect the molecule at temperatures above that of Earth, up to 1,500K/1220°C – something which was not possible before. Current models of methane are incomplete, leading to a severe underestimation of methane levels on planets. We anticipate our new model will have a big impact on the future study of planets and ‘cool’ stars external to our solar system, potentially helping scientists identify signs of extraterrestrial life.

Our study, published in PNAS, describes how we used some of the UK’s most advanced supercomputers, provided by the Distributed Research utilising Advanced Computing (DiRAC) project and run by the University of Cambridge, to calculate nearly 10 billion spectroscopic lines, each with a distinct colour at which methane can absorb light. The new list of lines is 2000 times bigger than any previous study, which means it can give more accurate information across a broader range of temperatures than was previously possible.

The comprehensive spectrum we have created has only been possible with the astonishing power of modern supercomputers which are needed for the billions of lines required for the modelling. We limited the temperature threshold to 1,500K to fit the capacity available, so more research could be done to expand the model to higher temperatures still. Our calculations required about 3 million CPU (central processing unit) hours alone; processing power only accessible to us through the DiRAC project.

The new model has been tested and verified by successfully reproducing in detail the way in which the methane in failed stars, called brown dwarfs, absorbs light.
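One reason a ‘hot’ line list must be so much larger is that absorption out of vibrationally excited levels, which is negligible at room temperature, becomes strong at 1,500K. A toy Boltzmann-factor estimate makes the point; the lower-state energy used here is a representative methane vibrational energy chosen for illustration.

```python
import math

# Toy estimate of why hot methane needs vastly more lines. A transition
# starting from an excited level at energy E (in cm^-1) carries a
# Boltzmann population factor exp(-hc*E/kT).
c2 = 1.4388      # second radiation constant hc/kB, in cm*K
E = 1500.0       # representative lower-state energy, cm^-1 (illustrative)

def boltzmann_factor(E_cm, T):
    """Relative population of a level at energy E_cm (cm^-1) at T (K)."""
    return math.exp(-c2 * E_cm / T)

for T in (300.0, 1500.0):
    print(T, boltzmann_factor(E, T))
# At 300 K such a level holds well under 0.1% of the population, so its
# "hot bands" are invisible; at 1500 K it holds roughly a quarter, so
# those bands must be present in the line list.
```

Multiplied over methane's many vibrational levels, this is why billions of extra transitions are needed before the high-temperature opacity is complete.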


CLOVER: Baryon Mass Splittings

Light baryons such as the familiar proton and neutron make up most of the matter in the visible universe. These baryons are built from three ‘flavours’ of quarks: up, down and strange, or u, d and s. Particles group together in families or multiplets; the octet multiplet is shown on the left-hand side of the figure.

The mass splitting between the proton and neutron is a very delicately balanced quantity, partly caused by the mass difference between the u and d quarks, and partly by QED effects. Small changes in this mass difference would have profound effects on the way the universe looks today. The initial balance between hydrogen and helium, established in the first half-hour after the Big Bang, depends strongly on the neutron half-life, and so on the p-n mass splitting. Later, the production of carbon and oxygen in stars also depends strongly on the proton-neutron splitting.


Figure 1. The octet multiplet (left) and a compilation of recent lattice determinations of baryon mass splittings (right).

The strong force binding the quarks together is described by quantum chromodynamics, or QCD, which has to be simulated numerically (via lattice QCD). A compilation of recent lattice determinations of baryon mass splittings is given in the right panel of the figure. In particular, there is also a mass splitting between the particles at the centre of the octet, the Sigma and the Lambda. This is a more complicated case, as the states mix. We have now extended our previous results to include this case and determined the mixing angle and mass splitting. While the effect of mixing on the masses is very small (second order in the angle), it can be much larger for decay rates.
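The statement that mixing shifts the masses only at second order in the angle, while amplitudes change at first order, can be checked with a toy two-state calculation; all numbers below are invented, not lattice results.

```python
import numpy as np

# Two-state mixing toy: masses shift at second order in the mixing
# angle, amplitudes at first order. All numbers are invented.
M_Sigma, M_Lambda = 1.19, 1.12   # unmixed masses (illustrative, GeV)
delta = 0.005                    # small off-diagonal mixing element

H = np.array([[M_Lambda, delta],
              [delta, M_Sigma]])
masses, vecs = np.linalg.eigh(H)          # mixed masses and states

theta = 0.5 * np.arctan2(2 * delta, M_Sigma - M_Lambda)  # mixing angle
shift = masses[0] - M_Lambda              # mass shift of lighter state
admixture = abs(vecs[1, 0])               # heavy component in light state

# shift ~ -delta^2/(M_Sigma - M_Lambda), i.e. quadratic in theta,
# while admixture ~ theta is linear: decay amplitudes feel the mixing
# far more strongly than the masses do.
print(theta, shift, admixture)
```

With these inputs the mass shift is of order theta squared times the splitting, while the state admixture is of order theta itself, matching the statement in the text.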

While the main force between the quarks and antiquarks comes from QCD there is also a contribution from the electromagnetic force, QED, which is usually left out of lattice calculations. We are also doing calculations with both forces included, to see how important the effects of QED are.


Black Holes

Firm observational evidence indicates that supermassive black holes are present at the centres of the majority of galaxies, from very early cosmic times all the way to the present-day Universe. Feedback from these black holes, in the form of large-scale outflows, is believed to be one of the key ingredients shaping the evolution and properties of galaxies in our Universe.

Recent observations (Cicone et al. 2014) have detected the emission from spatially extended cold gas around a bright quasar that existed when the Universe was less than 10% of its current age. This cold gas is moving at a very high velocity of the order of 1000 km/s and has been detected up to 90,000 light years away from the rapidly accreting black hole that is powering the observed quasar. While this high-velocity gas has been interpreted as the signature of a powerful black hole outflow, the observation is in tension with simple theoretical expectations, which suggest that such rapidly moving gas should be very hot rather than cold.

Figure 1. A galaxy hosting a black hole. The hot and rapidly moving outflow originating from the black hole is shown with a black contour, cold gas pockets containing up to a billion Solar masses and moving together with the hot outflow are depicted with grey pixels, and inflowing cosmic web filaments are illustrated in orange hues.

Using the moving-mesh code AREPO, researchers at IoA/KICC, Cambridge have performed some of the most realistic cosmological simulations that follow black hole growth and feedback in the early Universe. In these simulations, a galaxy hosting a bright quasar undergoes extremely high levels of star formation, as it is located at the intersection of cosmic web filaments which deliver copious amounts of gas to it. This eventually drives a “starburst phase” in which a large number of stars explode as supernovae, and their joint effect triggers a galaxy-scale wind. Thus, as the even more powerful black hole-driven outflow leaves the central region of the galaxy and propagates outwards, it sweeps over gas that has been polluted by metals, initially produced within stars and subsequently released by supernova blast waves. These “pockets” of metal-enriched gas cause part of the hot quasar-driven outflow to cool to low temperatures. This is illustrated in Figure 1; the black hole and its host galaxy are situated at the centre of the image.

Thanks to the simulations performed at the DiRAC-2 facility in Cambridge, researchers at IoA/KICC have found that self-consistent modelling of both supernova and black hole outflows leads to the formation of cold gas pockets moving at 1000 km/s and spatially extended over 90,000 light years (30 kpc), in remarkable agreement with the puzzling observations of Cicone et al. 2014.




Figure 1. The image above is a slice through the simulation volume, with the intergalactic gas colour coded from blue to green to red with increasing temperature. The inset images show details in a small region of the simulation centered on a spiral galaxy seen face-on.

The EAGLE simulation project is a Virgo flagship program aimed at creating high-fidelity hydrodynamic simulations of the galaxy population. It has been a major investment of intellectual effort, manpower and computing resources, with game-changing results, opening a window on the role and physics of baryons in the Universe.

EAGLE uses the latest developments in SPH techniques, with state-of-the-art modelling of sub-grid processes such as star formation, feedback and black hole accretion. Unlike previous projects, our feedback scheme does not involve “de-coupling” feedback particles from hydrodynamic forces, or artificially suppressing cooling. The largest simulation, with a volume of 10^6 Mpc^3, contains 1000s of well-resolved galaxies similar to the Milky Way, 100s of galaxy groups and a handful of galaxy clusters. This huge statistical sample allows us to understand the variety of galaxy formation histories, the impact of galaxy environment, and to study rare objects such as highly star-forming sub-mm galaxies.

Obviously, a simulation of this size is of no use unless it is able to accurately reproduce the properties of the local galaxy population. EAGLE has been spectacularly successful: for example, the model correctly predicts the evolution of the galaxy mass function and the rising trend of galaxy specific star formation rates (Schaye et al. 2014 at z=0; Furlong et al. 2014 showing predictions at higher z). Numerous papers are in advanced stages of preparation comparing additional galaxy properties, such as HI mass functions (Bahe et al.), galaxy colour distributions and luminosity functions (Trayford et al.), OVI absorbers (Rahmati et al.) and AGN luminosity functions (Rosas-Guevara et al.). The fidelity of the simulation also makes it a powerful tool for understanding the biases in observational surveys.

Having shown that the simulation’s fidelity is excellent, we can now exploit it as a tool for understanding the physical mechanisms by which galaxies form and evolve.


Extreme QCD: Towards Quantitative Understanding of the QCD Phase Diagram

There are four fundamental forces that describe all known interactions in the universe: gravity; electromagnetism; the weak interaction; and the strong interaction, which is the topic of our research. The strong force binds quarks into hadrons such as protons and neutrons, which in turn form the nuclei of atoms, making up more than 99% of all the known matter in the universe.

Normally, the strong interaction binds quarks so tightly that they never escape from within hadrons. However, at temperatures greater than the deconfining temperature, Tc (which is several trillion degrees Celsius!), it undergoes a substantial change in character, becoming considerably weaker, and quarks become “free”. This new phase of matter is called the “quark-gluon plasma” (QGP) and is presumed to have occurred in the first few microseconds after the Big Bang. Despite the QGP being crucial to the development of the Universe (it was born in this state!) it is poorly understood.

Physicists re-create a mini-version of the QGP by colliding large nuclei (such as gold or lead) in particle accelerators at virtually the speed of light. These experiments have been performed in the Large Hadron Collider at CERN and at the Brookhaven National Laboratory in the USA. The region of QGP created in these experiments is incredibly small – just the size of the colliding nuclei. The QGP “fireball” formed expands and cools very rapidly, quickly returning to normal matter where quarks are tightly bound inside hadrons.

Most hadrons melt in the QGP, meaning that the quarks that were inside them become free, like satellites escaping the Earth’s gravity. However, it was proposed 30 years ago that some heavy “mesons” may remain bound up to quite high temperatures. Our collaboration has calculated the spectral functions of these bottomonium mesons (comprised of two bottom quarks) and we show the results above. Each panel has the results from two nearby temperatures, with the coldest pair in the top-left and the hottest in the bottom-right. The strong peak at around 9.5 GeV shows a bound state which gets weaker as the temperature increases, but remains in existence up to around 2Tc. This confirms the decades-old hypothesis that these mesons do not melt as soon as the QGP is formed, and that measuring their features is an effective way of determining the system’s temperature, i.e. they can be used as a “thermometer” of the QGP.



COSMOS consortium researchers have been exploiting the DiRAC HPC Facilities to make progress towards ambitious milestones in five key inter-related areas: (i) the extreme universe, (ii) the cosmic microwave sky, (iii) dark energy, (iv) galaxy formation and (v) black holes.


Figure 1. CMB temperature (left) and E-mode polarization (right) bispectrum reconstruction obtained from the Planck Full Mission data using the MODAL estimator on COSMOS. Isosurfaces are shown for both positive (red) and negative (blue) values.

Planck Satellite Science with Polarization:

The COSMOS Consortium flagship project to analyse and interpret data from ESA’s Planck Satellite has gone forward, building on the success of the 2013 First Data Release, to analyse the Planck Full Mission data with CMB polarization in results announced in December 2014. DiRAC resources were used to derive accurate estimates of cosmological parameters, showing interesting deviations from earlier WMAP results and checking the consistency of the temperature results using the polarization data. Planck data were used to impose world-leading limits on cosmic non-Gaussianity that substantially constrain inflationary Universe paradigms. The three-point correlator (bispectrum) was evaluated to high precision for the first time in both temperature and polarization (and mixed bispectra), with a wide-ranging model survey undertaken, which yielded tight constraints on standard local, equilateral and orthogonal non-Gaussianity, as well as a measurement of the lensing bispectrum.

Extreme Universe – Black Holes:

Consortium members continue with world-leading black hole simulations, developing new methods to study black-hole collisions – events so violent their output eclipses the entire electromagnetic universe. Our simulations on COSMOS demonstrate that up to 50% of the total energy can be converted into gravitational waves in such collisions, supporting present assumptions in the analysis of collision search experiments at the LHC. We have also been preparing gravitational wave templates for the Advanced LIGO experiment, which will begin operating in 2015. In addition, we have developed and tested a numerical relativity code, GRChombo, the first full adaptive mesh refinement code which can be used to simulate black hole collisions and gravitational wave generation in early universe cosmology.

Galaxy Surveys, Dark Energy and Dark Matter:

Consortium resources have been deployed to analyse galaxy surveys through N-body simulations, to better understand the properties of the universe and its perturbations. Much of the focus has been on preparing for the Dark Energy Survey (DES), as well as for ambitious future surveys with DESI, Euclid and the SKA. Power spectrum measurements from Baryon Oscillation Spectroscopic Survey (BOSS) data and from mock catalogues corresponding to the Data Release 10–12 samples were performed using COSMOS.


Domain Wall Highlight

We have pursued the first ever lattice simulations of Quantum Chromodynamics (QCD), the theory of the strong force binding quarks and gluons, with continuum-limit results from chiral lattice quarks at their physical masses. Only left-handed quarks couple to the electroweak forces; this preference for left-handed over right-handed means that, surprisingly, the weak forces break reflection (or parity) symmetry. Our chiral approach reproduces the handedness symmetry of continuum QCD and is particularly well suited to determining the rates of weak decay processes involving left-handed quark operators. Our results confirm the strong force as the confining theory of hadrons (particles made from quarks). The mass spectrum and pion decay constant agree with experimental determinations at the fraction-of-a-percent level, and allow a determination of the Vus quark flavour mixing parameter of the Standard Model. These and the following achievements are crucial ingredients in interpreting the results from the large experimental facilities (e.g. at CERN).


Figure 1. The figure above shows, left, the fit to simulation data for the decay constant from arXiv:1411.7017 and, right, the action density for one of the gauge fields generated on the DiRAC facility.

Quark Masses:

We have determined the up/down and strange quark masses – fundamental constants of nature – to a fraction of a percent.

Neutral Kaon oscillations:

In the Standard Model (SM) of particle physics, weak interactions allow quarks to change from one type or flavour to another in a highly constrained pattern. Quark flavour-changing processes with no accompanying change in electric charge are very rare and present a small background against which to search for new physics. The SM allows electrically-neutral kaons to oscillate into their own antiparticles in a way which breaks the combined symmetry of particle-antiparticle exchange (charge) and mirror reflection (parity), known as CP symmetry. CP violation is essential to enable the dominance of matter over antimatter in the Universe and was first discovered in the neutral kaon system. We have calculated the neutral kaon oscillation amplitude BK to 0.3% accuracy.


Vigorous activity by UKQCD has leveraged STFC investment in DiRAC, achieving around a 200× speedup from code and algorithmic improvements:

  • UKQCD scientists worked for five years with IBM research to help design the BlueGene/Q chip (being responsible for 8% of the chip area);
  • UKQCD have developed new multi-grid algorithms that adaptively accelerate the calculation of quark propagators;
  • A volume averaging technique developed by RBC-UKQCD, known as AMA, has lowered our statistical errors to around the 0.2% level.



Large Scale Triggering of Star Formation and Molecular Clouds:

This project has received the bulk of our effort over the past 12 months. We have been running 127-million-particle SPH simulations of a spiral shock to study the formation of molecular clouds and to establish the first self-consistent initial conditions for simulations of star formation. These simulations follow the formation of molecular clouds along a 4 kpc length of a spiral arm with a total mass of 20 million solar masses, and can resolve the formation of self-gravitating structures down to 10 solar masses. We have been studying the importance of the thermal physics in triggering star formation, and have performed several tests to ensure that the dynamics of the shocked regions are properly captured. A companion study using grid-based simulations has been completed which investigates the dynamics of single clouds entering a spiral arm. This study showed that the gravitational potential, plus shock-induced hydrodynamical instabilities, can drive turbulence into the cloud, reinforcing the conclusions from our Smoothed Particle Hydrodynamics simulations.

We are now developing full galactic models of the ISM including live potentials. This work will form a significant part of our simulation work in 2015.

The Galactic Centre:

We have concluded studies on star formation in the Galactic centre and have expanded our scope to study the dynamics of the Central Molecular Zone (CMZ), including the “Molinari” ring. These simulations show how a single cloud can reproduce the overall observed features; a detailed comparison with observations is ongoing. We have also used larger-scale simulations of the inner 500 pc of the Galaxy to study how non-axisymmetric structures interact with infalling gas, in order to probe the relationship between the CMZ and the inner star formation activity.


2015 Highlights

At DiRAC we are proud to have made some major contributions to the areas of theoretical modelling and HPC-based research in the STFC Frontier Science agenda. Below you can find information on the Science Highlights from some of the projects that DiRAC is involved in…

2015 list of publications



Stress Testing the Standard Model

A vacuum is never completely empty space but teems with particles that are created fleetingly by quantum fluctuations in energy and then disappear. The heavier the particle, the less time its brief existence can last. In that time, however, the particle can interact with the much lighter particles in our everyday world and thereby reveal its existence through tiny discrepancies in the properties of these particles from those expected in the Standard Model. The magnetic moment of the muon shows such a discrepancy, a tantalizing 25±9 parts in 10¹⁰. To pin this down more accurately, a new experiment, E989 at Fermilab near Chicago, will reduce the experimental uncertainty by a factor of 4. An improved theoretical uncertainty from the Standard Model to match this needs lattice QCD calculations. This year, using DiRAC, HPQCD developed a new method that will enable the needed accuracy to be achieved.


Figure 1. Determination of meson decay constants, compared to experimental results where available. HPQCD, arXiv:1503.05762, arXiv:1408.5768.

The muon is a heavier cousin of the electron with the same electric charge and spin, so it too carries a magnetic moment. The ‘gyromagnetic ratio’, g, measures the ratio of magnetic moment to spin (in units of e/2m). Naively g = 2, but this ignores the interactions of the muon with the cloud of virtual particles discussed above. The anomalous magnetic moment, aμ = (g − 2)/2, can be measured very accurately from the difference between the circulation and spin precession frequencies as muons fly around a storage ring transverse to a magnetic field.
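These definitions can be made concrete in a few lines. The sketch below is illustrative only, using the idealised storage-ring relations (no electric-field or pitch corrections); the function names are ours, not any experiment's code:

```python
# Illustrative sketch: relating g, the anomaly a_mu, and the
# precession frequencies measured in a storage ring.

def anomaly_from_g(g):
    """The anomalous magnetic moment a_mu = (g - 2) / 2."""
    return (g - 2.0) / 2.0

def anomaly_from_frequencies(omega_a, omega_c, gamma):
    """Idealised case: the anomalous precession frequency
    omega_a = omega_s - omega_c satisfies omega_a = a_mu * eB/m,
    while the cyclotron frequency is omega_c = eB/(m*gamma),
    so a_mu = (omega_a / omega_c) / gamma."""
    return (omega_a / omega_c) / gamma

# With the approximate measured value g ≈ 2.00233184:
a_mu = anomaly_from_g(2.00233184)
print(a_mu)  # ≈ 1.16592e-3
```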

The biggest effect in the Standard Model result for aμ comes from Quantum Electrodynamics, where very high order calculations can be done. The largest theoretical uncertainty is from Quantum Chromodynamics and the process of Fig. 1, in which a muon radiates a photon that briefly becomes a quark-antiquark pair. This ‘hadronic vacuum polarisation’ (HVP) contribution needs to be calculated to better than 0.5% in lattice QCD to reduce the theoretical uncertainty. Our new method (arXiv:1403.1778, Phys. Rev. D89:114501) makes this possible, having achieved 1% accuracy for the first time for the strange quark HVP contribution. Our method has low systematic errors, and DiRAC enabled high statistics on realistic gluon field configurations. The up/down quark calculation is now underway. Errors will be reduced by a further factor of 3 as we increase statistics by a factor of 10 in 2015.
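The quoted factor of 3 is the usual Monte Carlo scaling: statistical errors fall as 1/√N, so ten times the statistics shrinks the error by √10:

```python
# Statistical errors scale as 1/sqrt(N): a tenfold increase in
# statistics reduces the error by sqrt(10), roughly the quoted factor of 3.
import math

error_reduction = math.sqrt(10)
print(round(error_reduction, 2))  # 3.16
```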


Kinetic Plasma Turbulence:  Determining the shape of Type Ia supernova explosions

Type Ia Supernovae (SNe Ia) are among the most energetic phenomena in the Universe and are important for many different astrophysical research fields (e.g. theories of binary stellar evolution, chemical enrichment of galaxies, and cosmology). These spectacular events are believed to be associated with thermonuclear explosions of white dwarf stars but answers to the questions of when, why and how the explosions are triggered remain unclear.


Figure 1. Flux and polarization spectra (black) predicted 7 days before maximum for the merger model of Pakmor et al. (2012) viewed from a specific orientation. For comparison, grey lines show observed spectra of SN 2004dt at the same epoch.

In the past few decades spectropolarimetry has played a leading role in the search for a comprehensive picture of SNe Ia (Wang & Wheeler 2008). The particular value of this technique is its ability to probe the geometry of the explosion: linear polarization, attributed to electron scattering, has been detected in SNe Ia, suggesting that asymmetries in the progenitor system and/or explosion mechanism are present. However, interpreting polarimetric data is challenging, and careful modelling is required to make the connections between observed spectropolarimetry and specific ejecta geometries. Our current research aims to address this challenge by making polarization predictions from modern multi-dimensional explosion models.

We recently developed a new Monte Carlo technique for use in our radiative transfer code (Kromer & Sim 2009) that allows us to calculate polarization signatures for hydrodynamical explosion models (Bulla et al. 2015). Compared to simpler techniques, this method leads to a substantial reduction in the Monte Carlo noise and thus in the required computing time (a factor of ~ 50). The combination of this new technique and access to the super-computing facilities provided by DiRAC has now made it feasible to compute polarization spectra for fully three-dimensional explosion models using simulations that run for only a few days.

In a recent paper (Bulla et al. 2016), we presented polarization calculations for the violent merger of two carbon-oxygen white dwarfs. The violent merger model has received considerable attention in the past few years, but our study is the first to investigate its spectropolarimetric signatures in detail. We demonstrated that the particular merger model of Pakmor et al. (2012) provides a remarkably good match with some highly-polarized SNe Ia (see Fig. 1), but is too asymmetric to reproduce the low levels of polarization commonly seen in many other SNe Ia. We are now extending our studies to alternative explosion scenarios: this will allow us to determine how spectropolarimetry can be most effectively used to distinguish and test explosion theories.

Bibliography: Bulla et al. 2015, MNRAS, 450, 967; Bulla et al. 2016, MNRAS, 455, 1060; Kromer & Sim 2009, MNRAS, 398, 1809; Pakmor et al. 2012, ApJL, 747, L10; Wang & Wheeler 2008, ARA&A, 46, 433.


HORIZON UK-Consortium:  Galaxy alignments from cosmological hydrodynamics simulations

Galaxies come in a variety of shapes but, in a simplified view of galaxy formation, one can classify them either as spirals or as ellipticals. The orientation and ellipticity of these spirals and ellipticals is not random; it depends on their formation history and their environment. In particular, tidal forces from the large-scale structure of the Universe (see figure below) play a role in determining correlations between ellipticity, orientation and the underlying (dark) matter density field. Such correlations are referred to as ‘intrinsic alignments’.

Meanwhile, future imaging surveys from the Euclid satellite and the Large Synoptic Survey Telescope aim to measure per-cent-level perturbations to galaxy ellipticities caused by a different effect: the deflection of photons from their original paths by the gravity of these same large-scale structures. This ‘gravitational lensing’ effect constitutes a powerful probe of the nature of the mysterious dark energy that drives the accelerated expansion of our Universe.


Figure 1. Left panel: 28 Mpc slice through the Horizon-AGN simulation cube (140 Mpc on a side), at redshift z=0.5. Right panel: 10x zoom on a region located in the bottom left part of the cube. The different colours indicate gas density (in green), gas metallicity (in blue) and stars (in red). Galaxies are identified as clusters of stars. Their shapes are measured to investigate intrinsic ellipticity correlations that contaminate the weak gravitational lensing signal used by astronomers to constrain the nature of dark energy.

Therefore, improving our understanding of intrinsic alignments, so that they do not contaminate dark energy observables, is paramount. Thanks to the DiRAC facility, our group (Horizon-UK) led an analysis of unprecedented statistical scope and detail of galaxy shape and spin alignments, mining the huge dataset from the state-of-the-art hydrodynamic cosmological simulation Horizon-AGN. These pioneering results are reported in Chisari et al., 2015, MNRAS, 454, 2736.


Leicester: First pre-infall mass estimate of a Milky Way satellite galaxy

We have used the DiRAC-2 Complexity cluster to obtain a mass estimate of the Carina dwarf spheroidal (dSph) satellite of the Milky Way at the time when it first fell into the gravitational field of our Galaxy (about 6 Gyr ago). This is the first primordial mass estimate that has been obtained directly from observed data without recourse to matching with cosmological simulations.


Figure 1. Comparison of our mass estimate for Carina (black and red data points with error bars) with predictions for the relationship between stellar mass (y-axis) and dynamical mass (x-axis) obtained via abundance matching of observed galaxies and simulated halos. (Figure reproduced from Ural et al., 2015.)

The dSphs which surround the Milky Way are vital laboratories in which to test theories of galaxy formation. Not only are they the lowest-mass stellar systems known to contain dark matter, but they also populate the faint end of the galaxy luminosity function. Over the past 15 years, several issues have been identified which appear to place cosmological galaxy formation simulations in tension with observations: (1) simulations which contain only dark matter over-predict the number of objects around a Milky Way-like galaxy by at least an order of magnitude (the “missing satellites” problem; Moore et al., 1998); (2) simulated haloes exhibit central density cusps in their dark matter profiles, while observations are often claimed to be more consistent with uniform-density central cores (the “cusp-core” problem); (3) simulations predict the existence of dark haloes whose masses should have allowed them to form stars but which are apparently missing from the census of observed galaxies (the “too big to fail” problem; Boylan-Kolchin et al., 2012). The resolution of all three issues requires knowledge of the masses of the dwarf satellites both at the present epoch and at the time of their formation.

Despite the availability of large observed data sets containing measured velocities for 1000s of stars in several of the brighter Milky Way satellites, previous analyses of their mass distributions have made many simplifying assumptions. In particular, most models assume that the dwarf galaxy is in dynamical equilibrium and isolated from external perturbations. For small Milky Way satellite galaxies, both these assumptions are of questionable validity. N-body simulations provide a natural way to relax these assumptions and explore the impact of the Milky Way on the evolution of a dwarf galaxy. We have combined a Markov-Chain-Monte-Carlo algorithm with an N-body simulation code to explore the parameter space of models of the Carina dwarf galaxy using 20,000 medium-resolution simulations.
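The parameter-space exploration described above can be sketched as a Metropolis-Hastings loop in which each likelihood evaluation wraps a simulation. This is a toy illustration, not the actual Carina pipeline: the `simulate` stand-in, its square-root scaling, and every number here are hypothetical placeholders for a real N-body run.

```python
# Toy Metropolis-Hastings sketch: in the real analysis each call to
# `simulate` would be a full N-body simulation of the dwarf galaxy.
import math
import random

random.seed(1)

def simulate(mass):
    # Hypothetical stand-in for an N-body run: a toy model in which
    # the velocity dispersion scales as sqrt(mass).
    return math.sqrt(mass)

def log_likelihood(mass, sigma_obs, sigma_err):
    # Gaussian comparison of the simulated and observed dispersions.
    sigma_model = simulate(mass)
    return -0.5 * ((sigma_model - sigma_obs) / sigma_err) ** 2

def mcmc(sigma_obs, sigma_err, n_steps=5000, step=0.5):
    mass = 10.0  # arbitrary starting guess
    logl = log_likelihood(mass, sigma_obs, sigma_err)
    chain = []
    for _ in range(n_steps):
        trial = mass + random.gauss(0.0, step)
        if trial > 0:  # masses must be positive
            trial_logl = log_likelihood(trial, sigma_obs, sigma_err)
            # Metropolis acceptance rule
            if math.log(random.random()) < trial_logl - logl:
                mass, logl = trial, trial_logl
        chain.append(mass)
    return chain

chain = mcmc(sigma_obs=3.0, sigma_err=0.2)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
print(posterior_mean)  # clusters near mass ≈ 9, since sqrt(9) = 3
```

In the published analysis this idea is scaled up: many chains run in parallel, and ~20,000 medium-resolution simulations stand in for the toy `simulate` call.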

We identified the Carina dSph as an interesting target for this initial study due to the large amount of available kinematic data and the fact that there were already hints that its outer regions are being disturbed by the Milky Way (e.g. Muñoz et al., 2006). Our algorithm was able to constrain the present-day mass of Carina to be (7±3)×10⁷ solar masses. More interestingly, we were also able to constrain Carina’s mass at the time it first fell into the Milky Way to be 3.9 (−2.4, +3.9)×10⁸ solar masses. Our mass estimate is significantly lower than the halo mass generally associated with dSphs of comparable luminosity to Carina. In Ural et al. (2015), we suggested that this may be evidence that galaxy formation becomes stochastic in low-mass haloes.

The calculations for this project consumed approximately 1M core hours on the Complexity cluster of DiRAC-2, as a very significant number of simulations were required to verify the efficacy of the method using artificial data sets before it could be applied to the observed data with confidence. By using the Complexity cluster we were able to run multiple MCMC chains in parallel which greatly reduced the overall time required to complete the project. The work is reported in Ural, Wilkinson, Read & Walker, 2015 (Nature Communications, 6, 7599).


Black Holes

Quasars are extremely bright objects that reside in the centres of galaxies. They are believed to consist of a supermassive black hole (some as large as a billion times the mass of our own Sun) that is surrounded by gas. These black holes grow in size by swallowing their reservoir of gas and, in doing so, release an enormous amount of energy into their host galaxies. This energy affects the growth of each galaxy, changing where and when stars form and playing an important role in the different appearances of galaxies that we observe in the Universe today.


Figure 1.

Such quasars are very rare in the present day Universe, but were much more active in the past. Because of their brightness, some of the most distant objects we can see are some of the most massive quasars, existing barely 1 billion years after the Big Bang. The presence of these objects raises many questions: how were these objects able to grow to such a large mass so quickly? What do the galaxies that host such objects look like? How does the growth of such a massive black hole affect the surrounding environment?

Using the DiRAC supercomputers at the Universities of Cambridge, Leicester and Durham, researchers at the Institute of Astronomy and the Kavli Institute for Cosmology, Cambridge, simulated the growth of a galaxy containing a quasar similar to those we observe. These results are summarised in a recent Letter to MNRAS by Curtis and Sijacki, accepted on December 10, 2015. Vital to the work was a new computational technique, developed by Curtis and Sijacki over the last year (2015, MNRAS, 454, 3445) thanks to the unique computational resources provided by DiRAC, which significantly increases the resolution of simulations around the central black hole. This novel technique enabled the flow of gas around black holes to be captured with unprecedented accuracy in terms of both mass and spatial resolution.

The work by Curtis and Sijacki provides new insights into what we might hope to observe in the near future around these distant quasars with the James Webb space telescope and the Atacama Large Millimeter/submillimeter Array. Long thin structures, called filaments, transport gas from the outskirts of the galaxy all the way to the central region. These filaments grow a disc of gas that orbits around the black hole. The gas in this disc is relatively cold, which allows it to become dense enough to form stars at a dramatic rate. A fraction of the gas escapes the disc and falls onto the black hole, releasing large quantities of energy. This drives a powerful flow of super-heated gas out of the central galaxy, oriented in such a way that it does not destroy the surrounding disc, which is the key novel result of the model.

In the coming years there will be many observations of quasars in the early Universe. The ways in which they compare with the predictions of Curtis and Sijacki will provide clues to the physical processes that we do not fully understand, allowing deeper insight into the role black holes play in shaping our Universe.


VIRGO: Modern cosmological test of gravity

A hot topic in cosmology is testing gravity on scales well beyond our Solar System. 2015 marked the centenary of Einstein’s publication of his General Theory of Relativity (GR), which is still our standard theory of gravity. Although GR has been tested rigorously in the Solar System, its application in cosmology is still largely an extrapolation. Previous cosmological observations did not have the precision to test it accurately, but the situation is changing quickly in light of future galaxy and cluster surveys such as Euclid, LSST, DESI and eROSITA, all of which aim to test GR as a primary science objective. Over the next decade, these observations will determine key cosmological parameters with percent-level precision, and bring cosmological tests of GR to a similar precision for the first time. They will also address another critical question in modern cosmology – whether the mysterious accelerated expansion of our Universe is due to some unknown exotic matter (dark energy) or to a different behaviour of gravity on cosmological scales.


Figure 1. The velocity field in a slice of the simulation box for the standard (left) and non-standard (right) gravity models. Colours illustrate the amplitude of the velocity field. Filamentary structures, towards which matter falls, are clearly visible, as are the differences between the models around these filaments. It is through differences such as these that different gravity models can be tested against observational data.

To fully exploit those billion-dollar surveys and to achieve their scientific potential of testing gravity, it is crucial to be able to accurately predict the observable signatures of non-standard gravity theories. This is a numerically intensive problem, which depends critically on powerful supercomputing resources such as DiRAC-2. Indeed, DiRAC has enabled us to make significant progress over the past few years [1-11]. These efforts have led to the development of state-of-the-art algorithms and simulation codes, such as ECOSMOG, the first ever parallelised N-body code for such theories [1-3], as well as a recent code [8] implementing a novel algorithm to trace light rays across cosmological simulations on the fly. These codes have been widely used in studies in this field, and have ruled out popular non-standard models such as Galileon gravity [7]. More recently, tests on DiRAC improved the performance of ECOSMOG by over 10 times for some theories [9], making it possible to study such theories with large, high-resolution simulations in the future.

The numerical studies also have implications for other fields. For example, the Galileon model features nonlinear differential equations similar to the famous Monge–Ampère equation in the optimal transportation problem, and its study has led to a new stable algorithm for solving this type of equation. These studies have fully equipped us [8-10] with efficient tools to make accurate predictions for a number of cosmological observables, including weak gravitational lensing, redshift-space distortions and clusters of galaxies, for which we expect precise observational data in the next decade. They have prepared us for accurate cosmological tests of gravity and will help shed light on the origin of the accelerated expansion of our Universe. Further exciting progress is expected in the next few years.

Bibliography: [1] Li B., Zhao G., Teyssier R., Koyama K., JCAP, 01, 051 (2012); [2] Li B., Zhao G., Koyama K., JCAP, 05, 023 (2013); [3] Li B., et al., JCAP, 11, 012 (2013); [4] Li B., et al., MNRAS, 428, 743 (2013); [5] Jennings E., et al., MNRAS, 425, 2128 (2012); Hellwing W. A., et al., Phys. Rev. Lett., 112, 221102 (2014); [6] Shi D., et al., MNRAS, 452, 3179 (2015); Cai et al., MNRAS, 451, 1036 (2015); [7] Barreira A., et al., JCAP, 08, 028 (2015); JCAP, 08, 059 (2014); [8] Barreira A., et al., JCAP, submitted; [9] Barreira A., Bose S., Li B., JCAP, submitted; [10] He J., Li B., Phys. Rev. Lett., submitted.



Accretion is an important process in many fields of astrophysics, from star formation to the study of compact binaries and Type Ia supernovae. Continuous theoretical and numerical efforts have been devoted to understanding the interaction between the inner edge of an accretion disk and the stellar surface. Complementary to these works, there are also many studies devoted to the effect of accretion on the structure and evolution of objects, using either simple semi-analytic approaches or simulations based on one-dimensional (1D) stellar evolution codes. But no multi-dimensional numerical study has ever been devoted to the effect of accretion on the structure of accreting objects. Because of this lack of sophisticated modelling, great uncertainties remain regarding the fraction of accretion luminosity imparted to an accreting object or radiated away, how the accreted material is redistributed over the stellar surface, and the depth it reaches. Early accretion history may impact the properties of stellar objects (luminosity, radius and effective temperature) in star-forming regions and young clusters even after several Myr, when the accretion process has ceased. These effects could provide an explanation for several observed features of star-forming regions and young clusters, with important consequences for our general understanding of star formation.


Figure 1. Distribution of tracer particles accreted onto a young sun for the cold (e1) and hot (e9) accretion cases, respectively. The colour scale shows the radial velocity in units of cm/s and the grey-scale points show the positions of the tracer particles. The cold accretion case shows rapid mixing of the accreted particles within the convective envelope; the hot accretion case shows the formation of a hot surface layer where particles accumulate and do not sink.

The DiRAC resource Complexity has been used to perform the first hydrodynamical simulations describing the multi-dimensional structure of accreting, convective low-mass stars. The simulations were computed using our fully compressible, time-implicit code MUSIC (Multidimensional Stellar Implicit Code), recently improved by the implementation of a Jacobian-free Newton–Krylov time integration method, which significantly improves the performance of the code. These developments and the benchmarking of the code were carried out on Complexity. These new accretion calculations have enabled us to understand the effect of so-called “hot” accretion, in which the accreted material has a larger entropy than the bulk of the stellar material.

The multi-D simulations show an accumulation of hot material at the surface that does not sink and instead produces a hot surface layer, which tends to suppress convection in the stellar envelope. This enables us to derive an improved treatment of accretion in 1D stellar evolution models, based on an accretion boundary condition. Such a treatment of accretion in a 1D stellar evolution code is more physical than the standard assumption of redistribution of the accreted energy within the stellar interior. These new results will be published in Geroux, Baraffe, Viallet et al. (2016, A&A, submitted). The impact of this more physical accretion boundary condition on the evolution of young low-mass stars and brown dwarfs, and on observations of young clusters, will be explored in follow-up studies. These pioneering simulations performed on the DiRAC facility show the potential of time-implicit multi-D simulations to improve our understanding of stellar physics and evolution.


2016 Highlights

2016 was another productive year for our PPAN theory community researchers, and some of the Science Challenges they tackled are highlighted below.

2016 list of publications


Gravitational waves:  DiRAC simulations play a key role in gravitational-wave discovery


Figure 1. Wave signal detected by LIGO (top), waveforms predicted by general relativity (middle) and the residual from matching these (bottom).

On February 11, 2016, the LIGO collaboration announced the first direct detection of gravitational waves and the first observation of binary black holes. Accurate theoretical models of the signal were needed to find it and, more importantly, to decode the signal to work out what the source was. These models rely on large numbers of numerical solutions of Einstein’s equations for the last orbits and merger of two black holes, for a variety of binary configurations. The DiRAC Data Centric system, COSMA5, was used by researchers at Cardiff University to perform these simulations. With these results, and together with international collaborators, they constructed the generic-binary model that was used to measure the masses of the two black holes that were detected and the mass of the final black hole, and to glean some basic information about how fast the black holes were spinning. Their model was crucial in measuring the properties of the gravitational-wave signal, and the DiRAC Data Centric system COSMA5 was crucial in producing that model. More information on the detection of gravitational waves can be found at the LIGO collaboration website.

In the figure to the right, the top plot shows the signal of gravitational waves detected by the LIGO observatory located in Hanford, USA, whilst the middle plot shows the waveforms predicted by general relativity. The X-axis plots time and the Y-axis plots the strain, the fractional amount by which distances are distorted by the passing gravitational wave. The bottom plot shows that the LIGO data match the predictions very closely. (Adapted from Fig. 1 in Physical Review Letters 116, 061102 (2016).)
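For a sense of scale, the strain h is the fractional length change ΔL/L. A back-of-the-envelope sketch, assuming a peak strain of order 10⁻²¹ (the magnitude reported for this event) across a 4 km LIGO arm:

```python
# Strain h = delta_L / L: the fractional distortion of distances by a
# passing gravitational wave. Order-of-magnitude numbers only.
L_arm = 4000.0    # LIGO arm length in metres
h_peak = 1.0e-21  # peak strain, order of magnitude

delta_L = h_peak * L_arm
print(delta_L)  # 4e-18 m, a tiny fraction of a proton's radius
```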



Lattice QCD and the Search for New Physics


Figure 1. The interaction between a muon and a photon is complicated by the production of virtual particles such as quarks and gluons.

If you could see deep into the subatomic world with slow-motion glasses, you would see empty space teeming with particles, appearing and disappearing in a tiny fraction of a second. If you were very lucky you might glimpse a particle so heavy and rare as to have escaped detection so far at CERN’s Large Hadron Collider. Such particles would be harbingers of new physics going beyond our existing Standard Model, but are proving very hard to find.

One way to demonstrate their existence indirectly is through the impact they have, as part of the teeming vacuum, on the properties of particles we can study directly. A new experiment will start shortly at Fermilab near Chicago to measure the magnetic moment of the muon (a heavier cousin of the electron) to an astonishing accuracy of 1 part in 10¹⁰. If we can calculate the effect of all the particles in the Standard Model on this magnetic moment, then a discrepancy with experiment would indicate the existence of new particles. An improved theoretical uncertainty from the Standard Model needs numerical calculations for the theory of the strong interaction, QCD. This is because the largest source of uncertainty comes from quarks, which are strongly interacting (see Figure 1); this contribution is known as the hadronic vacuum polarization (HVP). HPQCD has developed a new lattice QCD method using DiRAC to determine the HVP with improved accuracy.


Figure 2. A summary of results for the hadronic vacuum polarization contribution to the magnetic moment of the muon. Our new result (purple square), shows significant improvement on earlier lattice QCD results (pink square). Results in the lower half are derived from experimental data from electron-positron collisions. The green circle shows the value inferred from the current experimental result for the magnetic moment. The discrepancy may be a first sign of new physics.

We used our method to determine the strange and charm quark contributions to the HVP (arXiv:1403.1778) for the first time from lattice QCD, and then the very small bottom quark contribution (1408.5768). In the past year we have determined the up and down quark contribution (1601.03071) and estimated the effect of disconnected diagrams (lower picture in Figure 1) (1512.03270). Our final result, summing all of these contributions, has a total uncertainty of 2% (see Figure 2). The error is dominated by possible systematic effects from ignoring the electric charge of the quarks and from taking the masses of the up and down quarks to be the same. Work is now underway to include both of these effects in the calculation to reduce the uncertainty further, ahead of the new experimental results in 2018.
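Summing the contributions works as described above: the central values add linearly, while independent uncertainties combine in quadrature. A minimal sketch of that bookkeeping, using hypothetical placeholder values (not the published HPQCD numbers):

```python
import math

# Illustrative (hypothetical) contributions to the muon HVP in units of 1e-10,
# each as (central value, uncertainty); NOT the published lattice results.
contributions = {
    "up/down": (600.0, 12.0),
    "strange": (53.0, 0.6),
    "charm": (14.4, 0.4),
    "bottom": (0.3, 0.1),
    "disconnected": (0.0, 5.0),
}

# Central values add linearly.
total = sum(c for c, _ in contributions.values())
# Independent uncertainties add in quadrature.
err = math.sqrt(sum(e * e for _, e in contributions.values()))
print(f"HVP = {total:.1f} +/- {err:.1f} (x 1e-10), i.e. {100 * err / total:.1f}%")
```

With these placeholder inputs the combined error is dominated, as in the text, by the up/down and disconnected pieces.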


Post-periapsis Pancakes: Sustenance for Self-gravity in Tidal Disruption Events

A Tidal Disruption Event (TDE), which occurs when a star is destroyed by the gravitational field of a supermassive black hole, produces a stream of debris whose evolution ultimately determines the observational properties of the event. Typically, investigations of this process either predict the future accretion of material by assuming the debris orbits are Keplerian, or simulate restricted parameter regimes where the computational cost is a small fraction of that for more realistic parameters. In a series of papers that took advantage of Complexity@DiRAC, we simulated the long-term evolution of the debris stream for realistic TDE parameters, revealing new effects that can significantly affect the observable properties of these events (Coughlin & Nixon 2015; Coughlin, Nixon et al. 2016a,b). The figure to the right is taken from Coughlin, Nixon et al. (2016a: MNRAS 455, 4, 3612). In this work, we showed that a post-periapsis caustic – a location where the locus of gas parcels comprising the stream would collapse into a two-dimensional surface if they evolved solely in the gravitational field of the hole – occurs when the pericentre distance of the star is of the order of the tidal radius of the hole. We showed that this ‘pancake’ induces significant density perturbations in the debris stream, and that these fluctuations can be sufficient to gravitationally destabilize the stream, resulting in its fragmentation into bound clumps.
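The Keplerian approximation mentioned above can be made concrete: debris with specific binding energy ε returns to pericentre after one Keplerian period, t = 2πGM/(2ε)^{3/2}. A rough sketch for a hypothetical 10^6 solar-mass hole disrupting a sun-like star (illustrative numbers only):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
M_bh = 1e6 * M_sun   # hypothetical 10^6 solar-mass black hole

def keplerian_return_time(eps):
    """Orbital period (s) of bound debris with specific binding energy eps
    (J/kg), assuming a purely Keplerian orbit about the hole; self-gravity
    and pressure are neglected, as in the approximation the text describes."""
    return 2.0 * math.pi * G * M_bh / (2.0 * eps) ** 1.5

# Most-bound debris from a sun-like star disrupted at the tidal radius:
R_star = 6.957e8                         # solar radius, m
r_t = R_star * (M_bh / M_sun) ** (1 / 3) # tidal radius
eps_max = G * M_bh * R_star / r_t**2     # spread in specific binding energy
t_fb = keplerian_return_time(eps_max)
print(f"fallback time of most-bound debris ~ {t_fb / 86400:.0f} days")
```

This gives the characteristic weeks-to-months fallback timescale; the simulations described above reveal the departures from this idealised picture.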


Figure 1. Column density rendering of the stream of debris produced by a Tidal Disruption Event, showing both the full stream (left) and a zoom-in (right). In this case the stream is strongly self-gravitating, and fragments into bound clumps (Coughlin, Nixon et al., 2016).

As these clumps fall back towards the black hole their accretion can result in strong bursts of radiation, while the low-density regions between clumps yield relatively little. This makes the luminosity of the event highly time variable, and may explain the rapid time variability of systems such as Swift J1644. However, the detailed behaviour of the clumps is not yet well understood, as it depends on their internal thermodynamics and how quickly they can cool and condense. If they can cool fast enough, they may be able to survive a second pericentre passage and potentially form planets orbiting the black hole; otherwise they will be re-disrupted at pericentre and join the accretion flow. These processes will be the subject of future DiRAC simulations. Over the next few years new observing missions will provide a substantial increase in the number of events for which we have data. Determining how the luminosity of a TDE varies with time is critical to interpreting these events and discerning the progenitor properties, such as black hole mass and spin.



To find out what remote planets orbiting other stars are made of, astronomers analyse the way in which their atmospheres absorb starlight of different colours and compare it to a model, or ‘spectrum’, to identify different molecules. One such object is Kepler-69c, which is 70 percent larger than the Earth. Its 242-day orbit around a sun-like star resembles that of our neighbouring planet Venus; however, the composition of Kepler-69c is still unknown.


Figure 1. Artist’s impression of the super-Venus Kepler-69c. Image credit: NASA Ames/JPL-Caltech/T.Pyle

In collaboration with NASA Ames we have developed molecular data to model (hot) super-Venuses, a yet-to-be-detected class of objects with sulphur-dominated chemistry driven by volcanic activity. According to atmospheric chemical models, among the main constituents of the atmospheres of rocky super-Earths and super-Venuses are sulphur dioxide (SO2) and sulphur trioxide (SO3). These spectroscopically important molecules are generally not included in terrestrial exoplanet atmospheric models due to the lack of laboratory data. We anticipate our new data will have a big impact on the future study of rocky super-Earths and super-Venuses. The recently discovered system of exoplanets TRAPPIST-1 is one existing example where our data will play an important role in future atmospheric retrievals.

Our SO3 data set, named UYT2, is one of the biggest line lists in the ExoMol database. It contains 21 billion vibration-rotation transitions and can be used to model transit IR spectra of volcanic atmospheres at temperatures up to 800 K. In order to accomplish this data-intensive task, we used some of the UK’s most advanced supercomputers, provided by the Distributed Research utilising Advanced Computing (DiRAC) project and run by the University of Cambridge. Our calculations required millions of CPU hours, hundreds of thousands of GPU hours, and up to 5 TB of RAM, processing power only accessible to us through the DiRAC project.
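The reason a line list supports models at a range of temperatures is that each transition is weighted by the Boltzmann population of its lower state. The toy sketch below illustrates that idea with three hypothetical lines and a Gaussian profile; it is a schematic of the principle, not the ExoMol/UYT2 pipeline:

```python
import math

# Toy "line list": (transition wavenumber in cm^-1, lower-state energy in
# cm^-1, relative strength). Hypothetical values, not taken from UYT2.
lines = [(1350.0, 0.0, 1.0), (1360.0, 500.0, 0.8), (1370.0, 1200.0, 0.5)]

C2 = 1.4388  # second radiation constant, cm K

def absorption(nu, T, width=2.0):
    """Relative absorption at wavenumber nu (cm^-1) and temperature T (K):
    each line is weighted by a Boltzmann factor for its lower-state energy
    and broadened with a Gaussian profile."""
    total = 0.0
    for nu0, E_low, strength in lines:
        boltz = math.exp(-C2 * E_low / T)          # lower-state population
        profile = math.exp(-((nu - nu0) / width) ** 2)
        total += strength * boltz * profile
    return total

# Lines from high-lying states strengthen as the atmosphere heats up:
print(absorption(1370.0, 300.0), "<", absorption(1370.0, 800.0))
```

This is why "hot" line lists must include vastly more transitions than room-temperature databases: states that are empty at 300 K become populated at 800 K.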


COSMOS Consortium

COSMOS consortium researchers exploiting DiRAC HPC Facilities have made progress towards ambitious milestones in five key inter-related areas: (i) extreme gravity, (ii) inflation and the early universe, (iii) the cosmic microwave sky, (iv) dark energy and (v) galaxy archaeology. In the limited space available, we highlight recent breakthroughs in general relativistic simulations of black holes and inflation.


Figure 1: GRchombo simulation of gravitational waves from a black hole merger (using LIGO GW150914 parameter values). COSMOS IPCC visualization movie using Intel OSPRay ray-tracing through AMR dataset (submitted to SC16 Scientific Visualization Showcase).

On 11 February 2016, the LIGO Scientific Collaboration announced the ground-breaking discovery of a gravitational wave signal (GW150914), representing the first incontrovertible observation of a black hole binary system, as well as the most energetic event ever observed in the Universe. Given recent advances such as our innovative GRChombo AMR code, COSMOS Consortium researchers (several also LSC members) are well placed to exploit these new scientific opportunities in gravitational wave astronomy. We have continued our long-standing industrial collaboration through the COSMOS Intel Parallel Computing Centre (IPCC) to improve our research codes in cosmology and general relativity; we have particularly focussed on GRChombo and on improving performance on multi- and many-core (Xeon Phi) systems, following in the wake of our 2015 HPCwire Award. Intel has also responded to the demands of ever increasing dataset sizes by developing a powerful software-defined visualisation framework, including the OpenSWR software rasteriser and the OSPRay parallel ray tracer and volume renderer, a programme in which the COSMOS IPCC is directly involved. These tools have allowed us to create unprecedented real-time visualisations of our science results using several-TB datasets. In 2016 we demonstrated the adaptive mesh capabilities of ParaView powered by OSPRay and OpenSWR by creating videos of GRChombo black hole mergers for two of the largest HPC conferences, International Supercomputing 2016 and Supercomputing 2016, as publicised in insideHPC.

Inflation is now the paradigmatic theory of the early universe, providing a dynamical mechanism to explain many features of our current cosmology. In particular, it explains why the universe is homogeneous over cosmological scales. Nevertheless, can inflation begin in the first place if the initial conditions are not homogeneous? Answering this question requires a numerical approach, as solving the full GR equations is not possible for generic inhomogeneous initial conditions. We attacked this problem with the state-of-the-art numerical relativity code GRChombo and the results are reported in arXiv:1608.04408. We found that high-scale inflation is generically robust – it will inflate despite large inhomogeneities. More interestingly, we found that low-scale inflation – preferred by many string theory phenomenological models – is not robust, failing to inflate in the presence of large-scale but subdominant inhomogeneities.


Figure 2. Spinning black hole in 5D.

One of the biggest unresolved problems in Einstein’s theory of gravity is the weak cosmic censorship conjecture, which posits that spacetime singularities are always hidden inside a black hole horizon. Since nothing can escape a black hole, the breakdown of Einstein’s theory at singularities would be ‘censored’ from the outside world. In our numerical work with GRChombo, we find that the conjecture appears to be false if space has four or more dimensions. For example, the late stages in the evolution of a rapidly spinning black hole in six spacetime dimensions are shown (left); this is a disk-like structure with a great deal of fractal microstructure – the thinner rings are connected by ever thinner membranes that ultimately lead to a ‘naked singularity’. Given these results in extra dimensions, it is clear that, if cosmic censorship is true, then it must be a property specific to the three-dimensional space that we live in; without this property, Einstein’s theory of general relativity could lose its predictive power.


Understanding the Properties and Stability of Matter


Figure 1. Snapshot of the QCD plus QED vacuum.

The fundamental constituents of the strong force are quarks and gluons, which bind together to form the familiar building blocks of nuclear physics, protons and neutrons. The two most common quarks are the up quark and the down quark. The electric charge of the up quark is +2/3, whereas the down quark carries charge -1/3. A proton is composed of two up quarks and one down quark, adding to a net charge of +1, whereas the neutron has two down quarks and one up quark, producing a charge-neutral object. Lattice QCD simulations have now reached a precision where isospin-breaking effects become important. These effects have two sources: the mass difference of the up and down quarks, and the electromagnetic interactions. Both effects are of the same order of magnitude, so a direct calculation from QCD and QED is necessary.
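The charge arithmetic above is easy to verify with exact fractions:

```python
from fractions import Fraction

# Quark charges in units of the elementary charge, as given in the text.
up, down = Fraction(2, 3), Fraction(-1, 3)

proton = 2 * up + down    # uud: 2(+2/3) + (-1/3) = +1
neutron = up + 2 * down   # udd: (+2/3) + 2(-1/3) = 0
print(proton, neutron)    # 1 0
```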

Most important for our understanding of the salient features of QCD, such as quark confinement and spontaneous chiral symmetry breaking, is an understanding of the properties of the vacuum. Simulations of QCD and QED allow us to study the effect of dynamical quarks on the vacuum. A snapshot of the vacuum fields is shown in Figure 1, in the form of a three-dimensional slice of space-time. The topological charge density of the QCD fields is rendered together with the magnetic field of QED. The positive topological charge lump at the lower left of the image is accompanied by a large magnetic field strength, presenting an opportunity to observe the chiral magnetic effect, which separates right- and left-handed quarks. It has been predicted theoretically, but never observed.


Figure 2. The allowed region for the quark mass ratio.

Isospin-breaking effects are crucial for our existence. The Universe would not exist in its present form if the neutron-proton mass difference were only slightly different. If it were larger than the binding energy of the deuteron, no fusion would take place. If it were a little smaller, all hydrogen would have been burned to helium. Knowing the masses of the neutron and proton and how they depend on the mass and charge of the individual quarks, we can express the allowed region in terms of the u and d quark masses and the electromagnetic coupling, as shown in Figure 2. It turns out that both αEM and the ratio of light quark masses must be finely tuned.

[1] R. Horsley, Y. Nakamura, H. Perlt, D. Pleiter, P.E.L. Rakow, G. Schierholz, A. Schiller, R. Stokes, H. Stüben, R.D. Young and J.M. Zanotti, J. Phys. G43 (2016) 10LT02.
[2] R. Horsley, Y. Nakamura, H. Perlt, D. Pleiter, P.E.L. Rakow, G. Schierholz, A. Schiller, R. Stokes, H. Stüben, R.D. Young and J.M. Zanotti, JHEP1604 (2016) 093


Black Holes as the Nemesis of Galaxy Formation


Figure 1. An image of a galaxy undergoing an AGN outburst in the Eagle simulation. The image shows gas density, coloured by temperature, and captures the moment of galaxy transformation. A small merger triggers the growth of a black hole, generating a huge outburst of energy that heats the surrounding gas and disperses the cold gas inflows.

Galaxies fall into two clearly distinct types: active, blue-sequence galaxies that are rapidly forming young stars, and passive, red-sequence galaxies in which star formation has almost completely ceased. These sequences are closely related to the visual galaxy classification system, based on the prominence of spiral arms, first suggested by Edwin Hubble.

Numerical simulations by the EAGLE collaboration show that these sequences, and thus the Hubble sequence of galaxies, are created by a competition between star-formation-driven outflows and gas accretion onto the supermassive black hole at a galaxy’s centre.

In galaxies containing fewer than 30 billion solar-like stars, young stars exploding as supernovae are able to drive buoyant outflows that prevent high-density gas building up around the black hole. As the galaxies increase in stellar mass, they convert a larger fraction of the inflowing gas into stars and grow along the blue sequence.

More massive galaxies, however, are surrounded by a hot corona and this has important ramifications. The supernova-driven outflow is no longer buoyant and star formation is unable to prevent the build-up of gas in the galaxy, and around the black hole in particular. This triggers a strongly non-linear response from the black hole. Its accretion rate rises rapidly, heating the galaxy’s corona and disrupting the incoming supply of cool gas. The galaxy is starved of the fuel for star formation and makes a transition to the red sequence, and further growth predominantly occurs through galaxy mergers. Interestingly, our own Milky Way galaxy lies at the boundary between these two galaxy sequences, suggesting that it will undergo a transition to a red and ‘dead’ early-type galaxy in the near future.

This model has deep implications for understanding the Universe. Far from being exotic predictions of General Relativity, the growth of supermassive black holes sets a fundamental limit to the mass of star forming galaxies.


Figure 2. Formation timescales of galaxies as a function of stellar mass in the Eagle simulation (points) and observational data (Ilbert 2015).

The figure on the left shows formation timescales of galaxies as a function of stellar mass in the Eagle simulation (points) and observational data (Ilbert 2015). The separation of galaxies into blue (rapidly star forming) and red (passive) sequences is clearly seen. Most low mass galaxies follow the star forming blue galaxy sequence, doubling their stellar mass every 3 billion years, but more massive galaxies have much longer star formation growth timescales (a horizontal dotted line shows the present day age of the Universe; galaxies with longer star formation timescales are shown above the line). The transition between the sequences occurs at a stellar mass of around 30 billion solar masses. While it is not possible to reliably measure the masses of black holes in observed galaxies, it is possible to do this for galaxies in the EAGLE cosmological simulation. The simulated galaxies follow the observed data closely, and points are coloured by the mass of the black hole relative to that of the host halo. In red-sequence galaxies, black holes account for around 0.01% of the halo mass, but the fraction is a factor of 100 smaller in blue-sequence galaxies. Around a galaxy mass of 30 billion solar masses, there is considerable scatter in the relative mass of the black hole and in the star formation growth timescale. However, at a fixed galaxy mass, systems with a higher black hole mass have substantially longer growth timescales, implying the existence of an intimate connection between black hole mass and galaxy type.
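The quoted doubling timescale follows directly from the specific star-formation rate: for exponential growth at a constant specific rate, the mass-doubling time is t = ln 2 × M*/SFR. A toy estimate with hypothetical values (not the EAGLE measurement):

```python
import math

def doubling_time_gyr(stellar_mass, sfr_per_gyr):
    """Mass-doubling timescale (Gyr) for exponential growth at constant
    specific star-formation rate, i.e. solving M(t) = M0 * 2**(t / t_double).
    Inputs: stellar mass in solar masses, SFR in solar masses per Gyr."""
    return math.log(2) * stellar_mass / sfr_per_gyr

# A hypothetical blue-sequence galaxy: 1e10 Msun forming 2.3 Msun per year.
t = doubling_time_gyr(1e10, 2.3e9)  # SFR converted to Msun/Gyr
print(f"doubling time ~ {t:.1f} Gyr")
```

With these illustrative numbers the doubling time comes out near the 3 billion years quoted for the blue sequence.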


Black Holes:  Illuminating the Early Universe with Radiative Transfer Simulations

State-of-the-art observations show that within approximately 1 billion years after the Big Bang, all of the primordial hydrogen that permeated the immense space between galaxies had been destroyed by high energy photons in a process known as reionization. This time period in the history of the Universe, also known as ‘the dawn of galaxies’, remains one of the most active frontiers in modern astrophysics from both an observational and a theoretical point of view.

Much remains unknown about the sources which provided the high energy photons needed for reionization. The standard picture suggests it was likely the first generation of galaxies with a possible additional contribution coming from radiating supermassive black holes. More exotic solutions have been proposed as well, including annihilating dark matter. Identifying the sources responsible for reionization as well as understanding their properties is key to constraining the conditions which governed all subsequent galaxy formation in the Universe.


Unfortunately, our current telescopes only probe the tail end of this epoch. For this reason, theoretical and computer models are relied upon for insight into the physics of reionization. Simulating this regime is both complicated and computationally expensive; however, over the past two years, researchers at the Institute of Astronomy (IoA) and the Kavli Institute for Cosmology Cambridge (KICC) have designed a new, state-of-the-art computer algorithm which they are exploiting on DiRAC supercomputers at the Universities of Cambridge, Leicester and Durham to better understand the epoch of reionization. This new algorithm (presented in a recently submitted paper by Katz et al. 2016) follows the flow of high-energy photons, which are directly and on-the-fly coupled to the chemical and thermodynamic state of the gas, at different speeds depending on the density of the medium. This has overcome many issues relating to standard algorithms in the field at much reduced computational cost, allowing us to simulate significantly more complex systems than previously possible.

The team at IoA and KICC has thus for the first time generated a multi-scale view of galaxy formation during this epoch – from the large scale cosmic web all the way down to individual star forming regions inside of primordial galaxies. Because the flow of radiation is followed explicitly, the researchers have used an inhomogeneous radiation field to compute the distribution of emission lines coming from ionized metals (such as [CII] and [OIII]) which can be observed by some of our most sensitive telescopes. Likewise, they have self-consistently tracked the location and mass of ionized, neutral, and molecular gas to make testable predictions about these quantities which control star formation in the Early Universe.

In the coming years, new observations from the James Webb Space Telescope, the Atacama Large Millimeter Array, and the Square Kilometre Array will shed light on this exciting frontier and will be compared to these simulated models to give insight into the physical processes which drove ‘the dawn of galaxies’.


Hadron Spectroscopy:  Baryon Mass Splittings

In Cheung et al [JHEP 1612 (2016) 089, arXiv:1610.01073] we computed spectra of highly excited charmonia and charmed mesons with light (up and down) quark masses, corresponding to mπ ≈ 240 MeV, closer to their physical masses than in our earlier study where mπ ≈ 390 MeV. A highlight of the results was the presence of hybrid mesons (where the gluonic field is excited), some of which had exotic quantum numbers (which cannot arise from a quark-antiquark pair alone). The resulting phenomenology of hybrid mesons gave insight into the effective low-energy gluonic degrees of freedom. One main conclusion was that the spectra showed only mild dependence on the light-quark mass, and the pattern of states, in particular the hybrid mesons, did not change significantly as the light-quark mass was reduced. This is an important systematic check and suggests that the interesting phenomenology of excited and hybrid mesons we found is not an artefact of working with unphysically-heavy light-quark masses.


Figure 1. The spectrum of excited D mesons (containing a charm quark and a light antiquark).

Figure 1 shows the spectrum of excited D mesons (containing a charm quark and a light antiquark) labelled by JP (J is spin, P is parity): green and red boxes are the computed masses and one-sigma statistical uncertainties with red highlighting states identified as hybrid mesons; black lines are experimental values.


Figure 2. The various S-wave (orbital angular momentum = 0) cross sections: upper (lower) panel shows quantities proportional to the diagonal (off-diagonal) cross sections.

In Moir et al [JHEP 1610 (2016) 011, arXiv:1607.07093], we presented the first ab-initio study of coupled-channel scattering involving charm mesons (and the first lattice QCD study of three coupled scattering channels). Working with light-quark masses corresponding to mπ ≈ 390 MeV, we determined infinite-volume Dπ, Dη, DsK isospin-1/2 scattering amplitudes from computations of finite-volume spectra. Figure 2 shows the various S-wave (orbital angular momentum = 0) cross sections: the upper (lower) panel shows quantities proportional to the diagonal (off-diagonal) cross sections. The singularity structure of the amplitudes showed a near-threshold bound state in JP=0+ corresponding to the physical D0*(2400) resonance, a deeply bound state with JP=1- corresponding to the D*, and evidence for a narrow JP=2+ resonance. This study represents a significant step toward addressing some puzzles in the spectroscopy of charmed mesons.



Initial conditions for galaxy formation simulations are specified within the Λ-CDM scenario by a Gaussian random field. This leads to a troubling lack of control: we often cannot ask precisely what leads to certain outcomes for a galaxy. What is it about a galaxy’s mass, environment or history that gives it its shape or its colour? Imagine we could control the initial conditions of a galaxy in cosmological simulations and tweak them to alter one aspect of its formation while keeping the other aspects the same.


The projected density of dark matter at redshift 3 (top row) and 2.3 (middle row) for three simulations with nearly identical initial conditions, but “genetically modified” to produce slightly different merging histories of the main galaxy in the centre of the dark matter images. The bottom row shows the main galaxy in IVU rest-frame colours. The galaxy becomes increasingly quiescent, red and spheroidal as the merger ratio increases from left to right.

Thanks to the DiRAC facility, our group has run the first generation of high-resolution cosmological simulations that alter the “DNA” of a galaxy (Roth et al. 2016, MNRAS, 455, 974; Pontzen et al., arXiv:1608.02507). Roth et al. (2016) set out the “genetic modification” approach and Pontzen et al. showcase its power (see figure) to understand what quenches star formation in a galaxy. By minimally altering the linear overdensity in the cosmological initial conditions, Pontzen et al. change the merger history of a galaxy while leaving its final halo mass, large-scale structure and local environment unchanged. The changing merger history influences the impact of feedback from Active Galactic Nuclei, leading to three very different fates for the galaxies: star forming, temporarily quenched, permanently quenched.
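The "minimal alteration" of a Gaussian random field can be sketched with the constrained-realization (Hoffman-Ribak style) construction that underlies this approach: given a field with covariance C, a linear constraint is imposed with the smallest change consistent with the field's correlations. Below is a 1D toy version with a hypothetical covariance and target value, not the actual simulation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 1D Gaussian random field with a smooth covariance matrix C
# (a stand-in for the Lambda-CDM power spectrum).
n = 64
x = np.arange(n)
C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 5.0**2)
C += 1e-8 * np.eye(n)  # numerical jitter to keep C positive definite
delta = rng.multivariate_normal(np.zeros(n), C)

# "Genetically modify" the field: impose the linear constraint a @ delta = d
# (here, the mean overdensity of cells 20..29) with the minimal change
# allowed by the covariance.
a = np.zeros(n)
a[20:30] = 1.0 / 10.0
d = 0.5  # hypothetical target mean overdensity for the patch
delta_mod = delta + C @ a * (d - a @ delta) / (a @ C @ a)

print("constraint satisfied:", bool(np.isclose(a @ delta_mod, d)))
```

The update leaves the field far from the constrained patch essentially untouched, which is the sense in which the modified galaxy keeps its large-scale environment.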


Heavy Elements:  Linking the Nuclear Interaction to the Structure of Heavy Elements


Figure 5 from Phys. Rev. Lett. 117, 052501 (2016) (Editor’s suggestion.)

A recent international collaboration between the UK, France, the US and Canada has performed systematic studies of both nuclear radii and binding energies in (even) oxygen isotopes from the valley of stability to the neutron drip line. Both charge and matter radii are compared to state-of-the-art ab initio calculations along with binding energy systematics. Experimental matter radii are obtained through a complete evaluation of the available elastic proton scattering data of oxygen isotopes. The theoretical ab initio calculations were possible thanks to exploiting High Performance Computing resources and showed that, in spite of a good reproduction of binding energies, conventional nuclear interactions derived within chiral effective field theory fail to provide a realistic description of charge and matter radii. A novel version of two- and three-nucleon forces leads to considerable improvement of the simultaneous description of the three observables for stable isotopes but shows deficiencies for the most neutron-rich systems. Thus, crucial challenges related to the development of nuclear interactions remain. Phys. Rev. Lett. 117, 052501 (2016) – Editor’s suggestion.