Ionisation by non-thermal particles in thermonuclear supernovae

Luke Shingles, Stuart Sim, Christine Collins
Astrophysics Research Centre, Queen’s University Belfast, Northern Ireland

Type Ia supernovae (SNe Ia) originate from the thermonuclear explosions of white dwarf stars. Understanding them matters to astronomy: their brightness makes them valuable probes of cosmological expansion, and they are important contributors to the chemical evolution of galaxies. At present, however, our theoretical understanding of SNe Ia is limited, and the explosion mechanisms are hotly debated. Our DiRAC project uses computer simulations of radiative transfer to predict the spectrum of light that theoretical models of these supernovae would emit. By comparing these simulations to observations, we aim to better understand the physical conditions in these supernovae and ultimately their origins.

By 100 days after explosion, the ejecta of SNe Ia have expanded sufficiently that light can escape even from the innermost regions of the explosion. This makes it possible to study the most centrally located material, which is likely where the explosion initiated. These “nebular” spectra therefore place important observational constraints on candidate SNe Ia explosion scenarios.

The nebular spectra of SNe Ia consist mainly of emission lines from singly- and doubly-ionised Fe-group elements. However, theoretical models for many scenarios predict that non-thermal ionisation leads to multiply-ionised species whose recombination photons ionise and deplete Fe+, resulting in negligible [Fe II] emission.

In this project, we recently investigated whether the treatment of non-thermal energy deposition in the simulations could reconcile over-ionised theoretical models with observations. To quantify the magnitude of additional heating that would be required to sufficiently reduce ionisation by fast leptons, we artificially increased the rate of energy loss to collisions with free electrons. The figure compares the resulting synthetic spectra with the observed nebular spectrum of a real Type Ia supernova. We find that the equivalent of as much as an eight-fold increase in the plasma loss rate would be needed to reconcile the model simulations with observed spectra, which places an important constraint on the effective densities that must be reached in the ejecta. This work will soon be published (Shingles et al. 2022, submitted), and further studies might be able to distinguish between reductions in the non-thermal ionisation rates and increased recombination rates, for example due to clumping (as suggested by Wilk et al. 2020).
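
To make the role of the plasma loss rate concrete, the standard non-relativistic expression (in Gaussian units) for the energy lost per unit path length by a fast electron of energy E to free electrons of number density n_e, with ln Λ the Coulomb logarithm, is

    \left(\frac{dE}{dx}\right)_{\rm plasma} = \frac{2\pi e^4 n_e}{E}\,\ln\Lambda .

Because this loss rate scales linearly with n_e, an eight-fold increase in the plasma loss rate is equivalent to the fast leptons sampling an eight-fold higher effective free-electron density, which is why our result constrains the effective densities reached in the ejecta.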

This work also used the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk).

Figure caption: An observed infrared spectrum of a Type Ia supernova at 247 days after explosion (SN 2013ct, Maguire et al. 2016), versus synthetic spectra calculated with several variations to the treatment of ionisation by high-energy leptons.

References:

  • Maguire K., et al., 2016, MNRAS, 457, 3254
  • Shingles L. J., et al., 2020, MNRAS, 492, 2029
  • Wilk K. D., Hillier D. J., Dessart L., 2020, MNRAS, 494, 2221

HPQCD: the charming strangeness of the W boson

Figure 1. When the D meson emits a W boson to become a K meson, the underlying process is a charm quark transition to a strange quark. The coupling between c, s and W is called Vcs.

Theorists in the HPQCD collaboration have pinned down Vcs, a key parameter of the Standard Model, using STFC’s DiRAC Data Intensive supercomputer at Cambridge. Vcs is determined by combining the theoretical calculation with results from particle physics experiments around the world for the proportion of D mesons that decay to a K meson in a process akin to nuclear beta decay (Figure 1). This rate depends on the coupling strength Vcs between the W boson of the weak interaction and the charm-strange quark pair, but also on the strong-interaction physics, encoded in ‘form factors’, that binds the quarks inside the mesons while this process happens.
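
For context, in the limit of massless leptons (and up to small electroweak corrections) the differential rate for D → K ℓ ν takes the standard form

    \frac{d\Gamma}{dq^2} = \frac{G_F^2\,|V_{cs}|^2}{24\pi^3}\,|\vec{p}_K|^3\,|f_+(q^2)|^2 ,

where q^2 is the invariant mass squared of the lepton pair, p_K is the kaon momentum in the D-meson rest frame, and f_+(q^2) is the vector form factor. Experiment supplies the left-hand side and lattice QCD supplies f_+(q^2), so combining the two determines |Vcs|.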

The numerical techniques of lattice QCD allow the form factors to be calculated, but in the past their uncertainty has limited the precision of Vcs. Using improved methods for handling quarks, developed by HPQCD, physicists in Cambridge and Glasgow have now obtained a value for Vcs of 0.9663(80), three times more accurate than previous work (arXiv:2104.09883). This allows Vcs to be distinguished from 1 for the first time (see Figure 2), giving tighter constraints on the possibilities for new physics beyond the Standard Model.
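
As a quick illustration of what “distinguished from 1” means here: the new value lies

    \frac{1 - 0.9663}{0.0080} \approx 4.2

standard deviations below unity, well beyond the three-sigma level conventionally required to claim a separation.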

Figure 2. A comparison of HPQCD’s new result for Vcs with earlier values, showing the large improvement in accuracy. It is also now clear, for the first time, that the value of Vcs is below 1.

HPQCD has also improved the determination of the charm quark mass, achieving an accuracy of 0.5% and including the effect of the charm quark’s electric charge for the first time. The mass (in the MSbar renormalisation scheme at a scale of 3 GeV) is 0.984(5) GeV/c², a little heavier than the proton (arXiv:2005.01845). In a linked paper, HPQCD also accurately determined the ratio of the bottom and charm quark masses (arXiv:2102.09609). These results will be important for detailed experimental tests of whether the Higgs boson decays to different types of quarks at the rates expected in the Standard Model.

DiRAC’s Data Analytic system in Cambridge has once again proved ideal for the numerically efficient methods HPQCD has developed for precision lattice QCD. 

The first cosmological simulations of a dwarf galaxy with magnetic fields, radiative transfer and cosmic rays

Amongst the many different galaxy types, dwarf galaxies are some of the most mysterious and intriguing systems. They lie at the heart of multiple unanswered questions in galaxy formation and have fuelled a number of controversies, the so-called ‘small-scale’ challenges to our standard cosmological model, ΛCDM. These challenges include the large disparity between the number of observed dwarfs and the predicted number of dark matter haloes that could host them, observed density profiles with central cores rather than the cuspy profiles predicted for CDM haloes, and a shortage of observed massive satellites.

These problems have prompted prolific and intense research into what is now known as near-field cosmology, and have motivated state-of-the-art observational efforts by facilities such as DES, VISTA, Gaia and LSST, which aim to give us a more complete and unbiased census of the dwarf galaxies of our Local Group.

With such major observational campaigns in place, a detailed theoretical understanding of the formation and evolution of our Local Group is essential, both to interpret the observed data and to link the observed luminous matter to the underlying matter distribution, which can in turn inform us about cosmology.

In fact, a large body of theoretical work has identified two important processes that may fundamentally affect dwarf properties and largely alleviate, if not solve, all of the aforementioned discrepancies, indicating that no departures from ΛCDM are necessarily needed. The first is energetic feedback from supernovae (SNe), which can dramatically affect the baryon fraction, star formation and even the dark matter distribution, especially in low-mass haloes. The second is the reionization of the Universe, in which energetic photons emitted by high-redshift sources photo-heat the gas, suppressing infall into low-mass galaxies and thus drastically reducing the number of luminous low-mass haloes.

Star formation, with its associated SN feedback in low-mass haloes, and reionization are tightly intertwined processes. Upcoming bounds on the faint end of the galaxy luminosity function at high redshifts from JWST, together with the first observational probes of reionization and its evolving patchiness to be delivered by LOFAR, HERA and SKA, will shed light on the progenitors of present-day dwarfs.

Bearing these important physical processes in mind, as part of the DiRAC dp012 project we have performed the first simulations in which realistic SN feedback is combined with state-of-the-art, on-the-fly radiative transfer, constrained-transport MHD, and cosmic ray feedback accounting for both streaming and diffusion (Martin-Alvarez et al. in prep). Our simulations demonstrate that all of these physical processes acting together are needed to reproduce realistic dwarf galaxies. Additionally, we are now able to predict self-consistently the HI component of these systems and to generate detailed synthetic radio synchrotron observations for next-generation radio facilities such as SKA (see Fig. 1). This work serves as a pathfinder for our upcoming suite of simulations featuring larger cosmological zoom-in volumes, which will explore the role played by cosmic rays, radiative transfer and magnetic fields in the formation of galaxies across cosmic time.
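
For readers unfamiliar with cosmic ray hydrodynamics, the cosmic ray energy density e_cr in such simulations evolves, written here schematically rather than as the exact set of equations solved in our code, as

    \frac{\partial e_{\rm cr}}{\partial t} + \nabla\cdot(e_{\rm cr}\,\mathbf{u}) = -P_{\rm cr}\,\nabla\cdot\mathbf{u} + \nabla\cdot\left[\kappa\,\mathbf{b}\,(\mathbf{b}\cdot\nabla e_{\rm cr})\right] - \Lambda_{\rm stream} + Q_{\rm SN} ,

i.e. advection with the gas velocity u, adiabatic work, anisotropic diffusion (coefficient κ) along the magnetic field direction b, losses associated with streaming down the cosmic ray pressure gradient, and injection by SNe.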

Figure 1: Projections of our full-physics simulated dwarf galaxy. (Inset box) Mass projection of the entire simulated box, zooming into the galaxy halo. (Large panel) Gas density (blue), gas temperature (orange) and stellar density (yellow). (Right-hand panels) Synthetic observations of the galaxy, artificially positioned at 2 Mpc from the Earth. From top to bottom: radio synchrotron as would be observed by VLA and SKA, optical for SDSS, and near-IR for JWST. (Bottom row) From left to right, these four panels are projections of the magnetic field, hydrogen ionisation fraction, cosmic ray energy density, and gas density (grey) separated into inflowing (blue) and outflowing (red) gas. Reproduced from Martin-Alvarez et al. in prep.

Quantum Electrodynamics meets Quantum Chromodynamics

The Standard Model (SM) of Elementary Particles has been extremely successful in predicting and describing the properties of elementary particles studied at experimental facilities all around the world. The SM covers elementary particles such as quarks and electrons, together with their anti-particles, which interact by means of force carriers like photons and gluons. All the visible matter surrounding us is made up of fundamental particles described by the SM.

The theory of strong interactions that describes how quarks are bound together by the strong nuclear force, mediated by the exchange of gluons, is called Quantum ChromoDynamics (QCD). We are now able to make very precise predictions starting from the rather compact set of mathematical formulae defining QCD. For instance, not only the mass of the proton but also many other quantities [1] can now be predicted with sub-percent precision. Solving the equations of QCD to make these predictions requires large-scale simulations on high-performance computers such as the ones provided by DiRAC. It is really remarkable that we find such highly complicated predictions to be compatible with the experimental data – our formulae agree with nature!

However, over the past decades physicists have collected ample observational and experimental evidence for the limitations of the SM: it is now very clear that the SM does not account for certain phenomena, such as dark matter or neutrino masses. It is therefore our goal to improve SM predictions and make them more precise. It is then expected that theory predictions and experiment or observation will start disagreeing. When this happens, we will have discovered new physics. The way in which the discrepancies arise will help us understand the nature of the new physics.

QCD is only one of three theories included in the SM, the others being the weak interaction and Quantum ElectroDynamics (QED). The force carriers of QED are the photons, which couple to electrically charged matter with a strength that is much weaker than the strong coupling of gluons to quarks. The strong coupling is in fact so strong that it binds quarks into bound states called hadrons (e.g. the proton or the pion). Until recently we therefore simply neglected QED contributions in our simulations. However, with the results of our computer simulations of QCD now reaching sub-percent-level precision, QED contributions can no longer be neglected.
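
The difference in strength is stark. At typical hadronic energy scales of around 1 GeV,

    \alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c} \approx \frac{1}{137} , \qquad \alpha_s \approx 0.5 ,

so the electromagnetic coupling is smaller by roughly two orders of magnitude, which is what justified neglecting QED until now.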

Before QED contributions could be included in the simulations, a number of formal problems first had to be addressed, to which our collaboration contributed substantially [2]. For instance, our simulations take place in a small simulation volume that is just large enough that the existence of its boundary constitutes only a small systematic effect (a finite-size effect). While gluons are confined to this small volume by the underlying physics, the massless photons are not, and as a result they feel the presence of the boundary much more severely. We have now developed a thorough analytical understanding of these finite-size effects and have thereby developed ways to control them. A number of conceptual problems remain to be addressed in the future, such as the treatment of infrared effects originating from the fact that photons are massless.
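
To give a flavour of these finite-size effects: in the commonly used QED_L formulation, the mass of a hadron carrying electric charge Q (in units of e) receives a universal finite-volume correction of the form

    m(L) = m\left[1 - Q^2\alpha\,\frac{\kappa}{2mL}\left(1 + \frac{2}{mL}\right) + \mathcal{O}(L^{-3})\right] , \qquad \kappa \approx 2.837 ,

falling off only as powers of the box size L, in contrast to the exponentially suppressed finite-size effects of pure QCD; structure-dependent contributions, the subject of [2], first enter at order 1/L^3.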

While QED contributions are often small relative to QCD, their importance is huge. It is known, for instance, that while the mass of the proton is mainly determined by QCD, the QED contributions to it, which are responsible for the tiny mass difference between the proton and the neutron, are of immense importance for our existence: if the interplay between QCD and QED were just a tiny bit different from what we find in nature, all the matter that surrounds us might not even exist!

With our current DiRAC allocation, a team of researchers from the Universities of Edinburgh and Southampton, with international collaborators from Switzerland and the US, has set out to compute how mesons (hadrons containing a quark and an antiquark) decay into charged leptons and neutrinos, while taking into account QED effects [3]. The computation in QCD is very well understood, but the computation including QED effects has never before been achieved in simulations where all parameters have been tuned such that the result can readily be compared to experiment. The project was executed by a team of PhD students, postdocs and senior researchers, and comprised a considerable amount of software development, code-performance improvement, development of novel algorithms and novel ideas in theoretical physics, and eventually sophisticated data analysis techniques. Our results are currently being prepared for publication. They confirm that, with the precision now achieved in pure QCD calculations, QED effects can indeed no longer be neglected. Our calculation therefore constitutes a milestone in precision physics, taking us a step closer to unravelling the fundamental physics governing our universe.
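
Schematically, the rate for such a leptonic decay of a pseudoscalar meson P with decay constant f_P is

    \Gamma(P^- \to \ell^- \bar\nu_\ell) = \frac{G_F^2\,|V_{q_1 q_2}|^2}{8\pi}\, f_P^2\, m_\ell^2\, m_P \left(1 - \frac{m_\ell^2}{m_P^2}\right)^2 \left(1 + \delta_{\rm EM}\right) ,

where V_{q_1 q_2} is the quark-mixing matrix element and δ_EM denotes the O(α) electromagnetic correction; it is this correction that must be computed on the lattice once QED is included.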

Going forward, we are developing the methods that will enable QCD+QED computations for more complicated processes, thereby increasing the range of high-precision tests of the SM. These efforts should be seen in the context of world-wide efforts to increase precision at experimental facilities (e.g. at CERN in Switzerland, Fermilab in the USA, and KEK in Japan).

Figure: Feynman diagrams for the decay of charged kaons and pions into a charged lepton and a neutrino (here for the case of the muon). The green lines indicate how photons couple to the electric charge of the quarks and of the charged lepton, respectively.

References:

  • [1] FLAG Review 2019: Flavour Lattice Averaging Group (FLAG), Y. Aoki et al., Eur.Phys.J.C 80 (2020) 2, 113, 1902.08191
  • [2] Relativistic, model-independent determination of electromagnetic finite-size effects beyond the point-like approximation, M. Di Carlo et al., 2109.05002; Theoretical aspects of quantum electrodynamics in a finite volume with periodic boundary conditions, Z. Davoudi et al., Phys.Rev.D 99 (2019) 3, 034510, 1810.05923
  • [3] Near-Physical Point Lattice Calculation of Isospin-Breaking Corrections to Kl2/pil2, A. Yong et al., 2112.11823

SIBELIUS-DARK: a galaxy catalogue of the Local Volume from a constrained realisation simulation

Stuart McAlpine, John Helly, Matthieu Schaller, Till Sawala, Guilhem Lavaux, Jens Jasche, Carlos Frenk, Adrian Jenkins, John Lucey and Peter Johansson

Monthly Notices of the Royal Astronomical Society, 2022, tmp.348M

Over the past thirty years cosmologists have developed a standard model of cosmology, Lambda Cold Dark Matter (LCDM), which explains a plethora of astronomical data, from the properties of the microwave background radiation (the heat left over from the Big Bang) to the number and spatial distribution of galaxies in the Universe. Computer simulations lie at the heart of this development: they allow predictions to be made of the distribution of dark matter and galaxies, especially in the non-linear regime, which is the best-observed regime.

Cosmological simulations usually follow a “typical” patch of an LCDM universe. But the simulations of the Sibelius project are different: using advanced statistical techniques, they are conditioned to reproduce, by the present day, the structures that we see in the local universe, specifically in the 2MASS+ galaxy survey.
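
The flavour of the underlying technique can be conveyed by a minimal constrained-realisation sketch in Python. The Hoffman-Ribak-style update below exactly imposes a single-point constraint on a toy one-dimensional Gaussian random field; Sibelius itself uses far more sophisticated field-level inference, and every name and number here is illustrative only.

    import numpy as np

    # Toy 1-D constrained Gaussian realisation (illustrative only)
    rng = np.random.default_rng(42)
    n = 256
    k = 2 * np.pi * np.fft.rfftfreq(n, d=1.0)
    power = np.zeros_like(k)
    power[1:] = k[1:] ** -1.0          # toy power spectrum P(k) ~ 1/k

    # Unconstrained Gaussian random field with spectral density P(k)
    amp = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * np.sqrt(power / 2)
    field = np.fft.irfft(amp, n=n)

    # Autocovariance of the field: inverse transform of the power spectrum
    # (the field is stationary, so one row of the covariance suffices)
    cov_row = np.fft.irfft(power, n=n)

    # Constraint: force the field at pixel 128 to a chosen target value
    i_c, target = 128, 3.0 * np.sqrt(cov_row[0])
    xi = np.roll(cov_row, i_c)         # Cov(field, constrained pixel)
    field_c = field + xi / cov_row[0] * (target - field[i_c])

    assert np.isclose(field_c[i_c], target)   # constraint met exactly

Away from the constrained pixel the field retains the statistics of the prior, while the imposed value is matched exactly; conditioning full cosmological initial conditions on a galaxy survey follows the same logic at vastly larger scale.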

The volume simulated in Sibelius is a sphere around us of radius 200 Mpc. Galaxies in this dark matter-only simulation are followed with the semi-analytic galaxy formation model GALFORM. Structures familiar to astronomers, such as the Virgo, Coma and Perseus clusters of galaxies, the “Great Wall” and the “Local Void” (our cosmic habitat), emerge from LCDM initial conditions and are faithfully reproduced in the simulation. At the centre there is a pair of galaxies, the virtual counterparts of our own Milky Way and our massive neighbour, the Andromeda galaxy. Sibelius enables novel tests of LCDM and of galaxy formation theory. In addition, it offers the possibility of studying in detail the formation paths and physical properties of objects such as the galaxy clusters in our local neighbourhood.

Left: dark matter distribution in a 50x50x50 Mpc region centred on the Milky Way, coloured by the projected density and velocity dispersion of the particles. Our two most massive neighbours, the Virgo cluster and the Fornax/Eridanus groups, are highlighted. Middle and right: zooms into 15x15x15 Mpc and 5x5x5 Mpc regions, respectively. The right panel highlights the locations of the Milky Way and Andromeda (M31). Images are shown in y-z equatorial coordinates, projected down the x axis.

(π0,η,η’) mass splittings and mixings

A particularly challenging topic in the phenomenological study of QCD is the (π0, η, η’) particle mass splittings and mixings. With exact SU(3) flavour symmetry of the u, d and s quarks, the decomposition of the familiar eigenstates of the π0, η and η’ is known. However, when SU(3) and isospin are broken by differing quark masses and electromagnetic charges, these states mix to form the experimentally observed mass spectrum. Previous works have employed isospin-symmetric simulations and hence have focussed on the η, η’ mixing. In [1], we make use of the natural breaking of isospin that occurs when electromagnetic effects are incorporated into a lattice simulation, and have performed full Nf = 1 + 1 + 1 simulations to investigate the mass splittings and mixings of these states.
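
In the exact SU(3) limit referred to above, that decomposition reads

    \pi^0 = \tfrac{1}{\sqrt{2}}\left(u\bar{u} - d\bar{d}\right) , \qquad \eta_8 = \tfrac{1}{\sqrt{6}}\left(u\bar{u} + d\bar{d} - 2s\bar{s}\right) , \qquad \eta_1 = \tfrac{1}{\sqrt{3}}\left(u\bar{u} + d\bar{d} + s\bar{s}\right) ,

with the physical η and η’ built from the octet η_8 and singlet η_1; once the quark masses and charges differ, all three neutral states can mix.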

Figure 1: Left panel: masses of the π0, η and η’ mesons along a trajectory holding the down quark mass fixed. The avoided level crossing shows the dynamical interaction between the two light pseudoscalars, π0 and η. Right panel: evolution of the flavour composition of the π0, η and η’ states along this trajectory, as extracted from their decay amplitudes.

In this study [1], we chose quark masses as close to the QCD flavour-symmetric point as possible in order to amplify the QED part of the splitting. While our starting point has only approximate SU(3) symmetry, it does have exact U-spin symmetry (the down and strange quarks are exactly the same). It is then natural to consider mass splittings and mixings between U-spin-zero states, see Fig. 1. We were able to reproduce phenomenological results, for example the η–η’ mixing angle.

[1] Z. R. Kordov et al., [CSSM/QCDSF/UKQCD], Phys. Rev. D 104 (2021) 114514.

FLAMINGO — A large suite of state-of-the-art galaxy cluster simulations for emulation and high-precision cosmology

Leads: Ian McCarthy, Joop Schaye, Matthieu Schaller, Willem Elbers, and the Virgo II Team 

A diverse set of new observatories is about to map the large-scale structure of the Universe down to smaller scales than previously possible, which will greatly improve our ability to find inconsistencies in the standard cosmological model. However, the scientific potential of these surveys can only be realised if the model predictions are at least as accurate and precise as the observations.

Although the clustering of the dark matter is well understood, mainly because it is collisionless and therefore subject only to gravity, this is not yet the case for the ordinary, baryonic matter. Unlike the dark matter, baryons are subject to pressure forces and are hence redistributed by galactic winds driven by supernova explosions and accreting supermassive black holes. Unless these effects are taken into account, they will cause systematic errors of tens of per cent in the measurements of cosmological parameters from upcoming weak gravitational lensing studies, eliminating most of the foreseen improvements on the current constraints.

The FLAMINGO project from the Virgo Consortium addresses this urgent problem by providing a suite of hydrodynamical simulations of volumes large enough to predict the large-scale structure, with a resolution that is sufficiently high to model the gas flows caused by feedback processes associated with galaxy formation. Because these feedback processes cannot yet be predicted from first principles, the free parameters of the model are calibrated to key observables using machine learning techniques.
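
As an illustration of the calibration idea, not the actual FLAMINGO pipeline, the Python sketch below trains a Gaussian-process emulator on a Latin-hypercube design over two hypothetical feedback parameters and then inverts it against a target observable; every function and parameter name is invented for this example.

    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Hypothetical stand-in for one calibration target (e.g. a cluster
    # gas fraction); in reality each training point is a full simulation.
    def observable(theta):
        agn_heating, wind_speed = theta
        return 0.10 + 0.05 * np.tanh(2.0 * (wind_speed - agn_heating))

    # Latin-hypercube design over the two toy parameters in [0, 1]^2
    sampler = qmc.LatinHypercube(d=2, seed=1)
    X = sampler.random(n=32)
    y = np.array([observable(t) for t in X])

    # Train the emulator, then search it for parameters matching a target
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                  normalize_y=True).fit(X, y)
    trial = sampler.random(n=4096)
    best = trial[np.argmin(np.abs(gp.predict(trial) - 0.12))]
    print("calibrated parameters ~", best)

The emulator makes the otherwise prohibitive inversion cheap: each simulation is run only once, while the smooth surrogate can be evaluated as many times as the calibration requires.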

The need to model large volumes at relatively high resolution necessitates massive simulations using state-of-the-art codes. Indeed, FLAMINGO includes the largest ever hydrodynamical simulation of the entire history of the Universe. The simulations vary the uncertain astrophysical parameters as well as the mass of the elusive neutrino particle, which we expect to be able to measure from upcoming observational surveys. 

The distribution of matter in the past light cone of an observer at the centre of the map. The expansion of the Universe has been scaled out. Circles indicate the redshift z, where (1+z) is the factor by which the Universe has expanded since the light crossed the circle on its way to the observer. Looking further out, i.e. further from the centre of the image, means looking further back in time, to an era when matter was distributed more smoothly. The inset zooms in on a massive galaxy cluster (10^15 solar masses). The ruler in the inset indicates the scale (5 Mpc = 16 million light years).

Model independent modified gravity N-body simulations

The assumption that General Relativity is the fundamental law of gravity requires an extrapolation of many orders of magnitude from solar system length scales where the theory is well tested. This uncertainty undergirds explorations of modified gravity theories, which are further motivated by the possibility of explaining dark energy and dark matter that have hitherto eluded detection. However, model-independent constraints on modified gravity models to date exist mainly on linear scales.

In work using DiRAC resources, we have presented the first N-body simulations of modified gravity models based on a consistent parameterisation that is valid on all length scales. We investigated the impact of a time-dependent modification of the gravitational force on the matter power spectrum, and consequently on weak-lensing observables, with a particular focus on the constraining power gained by including non-linear scales. We also examined how well existing fitting functions reproduce the non-linear matter power spectrum of the simulations.
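
A common way to express such a parameterisation, shown here schematically (see the paper for the precise definitions used), is through a modified Poisson equation relating the potential Φ to the matter density contrast δ:

    k^2 \Phi = -4\pi G\, a^2\, \mu(a)\, \bar{\rho}\, \delta ,

where μ(a) = 1 recovers General Relativity and a time-dependent μ(a) rescales the strength of gravity felt by matter; the simulations then propagate this modification into the non-linear regime.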

In the figure below, we show the performance of our ReACT formalism on the weak-lensing observables that are used to interpret galaxy surveys. We have employed our DiRAC simulations to forecast modified gravity effects at the level required for future surveys such as Euclid and the Vera C. Rubin Observatory (LSST). This paves the way for full model-independent tests of modified gravity using the huge influx of data that will become available from these upcoming surveys.

Reference: 

Sankarshana Srinivasan, Daniel B. Thomas, Francesco Pace and Richard Battye, “Cosmological gravity on all scales. Part II. Model independent modified gravity N-body simulations”, Journal of Cosmology and Astroparticle Physics, Volume 2021, June 2021, https://doi.org/10.1088/1475-7516/2021/06/016