HPQCD: hints of new physics in rare B decays

Figure 1. A possible decay pathway for 𝑩+ → 𝑲+𝒍+𝒍− in the Standard Model. The b quark in the B meson on the left undergoes a transition to an s quark, forming a K meson on the right. This can only happen via a loop containing W bosons and top quarks in the SM and has very low probability. Theories beyond the SM can have additional particles that appear instead of the W-t loop.

A key aim of the worldwide particle physics programme is to find evidence of new physics beyond our current Standard Model (SM) that would allow us to develop a more complete theory of fundamental physics. B meson decays are a good place to look because some of these decays are very rare in the SM, but the presence of new particles could boost their rates. The process in which a B meson (containing a b quark) decays to a K meson (containing an s quark) and a lepton/anti-lepton (electron, muon or tau) pair is a good example. In the SM it must proceed via a loop made of W bosons and top quarks, as in Fig. 1, and is highly suppressed. New particles could shortcut this loop and give a very different rate (smaller or larger, depending on how they combine with the SM process). Theorists in the HPQCD collaboration have been spearheading the international effort to calculate SM decay rates of B mesons from lattice QCD. Our efficient method for handling quarks on a spacetime lattice makes the DiRAC CSD3 supercomputer at Cambridge ideal for our calculations and enables us to achieve world-leading accuracy.

Fig. 2 below shows our results (arXiv:2207.13371, 2207.12468) for the fraction of 𝐵+ mesons that decay to 𝐾+𝑙+𝑙−, compared with the experimental data. The LHCb experiment at CERN has the most accurate data, and we see a significant difference between their values and the lattice QCD calculation, especially at low values of 𝑞², where 𝑞² is the invariant mass-squared of the 𝑙+𝑙− pair. Between 𝑞² = 1 and 6 GeV² the difference exceeds four times its uncertainty, which could be a hint of new physics in this decay process. The low 𝑞² region is challenging for lattice QCD because the K meson carries large momentum there; HPQCD’s calculation is the first to tackle this successfully. As Fig. 2 shows, the low 𝑞² region is important because the experimental results are often more precise there.
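To make the size of the discrepancy concrete, the sketch below shows how such a tension is usually quantified: the difference between the lattice prediction and the measurement divided by their combined uncertainty. The numerical values are placeholders for illustration only, not the published HPQCD or LHCb numbers.

```python
# Illustrative sketch (not HPQCD's actual numbers): how a tension between a
# lattice-QCD prediction and a measurement is quantified in units of the
# combined uncertainty. The values below are placeholders only.
import math

def tension(pred, pred_err, meas, meas_err):
    """Difference between prediction and measurement divided by the
    quadrature-summed uncertainty."""
    return abs(pred - meas) / math.sqrt(pred_err**2 + meas_err**2)

# Hypothetical branching fractions integrated over 1 < q^2 < 6 GeV^2,
# in units of 1e-7 (placeholders, not the published values):
lattice_pred, lattice_err = 1.8, 0.1
lhcb_meas, lhcb_err = 1.2, 0.1

print(f"tension = {tension(lattice_pred, lattice_err, lhcb_meas, lhcb_err):.1f} sigma")
```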

Figure 2. The blue band shows HPQCD’s results (with their uncertainty) for the fraction of B+ mesons that decay to 𝑲+𝒍+𝒍− as a function of the invariant mass-squared (𝒒²) of the 𝒍+𝒍− pair. The points show experimental results, including those from the LHCb experiment at CERN.

HPQCD was also able to predict the SM rate for 𝐵 → 𝐾𝜈𝜈̅ (𝜈 is a neutrino). This process has not yet been seen by experiment but should be visible in future at the Belle II experiment at SuperKEKB in Japan. It provides further exciting opportunities for new physics searches.

The statistical properties of stars at redshift z = 5 compared with the present epoch

Matthew Bate (University of Exeter)

The distribution of stellar masses, known as the initial mass function (IMF), is of central importance in astrophysics because the radiative, chemical and mechanical feedback from a star depends strongly on its mass. Despite many observational studies, there is little evidence for variation of the IMF in our Galaxy (Bastian et al. 2010 ARA&A 48 339). During the past decade, radiation hydrodynamical simulations of star cluster formation have been able to produce stellar populations that match the typical properties of Galactic stellar populations quite well (Bate 2012 MNRAS 419 3115). In agreement with observations, they have found that the distributions of stellar masses produced by present-day Galactic star formation are surprisingly invariant. For example, star cluster formation calculations have shown that varying the metallicity of star-forming gas between 1/100 and 3 times solar metallicity has little effect on stellar properties for present-day star formation (Myers et al. 2011 ApJ 735 49; Bate 2014 MNRAS 442 285; Bate 2019 MNRAS 484 2341). The only effect of metallicity on present-day star formation identified so far is on the frequency of close binary systems – a triumph of the simulations of Bate (2014, 2019), performed using DiRAC, is that they do reproduce the recently observed anti-correlation of close binary frequency with metallicity (Badenes et al. 2018 ApJ 854 147; El-Badry & Rix 2019 MNRAS 482 L139; Moe, Kratter & Badenes 2019 ApJ 875 61).

The strongest evidence for variation of the IMF comes from stellar populations that formed much earlier in the Universe (Smith 2020 ARA&A 58 577). Recently, DiRAC has been used to perform the first radiation hydrodynamical simulations of star cluster formation at high redshift, z = 5 (Bate 2023 MNRAS 519 688). The main result of this study is that although the IMF does not depend significantly on metallicity for present-day star formation (z = 0), it does depend on metallicity at redshift z = 5: high metallicity gives a ‘bottom-light’ IMF in which low-mass stars are much rarer than in present-day star formation (Fig. 1). The difference at high redshift is that the cosmic microwave background radiation is much warmer. High-metallicity gas cannot cool to temperatures as low at z = 5 as it can at z = 0, resulting in less fragmentation and larger typical stellar masses.
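The physical point can be made with a one-line calculation: the cosmic microwave background temperature scales with redshift as

```latex
T_{\mathrm{CMB}}(z) = T_0\,(1+z) \simeq 2.73\ \mathrm{K} \times (1+5) \approx 16.4\ \mathrm{K} \quad \text{at } z = 5 .
```

Gas at z = 5 therefore cannot radiatively cool below roughly 16 K, whereas at z = 0 the floor is close to 3 K; a warmer temperature floor raises the Jeans mass and suppresses fragmentation.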

Figure 1: The cumulative stellar initial mass functions (IMFs) obtained from radiation hydrodynamical simulations of star cluster formation performed at redshifts z = 0 (left; Bate 2019) and z = 5 (right; Bate 2023). The IMFs at z = 0 are quite insensitive to the metallicity of the molecular gas (ranging from Z = 0.01 to 3 Z⊙), but at redshift z = 5 the IMFs become metallicity dependent, with high-metallicity gas producing a bottom-light IMF.

Gargantuan black holes at cosmic dawn

High-redshift surveys have so far discovered over a hundred quasars above redshift six. This number will likely increase significantly in the coming years thanks to ongoing and planned deep, wide-field surveys, such as eROSITA in X-rays, the Vera Rubin Observatory at optical wavelengths and Euclid in the infrared. Such observations will also push these discoveries to lower luminosities, giving a more complete picture of the build-up of the black hole population in the early Universe. In fact, it may even be possible to detect 10⁵ solar-mass black holes at z = 10 with megasecond exposures from NIRCam on JWST or from the proposed Lynx X-ray telescope.

Focusing on the most extreme examples, supermassive black holes in excess of a billion solar masses have been detected above z = 7, challenging theoretical models of the growth of such objects. The current record holder in terms of luminosity, and hence black hole mass, is SDSS J010013.02+280225.8, with an inferred mass exceeding 10¹⁰ solar masses at z = 6.3. While there have been a number of theoretical studies of these objects, the models still struggle to form some of the most massive observed black holes at z > 6.

As part of our DiRAC project dp012 (Bennett et al., MNRAS, to be submitted), using the computational facilities in Cambridge and Durham, we have investigated the most promising scenarios for building up the most massive known supermassive black holes in the early Universe. Allowing for mildly super-Eddington accretion and an earlier seeding redshift, which is still entirely compatible with the direct-collapse seeding model, we have found that it is possible to assemble a 10¹⁰ solar-mass black hole by z = 6.
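A back-of-the-envelope estimate, sketched below with the standard Eddington-growth formula, shows why super-Eddington accretion and early seeding matter for the timing. The seed mass, radiative efficiency and Eddington ratios used here are illustrative assumptions, not the values adopted in the dp012 simulations.

```python
# Minimal sketch (textbook Eddington-limited growth, not the simulation code):
# M(t) = M_seed * exp(f_Edd * (1 - eps)/eps * t / t_Edd), with
# t_Edd = sigma_T * c / (4 pi G m_p) ~ 450 Myr.
import math

T_EDD_MYR = 450.0  # Eddington (Salpeter) timescale in Myr

def growth_time_myr(m_seed, m_final, eps=0.1, f_edd=1.0):
    """Time in Myr to grow from m_seed to m_final at a constant Eddington
    ratio f_edd and radiative efficiency eps."""
    n_efolds = math.log(m_final / m_seed)
    return n_efolds * T_EDD_MYR * eps / ((1.0 - eps) * f_edd)

# Illustrative numbers: a 1e5 Msun direct-collapse seed grown to 1e10 Msun.
for f_edd in (1.0, 1.5):
    t = growth_time_myr(1e5, 1e10, eps=0.1, f_edd=f_edd)
    print(f"f_Edd = {f_edd:.1f}: ~{t:.0f} Myr of continuous accretion")
# The Universe is only about 900 Myr old at z = 6, so even modestly
# super-Eddington episodes and early seeding relax the timing problem.
```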

As shown in Fig. 1, the mass growth of our simulated black hole is consistent with observations, hinting that episodic super-Eddington accretion may be required in the early Universe to grow the most extreme black holes within a Gyr of cosmic time. Importantly, we found that the feedback impact of this black hole is very significant.

This is shown in Fig. 2, where we plot the combined thermal and kinetic SZ decrement (top) and the gas radial velocity (bottom) for the original FABLE model (left) and our new simulations (right). Hot, fast outflows push gas well beyond the virial radius, diminishing the SZ signal on small scales and enhancing it on large scales. Moreover, by studying the redshift evolution of the simulated hot halo we make detailed predictions for mock maps, from radio to X-rays, that should help us disentangle how mass is accumulated onto these gargantuan black holes over cosmic time, e.g. an early, sustained growth-and-feedback scenario versus a late, rapid one.
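For reference, the thermal (Compton-y) and kinetic SZ signals are line-of-sight integrals of the standard form below, with n_e the electron number density, T_e the electron temperature and v_r the line-of-sight gas velocity (sign conventions for the kinetic term vary):

```latex
y = \frac{\sigma_T k_B}{m_e c^2} \int n_e T_e \, \mathrm{d}l ,
\qquad
\frac{\Delta T_{\mathrm{kSZ}}}{T_{\mathrm{CMB}}} = -\frac{\sigma_T}{c} \int n_e v_r \, \mathrm{d}l .
```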

High-precision QCD: Quantifying effects from photon exchanges in weak decays

New discoveries in particle physics can be made in different ways. One can collide particles at higher and higher energies and look for direct evidence of new particles and interactions, or instead search for tiny but measurable discrepancies between experimental measurements and theoretical predictions, at the so-called precision frontier. This project pushes the latter approach by studying, with high precision, the decays into leptons of mesons such as pions and kaons, the lightest particles made of two quarks and gluons. In this process the fundamental weak interaction allows the annihilation of the two constituent quarks and the consequent emission of an electron-neutrino or muon-neutrino pair (the muon being a heavier copy of the electron).
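For orientation, in the absence of QED corrections the rate for this process is given by the textbook expression below, where f_P is the decay constant of the meson P (pion or kaon), which encodes all of the QCD dynamics, m_P and m_ℓ are the meson and lepton masses, G_F is the Fermi constant and V_{q1q2} is the relevant CKM matrix element:

```latex
\Gamma(P^- \to \ell^- \bar{\nu}_\ell) \;=\;
\frac{G_F^2}{8\pi}\, |V_{q_1 q_2}|^2\, f_P^2\, m_\ell^2\, m_P
\left(1 - \frac{m_\ell^2}{m_P^2}\right)^{\!2} .
```

The QED corrections discussed below modify this expression and, in particular, the ratio of the kaon and pion rates (the quantity δR shown in Figure 2).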

Figure 1.  Decay of a charged pion into a muon and a neutrino, including QED corrections.

The decay of these particles is dominated by the effect of the strong interactions, also called QCD, inside the meson, but the precision now being aimed for makes it necessary to also account for subleading effects. Chief among these are QED interactions, which result in the exchange of photons between any of the charged elementary particles involved in the process – see e.g. Figure 1, which depicts the decay of a charged pion into a muon and a neutrino.

Figure 2. Possible scaling trajectories to the “infinite-volume” limit of our finite-volume estimate of δR.

One way to compute QCD-governed processes at the conditions of our current universe is to simulate them on a finite four-dimensional grid, or lattice, corresponding to a discretised and finite version of space and time. In our calculation we have been able to simulate particles with masses that match the values found in nature. This is generally very challenging, and it has been possible thanks to recent algorithmic developments and the DiRAC high-performance computing resources. Photons, however, are massless particles and make the calculation more difficult: since they can travel large distances before interacting with other particles, simulating QED interactions in a finite box results in sizeable finite-volume effects. The DiRAC Extreme Scaling facilities allow us to simulate QCD+QED on very large grids, but even with this cutting-edge technology the finite-volume effects are still large and have to be studied carefully.

In our finite-volume simulation we are able to compute with very high precision the rate of the decay of the light mesons into muon-neutrino pairs including QED corrections, but when comparing with experimental results – which are performed in our vastly larger universe – the final precision of our results is dominated by these finite-size effects. This is illustrated in Figure 2, where we show our result for the QED correction to the ratio of the kaon and pion decay rates, called δR. The point on the right is the one we obtain at the volume of our simulation through an extensive data analysis, while the point on the left shows how much larger our uncertainty becomes when we try to extrapolate our result to the “infinite-volume” limit. This is due to our limited knowledge of how this quantity varies when the size of the box is changed. In red, green and blue we show some scaling trajectories on which our determination of δR could lie, although with our current study we cannot discriminate between them. This means dedicated simulations on multiple, larger boxes are required to reduce the uncertainty on our infinite-volume estimate.
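The sketch below illustrates the logic in a toy setting. The box size, δR value and scaling coefficients are placeholders, and the power-law ansätze merely stand in for the kind of scaling trajectories shown in red, green and blue in Figure 2 (the true volume dependence is not known): without knowing how δR scales with the box size L, the extrapolated value carries an uncertainty far larger than the finite-volume statistical error.

```python
# Toy illustration (assumed functional forms and placeholder numbers, not the
# collaboration's analysis): why a single finite-volume result leaves a large
# infinite-volume uncertainty. We assume the finite-size correction scales as
# a power of 1/L and vary both the power and the size of its coefficient.
import itertools

L = 5.5                                # hypothetical box size in fm
deltaR_L, stat_err = 0.0120, 0.0005    # placeholder finite-volume value, error

# Hypothetical scaling trajectories: deltaR(L) = deltaR_inf + c / L**n
powers = (1, 2, 3)
coeffs = (-0.05, 0.0, +0.05)           # assumed bound on the unknown coefficient

estimates = [deltaR_L - c / L**n for n, c in itertools.product(powers, coeffs)]
spread = (max(estimates) - min(estimates)) / 2.0

print(f"finite-volume value : {deltaR_L:.4f} +/- {stat_err:.4f}")
print(f"infinite-volume band: {deltaR_L:.4f} +/- {max(spread, stat_err):.4f}")
```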

Once we reduce this large uncertainty with further simulations on DiRAC resources, the precision of our calculations will become comparable to that of experimental measurements, and we will be able to test the validity of our current theory of particles and interactions with unprecedented accuracy.

Compton Amplitude and Nucleon Structure Functions via the Feynman-Hellmann theorem

Understanding the internal structure of hadrons from first principles remains one of the foremost tasks in particle and nuclear physics. It is an active field of research with important phenomenological implications in high-energy, nuclear and astroparticle physics. The structure of hadrons relevant for deep-inelastic scattering is completely characterized by the Compton amplitude (see left-hand figure). A direct calculation of the Compton amplitude within the framework of lattice QCD [1] provides an opportunity to investigate the non-perturbative effects at low scales, which are less well understood and might have significant implications for global QCD analyses. In the right-hand figure [2] we show the lowest moment of the F₂ structure function for the proton versus Q². There is clearly an effect from higher-twist terms (terms proportional to 1/Q²), following the experimental results.
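For reference, the lowest Cornwall-Norton moment of F₂ and the twist expansion used in the fit take the standard form below (the label for the leading-twist term and the higher-twist coefficient c₄ are our notation, chosen for illustration):

```latex
% Lowest (n = 2) Cornwall-Norton moment of F_2 and its twist expansion
M_2(Q^2) = \int_0^1 \mathrm{d}x \, F_2(x, Q^2),
\qquad
M_2(Q^2) = m_2^{\,\mathrm{twist}\text{-}2}(Q^2) + \frac{c_4}{Q^2} + \mathcal{O}\!\left(Q^{-4}\right).
```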

Left panel: Deep-inelastic scattering, in which a proton is broken up by a highly energetic electron emitting a photon that strikes a quark (parton) inside the hadron. The measured cross section is the square of this amplitude and is described by the Compton amplitude shown in the figure.
Right panel: The computed lowest moment of the F₂ structure function for the proton. The experimental Cornwall-Norton results are shown as black stars. The fit is based on the twist expansion.

Several phenomenologically relevant quantities are accessible, e.g. the low- and high-x regions of parton distribution functions, power corrections, the subtraction function, and generalised parton distributions in the zero-skewness region, to name a few. Our long-term objective is to improve the particle and nuclear physics community’s overall understanding of non-perturbative effects in hadron structure from first principles.

The simulations and analysis were performed on the DiRAC-3 Extreme Scaling service (Edinburgh) and Data Intensive service (Cambridge).

[1] K. U. Can et al. [QCDSF/UKQCD/CSSM], Phys. Rev. D 102 (2020) 114505.
[2] M. Batelaan et al. [QCDSF/UKQCD/CSSM], Phys. Rev. D 107 (2023) 054503.

Extreme QCD: Quantifying the QCD Phase Diagram

Project: dp006


The FASTSUM collaboration uses DiRAC supercomputers to simulate the interaction of quarks, the fundamental particles which make up protons, neutrons and other hadrons. The force which holds quarks together inside these hadrons is Quantum Chromodynamics, “QCD”. We are particularly interested in the behavior of QCD as the temperature increases to billions, and even trillions, of Kelvin. These conditions existed in the first moments after the Big Bang, and are recreated on a much smaller scale in heavy-ion collision experiments at CERN (near Geneva) and at the Brookhaven laboratory (near New York).

The intriguing thing about QCD at these temperatures is that it undergoes a substantial change in nature. At low temperatures, QCD is an extremely strong, attractive force, and so it is effectively impossible to pull quarks apart, whereas at temperatures above the “deconfining” temperature Tc it is much weaker: the quarks are virtually free and the hadrons they once formed “melt”.

We study this effect by calculating the masses of protons and other hadrons and their “parity partners”, which are like their mirror-image siblings. Understanding how these masses change with temperature can give deep insight into the thermal nature of QCD and its symmetry structure.

Our most recent results are summarized in the plots below. On the left we show the temperature variation of the masses of the D and D* mesons (which are made up of a charm quark and a light quark). These become nearly degenerate at the deconfining temperature, indicated by the vertical red line. On the right we show the R parameter, which measures the degeneracy of the positive- and negative-parity states of particular baryons. Results are plotted for the N (nucleon, i.e. proton/neutron) as well as for three other baryons which contain strange quark(s). This shows that the two parity states become nearly degenerate (corresponding to R → 0) in the high-temperature regime above the vertical lines.
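For reference, a degeneracy measure of this kind can be built from the positive- and negative-parity correlators G₊(τ) and G₋(τ); a definition along the lines commonly used in the literature (our notation, with the published analyses combining this ratio over Euclidean time τ) is

```latex
R(\tau) \;=\; \frac{G_{+}(\tau) - G_{-}(\tau)}{G_{+}(\tau) + G_{-}(\tau)} ,
```

so that R → 0 when the two parity channels become degenerate, while R is close to 1 at low temperature, where the positive-parity ground state is much lighter than its partner.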

FLAMINGO — A large suite of state-of-the-art galaxy cluster simulations for high-precision cosmology

Leads: Ian McCarthy, Joop Schaye, Matthieu Schaller & Virgo II

The worldwide observational cosmology community is gearing up to receive a deluge of new data from the large survey campaigns currently mapping the sky. These data will allow us, for the first time, to place tight constraints on some key properties of our standard cosmological model and to measure the mass of the neutrino. However, to interpret these data, equally precise and accurate models of the Universe must be available. Furthermore, the model variations have to be designed to encompass the whole parameter range in which our Universe could lie.

The Virgo Consortium’s FLAMINGO project is designed to provide exactly the virtual twins of our Universe that will be used alongside these modern surveys. The project includes the largest cosmological simulation with gas ever run (shown above left), as well as the largest simulation ever run that includes neutrinos, whose mass is one of the key parameters we hope to constrain. However, as argued above, a single simulation is not sufficient to encompass the realm of plausible universes. FLAMINGO has therefore been extended, using DiRAC time, to include many variations, in which we vary the neutrino masses and the background cosmology.

Besides targeting the largest-ever simulations, the other feature that sets the project apart is its variation of the galaxy formation model. The details of how galaxies form, and especially how their central supermassive black holes interact with their environment, are still too poorly understood for the requirements of the era of precision cosmology. In this project we therefore varied the model widely, scaling the effect of the black holes up and down by large amounts. This will be of crucial importance to teams attempting to marginalise over this effect in the data. In the right-hand figure we show the relative effect of these variations on the matter power spectrum, compared to a dark-matter-only universe. Our library of runs shows large, multi-percent variations, which will be crucial to take into account when attempting to measure the key parameters of our Universe with sub-percent accuracy.
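As a concrete example of the quantity shown in the right-hand figure, the sketch below computes the fractional change of the matter power spectrum of a hydrodynamical run relative to a dark-matter-only counterpart. The file names and the two-column text format are illustrative assumptions, not the FLAMINGO data products.

```python
# Illustrative sketch: quantify baryonic effects on the matter power spectrum
# as the ratio P_hydro(k) / P_DMO(k) - 1. The file names and the two-column
# (k, P) text format are assumptions for illustration only.
import numpy as np

def load_power_spectrum(path):
    """Load a two-column text file of wavenumber k [h/Mpc] and power P(k)."""
    k, pk = np.loadtxt(path, unpack=True)
    return k, pk

k_hyd, p_hyd = load_power_spectrum("power_hydro.txt")  # hypothetical file
k_dmo, p_dmo = load_power_spectrum("power_dmo.txt")    # hypothetical file

assert np.allclose(k_hyd, k_dmo), "runs must share the same k binning"
suppression = p_hyd / p_dmo - 1.0  # negative values = baryons suppress power

for k, s in zip(k_hyd, suppression):
    print(f"k = {k:6.3f} h/Mpc : {100 * s:+.1f} %")
```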

Virgo-I: The Milky Way’s plane of satellites is consistent with ΛCDM

Till Sawala, Marius Cautun, Carlos Frenk, John Helly, Jens Jasche, Adrian Jenkins, Peter Johansson, Guilhem Lavaux, Stuart McAlpine, Matthieu Schaller

Nature Astronomy, in press, 2022NatAs.tmp.273S (arXiv:2205.02860)

In the 1970s, the great Cambridge astronomer, the late Professor Donald Lynden-Bell, noted that the 11 bright satellites orbiting the Milky Way seem to be arranged in an implausibly thin plane piercing through our galaxy – the Milky Way’s “plane of satellites”. To add to the mystery, it was later argued that these galaxies are circling the Galaxy in a coherent, long-lived disk. These observations became known as the “plane of satellites problem” of the standard cosmological model, ΛCDM, wherein the Galaxy is surrounded by a roughly spherical, dispersion-supported dark matter halo.

Positions and orbits of the 11 classical satellite galaxies of the Milky Way seen “edge-on”, integrated for 1 billion years into the past and future. The right panel is a zoom-in of the left panel. The black dot marks the centre of the Milky Way, arrows mark the observed positions and the directions of travel of the satellites. While they currently line up in a plane (indicated by the grey horizontal line), that plane quickly dissolves as the satellites move along their orbits.

We have shown that the reported exceptional anisotropy of the Milky Way satellite system is strongly contingent on its lopsided radial distribution, combined with the close but fleeting conjunction of the two most distant satellites, Leo I and Leo II. Using Gaia proper motions, we show the plane of satellites to be transient rather than rotationally supported.

One of the new high-resolution simulations of the dark matter enveloping the Milky Way and its neighbour, the Andromeda galaxy. The new study shows that earlier, failed attempts to find counterparts in dark matter simulations of the plane of satellites which surrounds the Milky Way were due to a lack of resolution.

We carried out 202 high-resolution cosmological zoom-in constrained simulations on COSMA-8, based on initial conditions designed to reproduce Local Group analogues within the observed large-scale structure. We show that the failure of previous simulations to find thin, seemingly rotationally supported satellite planes is entirely due to their limited resolution. We address this shortcoming by using the GALFORM semi-analytic galaxy formation model to identify “orphan” satellites whose dark matter halos have been artificially disrupted. In this way, the radial distribution of the satellites in our simulations is consistent with the Milky Way data. Our simulations demonstrate that satellite alignments are short-lived, just as inferred for the Milky Way. Finally, the simulations reveal that planes of satellites as thin as that of the Milky Way, and whose orbital poles have a similar degree of spatial coherence as in the Milky Way, are not uncommon in ΛCDM. Rather, the failure to find them in previous simulations was due to resolution limitations in the very dense central regions of halos.
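A standard way to quantify how "thin" a satellite configuration is uses the minor-to-major axis ratio of the satellites' scatter tensor; the sketch below is a generic implementation of that statistic, not the paper's exact pipeline, and the satellite positions are random placeholders rather than the observed or simulated data.

```python
# Sketch of a common plane-of-satellites statistic: the minor-to-major axis
# ratio c/a of the satellites' position scatter tensor. A small c/a indicates
# a thin, plane-like configuration. Positions below are random placeholders.
import numpy as np

def axis_ratio(positions):
    """c/a from the eigenvalues of the 3x3 tensor (1/N) sum_i x_i x_i^T,
    for Galactocentric positions of shape (N, 3)."""
    tensor = positions.T @ positions / len(positions)
    eigvals = np.sort(np.linalg.eigvalsh(tensor))
    return np.sqrt(eigvals[0] / eigvals[-1])

rng = np.random.default_rng(0)
# Mock flattened configuration of 11 satellites (kpc), for illustration only.
satellites = rng.normal(scale=(100.0, 100.0, 20.0), size=(11, 3))
print(f"c/a = {axis_ratio(satellites):.2f}")  # small value = flattened system
```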