Some of the Research Highlights from our users in 2019:


B meson oscillations

Figure 1. Oscillations between Bd or Bs mesons and their anti-particles occur because of coupling via a ‘box diagram’ with W bosons and top quarks.

Watching the oscillations of two coupled pendulums (for example, attached to the same not-too-rigid support) provides insights into simple but rather counter-intuitive physics. Set one pendulum oscillating and, after a while, the other one will start up as the amplitude of the first dies down. Later still the amplitude of the second pendulum will die down and the first, rather astonishingly, start up again. The energy of the motion sloshes back and forth between the two pendulums because the swinging of either single pendulum is not a simple (‘normal’) mode of oscillation of the coupled system. Setting one pendulum in motion in fact sets both normal modes oscillating and these combine the motion of both pendulums in different ways. As time progresses the interference between the normal modes, which have different frequencies, results in this puzzling (at first sight) behaviour.
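The energy exchange described above can be made concrete with a toy numerical sketch (the mode frequencies below are chosen arbitrarily for illustration): releasing pendulum 1 with unit amplitude excites both normal modes equally, and their interference produces the beats.

```python
import math

def coupled_pendulums(t, w_slow=1.0, w_fast=1.1):
    """Displacements of two identical coupled pendulums when pendulum 1 is
    released with unit amplitude and pendulum 2 starts at rest. Each
    pendulum's motion is an equal superposition of the two normal modes
    (in-phase mode at w_slow, anti-phase mode at w_fast)."""
    x1 = 0.5 * (math.cos(w_slow * t) + math.cos(w_fast * t))
    x2 = 0.5 * (math.cos(w_slow * t) - math.cos(w_fast * t))
    return x1, x2

# Half a beat period later, all the motion has sloshed into pendulum 2:
t_half_beat = math.pi / (1.1 - 1.0)
x1, x2 = coupled_pendulums(t_half_beat)
```

At `t_half_beat` the first pendulum is momentarily at rest while the second swings with the full original amplitude; another half beat period later the roles reverse again.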

Figure 2. Values obtained for Vts and Vtd from experimental results for Bs/Bd oscillations and lattice QCD (solid lozenges) or assuming CKM unitarity (dashed). HPQCD’s (black) is the most accurate direct result to date.

An analogous phenomenon occurs in the physics of particles that are bound states of a bottom quark and either an anti-strange or anti-down quark, known as a Bs or Bd meson respectively. We can distinguish these particles from their anti-particles by their decay processes in our particle detectors. However, the B and anti-B are coupled together through the weak interaction process shown in Figure 1. The particles with simple time behaviour (the ‘normal modes’) are then mixtures of the B and anti-B. A particle produced as a B may oscillate back and forth into an anti-B (and decay as one in our detector at a later time) with a frequency that can be measured very accurately in experiments at the Large Hadron Collider at CERN. Calculation of the coupling induced by Figure 1 requires the numerical solution of the theory of the strong force (which holds the B and anti-B together) using the method known as lattice QCD. HPQCD has recently done the most accurate calculations of this to date (arXiv:1907.01025) using DiRAC. Comparison with the experimental results enables us to determine the coupling between W bosons, top quarks and strange or down quarks (Vts and Vtd), as shown in Figure 2. Our results agree well with the Standard Model expectation that the CKM matrix of W-boson couplings is unitary, and limit the room for new physics. We also predict the annihilation rate of Bs and Bd mesons to a µ+µ− pair, a key mode for new-physics searches at the LHC. HPQCD has also done the first lattice QCD calculation of the difference of decay rates of the two normal modes of the Bs (arXiv:1910.00970).
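The ‘normal mode’ picture can be written down explicitly with the standard textbook mixing formulae (quoted here for orientation only; they are not reproduced from the papers cited above):

```latex
% Mass eigenstates (the ``normal modes'') are mixtures of flavour states:
\lvert B_{L,H} \rangle = p \, \lvert B_q \rangle \pm q \, \lvert \bar{B}_q \rangle ,
\qquad q = d, s .
% Probability that a meson produced as a B_q decays as its anti-particle:
P_{B_q \to \bar{B}_q}(t) = \tfrac{1}{2} \, e^{-\Gamma_q t}
  \left[ 1 - \cos( \Delta m_q \, t ) \right] .
% The oscillation frequency is set by the box diagram of Figure 1; the
% strong-interaction input f_{B_q}^2 \hat{B}_{B_q} is supplied by lattice QCD:
\Delta m_q \propto f_{B_q}^2 \, \hat{B}_{B_q} \, \lvert V_{tq}^{*} V_{tb} \rvert^{2} .
```

Measuring the oscillation frequency Δm and computing the lattice factor thus pins down |Vtq|, which is how the values in Figure 2 are obtained.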


The external photo-evaporation of planet-forming discs

Planets form within flattened discs of material around young stars. These young stars themselves form in clustered groups of up to hundreds of thousands of stars. Most planet-forming discs therefore find themselves in a situation where they are being shone upon by ultraviolet (UV) radiation from these nearby young stars. This has the effect of stripping material from the disc in what is called a photoevaporative wind. This could play an important role in influencing how the discs evolve and the planets that they form.


Figure 1: A projected view of an example disc external photo-evaporation model. The colour scale shows the density and the arrows represent the flow structure of the material.

There are two serious issues that limit our understanding of the external photoevaporation of discs. The first is that disc evaporation is very hard to model. To get the temperature correct, which determines the structure of the wind and the rate at which mass is lost, one has to solve for the chemistry of the gas and how it is affected by the incoming UV radiation. Furthermore, each cell tries to cool by having “line photons” escape and carry away energy. Each region of the calculation therefore depends on every other region: if the surroundings are very dense these photons won’t escape, and vice versa. In the past we have therefore only had 1D models of the process, except in the strongest UV environments where the modelling is easier.
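The way dense surroundings trap line photons and suppress cooling can be illustrated with the generic escape-probability idea (a sketch of the concept only, not the actual TORUS-3DPDR treatment):

```python
import math

def escape_probability(tau):
    """Fraction of line photons that escape a region of optical depth tau,
    using the standard escape-probability form beta = (1 - exp(-tau)) / tau.
    Optically thin gas (tau -> 0) cools freely (beta -> 1); dense, opaque
    surroundings (large tau) trap the photons and suppress cooling."""
    if tau < 1e-8:
        return 1.0
    return (1.0 - math.exp(-tau)) / tau

# Cooling efficiency for increasingly dense (opaque) surroundings:
betas = [escape_probability(tau) for tau in (0.0, 1.0, 10.0)]
```

Because tau along every line of sight depends on the density everywhere else, the cooling in each cell couples to the whole domain, which is what made the problem so hard beyond 1D.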

T. J. Haworth, as part of the DiRAC project dp100, has developed the TORUS-3DPDR code (Bisbas et al. 2015, Harries et al. 2019), to tackle this problem in 2D models for the first time (Haworth & Clarke 2019). This has vastly improved our understanding of how discs are evaporated, including the rate at which they lose mass and where the mass is actually lost from the disc.

The other key challenge to understanding external photoevaporation is observing it in action. 1D models are insufficient for this purpose, but with these new 2D models, predictions can be made as to what should be observed (Haworth & Owen 2020).

References

  • Bisbas et al. (2015), MNRAS, 454, 2828
  • Harries et al. (2019), A&C, 27, 63
  • Haworth & Clarke (2019), MNRAS, 485, 3895
  • Haworth & Owen (2020), MNRAS, 492, 5030

Understanding the fundamental physics of the Earth’s radiation belts

Electromagnetic waves interact strongly with charged particles in the Earth’s inner magnetosphere. It is important to be able to model the evolution of these particles, since we rely on the many satellites that orbit within this hazardous radiation environment. Our current approaches to modelling the effect of wave-particle interactions in the outer radiation belt, over macroscopic time and length-scales, rely heavily upon quasilinear wave theory (e.g. see [1]). This theory describes wave-particle interactions as diffusive processes in the plasma, flattening out gradients and slowly moving a small fraction of electrons to higher energies (leading to acceleration), or different pitch-angles (leading to loss). However, the quasilinear formalism relies upon a number of assumptions that may not hold for all important wave types in the radiation belts. Use of the DiRAC facilities has enabled us to carry out important studies into the efficacy of the quasilinear formalism.

In [2], we used the EPOCH particle-in-cell code (see [3]) to conduct numerical experiments to investigate electron interactions with an incoherent spectrum of whistler-mode waves. Our novel approach directly extracts diffusive characteristics across all energy and pitch angle space. This benchmarking work establishes a framework for future investigations on the nature of diffusion due to whistler-mode wave-particle interactions, using particle-in-cell numerical codes with driven waves as boundary value problems.

In [4], we use the techniques developed in [2] to investigate the electron response to whistler-mode waves as a function of increasing wave amplitude. We find that whistler-mode waves with amplitudes of order (δB/B)² ~ O(10⁻¹⁰)–O(10⁻⁶) in a uniform background field B give diffusive and advective dynamics. Over appropriately short timescales the diffusive component of the dynamics agrees with quasilinear theory even for the highest-amplitude waves. These timescales range from thousands to tens of gyroperiods for the lowest- and highest-amplitude waves respectively. This provides very helpful insight into the fundamental nature of electron dynamics in energy and pitch-angle space due to the wave-particle interaction under the uniform background-field assumption used in the basic quasilinear theory. It also motivates a variety of future numerical experiments and theoretical investigations on the applicability (or otherwise) of quasilinear diffusion theory to electron dynamics in the Earth’s radiation belts, using the particle-in-cell method established in [2].
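The core idea of extracting diffusive characteristics from an ensemble of particles (this is a minimal 1-D illustration, not the EPOCH analysis itself) is that for diffusive dynamics the variance of the displacements grows linearly in time, ⟨Δx²⟩ = 2Dt:

```python
import numpy as np

def estimate_diffusion_coefficient(n_steps=300, n_walkers=10000, dt=1.0, seed=1):
    """Toy 1-D analogue of measuring a diffusion coefficient from ensemble
    spreading: simulate random walkers, then fit the linear growth of the
    displacement variance, <dx^2> = 2 D t."""
    rng = np.random.default_rng(seed)
    # Unit-variance steps per unit time => true D = 1 / (2 dt) = 0.5
    x = rng.normal(0.0, 1.0, size=(n_steps, n_walkers)).cumsum(axis=0)
    t = dt * np.arange(1, n_steps + 1)
    variance = x.var(axis=1)
    # Least-squares slope of var(t) through the origin, halved
    return float(variance @ t) / float(t @ t) / 2.0

D_est = estimate_diffusion_coefficient()
```

In the radiation-belt problem the same fit is performed in energy and pitch-angle space rather than position, and the measured coefficients are compared against the quasilinear predictions.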

References

  • [1] Glauert, S. A., and Horne, R. B. (2005), doi:10.1029/2004JA010851
  • [2] Allanson, O., et al. (2019), doi:10.1029/2019JA027088
  • [3] Arber, T. D., et al. (2015), doi:10.1088/0741-3335/57/11/113001
  • [4] Allanson, O., et al. (2020), doi:10.1029/2020JA027949

Exploring fundamental fields with strong gravity

Einstein’s theory of general relativity is both complex and beautiful; even after 100 years, there is still much it can reveal to us about the nature of gravity. It may also help us answer other fundamental questions, such as the nature of the dark matter component of the universe. So far, this component is only known to interact gravitationally with other matter, and so its behaviour in strong gravity regimes may be the only way to probe key properties, such as its mass.

This project used numerical simulations of fields in gravitational backgrounds to explore the behaviour of light dark matter particles around black holes. We demonstrated how accretion of light dark matter onto a black hole leaves a distinctive pattern in the density (above left) and began to study how its behaviour around spinning black holes might probe its self-interactions (below right). Future work will extend these studies to the binary case, where such dark matter imprints may be observable by future detectors.


UKMHD: How is the solar corona heated?

The only viable mechanism to heat the outer solar atmosphere must involve the magnetic field. Two main approaches have been adopted. One is to model the heating along individual magnetic field lines and, by specifying the spatial and temporal form of the heating term in the thermal energy equation, try to compare predicted temperatures and densities with the observed values. In this way, constraints on the form of the heating are determined. However, the heating used is not derived from physical processes. The advantage of this approach is that the rapid spatial variations in the transition region can be adequately resolved in 1D. The other approach is to solve the full MHD equations, without any assumptions on the form of the heating, and see how heating occurs through Ohmic and viscous effects. This second approach, while physically more realistic, requires the numerical resolution in 3D of large but narrow current sheets in which magnetic reconnection can occur. In the nanoflare heating model of Parker, a large number of intense current layers are required, and resolving their sharp current gradients demands extremely large numerical grids. Hence, DiRAC is the only UK computing facility that can cope with the computational requirements. However, it remains extremely difficult to resolve the transition region fully when the thermodynamics is included.

In a unique set of numerical experiments, the evolution of the solar coronal magnetic field is modelled by slowly stressing it through slow photospheric motions. Once a significant amount of free magnetic energy has built up, an instability triggers a large release of energy. The important point is that the magnetic field does not release all the stored energy but tries to relax towards a non-potential state. After this large event, the imposed photospheric motions continue and the magnetic field releases more energy in the form of heat and kinetic energy in a series of smaller but more frequent events. The energy release is spread across the whole region covered by the magnetic field, and this is known as an MHD avalanche process. For the first time, Reid et al. (A&A, 633, 2020) have used DiRAC to investigate in detail how the heating process occurs and where the large number of heating events are spatially located. Hence, the form of the coronal heating function is determined, without any prior assumptions, and can be used with the field-aligned modelling to determine the plasma temperature and density for comparison with observations.

Figure 1 Left: Heating on a field line as a function of position and time. Middle: Heating on a field line, averaged in time, as a function of distance along the field line. Right: Heating on a field line, averaged in space, as a function of time. Distance s is normalised to the field line length and time is in units of the Alfven time.

Figure 1 (left) shows a contour plot of the heating along one field line as a function of distance along the field and time. Red indicates strong heating events. They are localised in space and time. Figure 1 (middle) shows the time-averaged heating along the same field line. There is one large heating event about 0.4 of the distance along the field line, but this is due to the very first event. After that there is no preferred location for the heating. Figure 1 (right) shows the spatially averaged heating as a function of time. Again, there is one large event that starts off the avalanche process, and then the heating comes in regular bursts. For more details see Reid et al., Astron. & Astrophys., 633, id.A158.


The Simba Cosmological Galaxy Formation Simulations

Creating accurate models of our Universe is one of the great quests of humankind, going back to the ancient Greeks, through to Copernicus, Galileo, and Einstein. In recent years, huge progress has been made in developing physical models that connect the early conditions we see shortly after the Big Bang with the present-day population of galaxies, clusters, and intergalactic gas that we see around us today. Owing to the complex and highly nonlinear nature of gravitational collapse and radiative processes, supercomputer simulations play an increasingly critical role in developing models to explain why the Universe looks the way it does.

The Simba simulations are a next-generation suite of galaxy formation models, run on Cosma-7 at Durham. Simba includes unique physical modules that address some of the most pressing questions in modern galaxy formation theory, such as the growth of supermassive black holes and their impact on their host galaxy, and the formation and evolution of dust grains within a cosmological context. Simba predictions are in remarkably good agreement with a wide range of galaxy demographic properties, and have elucidated new ways in which galaxies and black holes interact with their surrounding cosmic ecosystems to shape the observable properties of both. Simba represents our most comprehensive and faithful view yet of the so-called baryon cycle that drives the formation and evolution of galaxies like our own Milky Way over cosmic time.

Simba has had a major impact on the community since its release in July 2019, with 18 publications as of June 2020 based primarily on Simba data, and over 30 active projects currently listed on the Simba pbworks page. The first annual Simba team meeting was held in February 2020, with over 30 attendees representing 17 institutions from 5 countries, demonstrating the global impact of Simba. Simba is gearing up for a public data release in the Fall, which will further widen the user base and establish it among the pre-eminent cosmological galaxy formation simulations today.


Understanding the Milky Way’s metallicity distribution

An important aspect of understanding the formation of the bulges of galaxies, including that of the Milky Way, is understanding their metallicity trends. The most notable feature of the metallicity distribution is that when the bulge is viewed edge-on, it appears more vertically pinched than the density. In the Milky Way this manifests as an X-shape when viewed from the Solar perspective. Simulations that include gas, star formation and the chemistry of stars have managed to reproduce the pinched appearance of the metallicity (Debattista et al. 2017). However, these simulations are expensive and do not permit an easy, detailed matching to the Milky Way. In the past this has forced researchers to paint metallicity onto particles in N-body simulations (i.e. without any gas or star formation). This type of painting assigns a metallicity based on the position of a star (both its distance from the galactic centre and its height above the mid-plane). However, this is equivalent to assuming that all stars at a particular position are born at the same time with the same metallicity, which is highly implausible. Using a suite of N-body simulations, we have shown that where a star ends up in a bulge depends strongly on its initial dynamical actions, which measure its angular momentum as well as its radial and vertical random motion. This allowed us to develop a novel technique for painting metallicity onto star particles using the dynamical actions. We showed that this technique produces metallicity maps comparable to those of star-forming simulations. This more realistic metallicity assignment allows us to model the formation of the bulge of the Milky Way much more rapidly. The figure shows an example of such a metallicity mapping: the metallicity distribution (colours and black contours) is more pinched along the minor axis than is the density (white contours), as observed in the Milky Way and in external galaxies.
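The contrast between position-based and action-based painting can be sketched as follows. The functional form and coefficients below are purely hypothetical placeholders, chosen only to illustrate the idea of assigning metallicity from a star particle's initial actions rather than its present position:

```python
import math

def paint_metallicity(j_r, j_z, feh_max=0.3, c_r=0.4, c_z=0.8):
    """Assign [Fe/H] to a star particle from its initial radial (j_r) and
    vertical (j_z) dynamical actions, in arbitrary units. Dynamically colder
    stars (smaller actions) are assumed to be more metal-rich, mirroring the
    trend that metal-rich populations are born on colder orbits.
    All coefficients are illustrative, not fitted values."""
    return feh_max - c_r * math.log1p(j_r) - c_z * math.log1p(j_z)

# A dynamically cold star is painted metal-rich, a dynamically hot one
# metal-poor, regardless of where each ends up in the bulge today:
feh_cold = paint_metallicity(0.05, 0.01)
feh_hot = paint_metallicity(2.0, 1.0)
```

Because the actions are (approximately) conserved during the slow growth of the bar and bulge, this assignment travels with the star, whereas a position-based painting is scrambled as soon as orbits rearrange.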

Moreover, using the actions to understand bulge formation has led us to a number of important new insights. We showed, for instance, that the vertical gradient of the vertical action is largely erased by the formation of the bulge, whereas the very weak gradient of the radial action is vastly enhanced during the same time. This means that the vertical gradients observed in galaxies are not the result of the superposition of a metal-rich thin disc and a metal-poor thick disc. Another, quite surprising, result of our use of dynamical actions is that there are populations that vertically cool during bulge formation. We are investigating the properties of these populations and hope to be able to use them to constrain what the Milky Way looked like before the bulge formed.


Supermassive black hole jets stir old cosmic puzzles

Galaxy clusters are the most massive gravitationally bound systems and encode unique information on the composition of our Universe. They comprise a massive dark matter halo and up to a few thousand rapidly moving galaxies enveloped by a very hot X-ray emitting plasma, known as the intracluster medium (ICM). Of particular importance for regulating the heating and cooling of the ICM, the decades-long “cooling flow” problem, are the relativistic jets produced by supermassive black holes with masses in excess of a billion solar masses. Notwithstanding massive theoretical and observational effort, a major puzzle left to crack is how the energy from these powerful jets is transferred to the ICM to stop the catastrophic cooling.


Figure 1. Panel A shows a volume-rendered image of galaxy cluster and panel B shows the volume-rendered jet material as well as the gas velocity field (arrow vectors) in the cluster center. Panel C shows a cold disc structure which surrounds the SMBH. Finally, panels D and E show a 2D reconstruction of the Voronoi grid used and a velocity streamline map of the lower-right lobe-ICM interface, respectively. Credit: Bourne et al. 2020, MNRAS.

Using the moving-mesh code AREPO, we performed state-of-the-art simulations with the highest-resolution jets to date within a large-scale, fully cosmological cluster environment to shed light on this issue. The simulations make use of a novel refinement technique (see Figure 1) that allows high resolution not only close to the central black hole but also within the jet lobes themselves. Mock X-ray observations of the simulated cluster reveal the so-called “X-ray cavities” and “X-ray bright rims” generated by the supermassive black hole-driven jet, which is itself distorted by motions in the cluster; these features remarkably resemble those found in observations of real galaxy clusters (see Figure 2). While it is well accepted that the vast amount of energy injected into the jet lobes (enough to meet the Earth’s power usage for more than 10³¹ years) is sufficient to offset the cooling losses within the ICM and prevent the formation of a cooling flow, how this energy is effectively and isotropically communicated to the ICM is still an open debate. Our simulations showed that the ICM motions, or “weather”, are crucial to solving this problem. Through the action of the cluster weather, the jet lobes can be significantly moved and deformed, stirring them around the cluster, which ultimately leads to jet-lobe disruption and effective energy transfer to the ICM. This could be the long-sought solution to the “cooling flow” puzzle.


Figure 2. This shows a mock observation made from the simulation on the right and an actual observation of the galaxy cluster MS 0735.6+7421 on the left. Both images show cavities excavated by the lobe inflation, surrounded by X-ray bright rims of dense gas (blue), which are filled by distorted jet material (pink). Credit: Hubble and Chandra Image: NASA, ESA, CXC, STScI, and B. McNamara (University of Waterloo); Very Large Array Telescope Image: NRAO, and L. Birzan and team (Ohio University). Simulated Cluster Image credit: Hubble and Chandra Image (background): NASA, ESA, and B. McNamara (University of Waterloo); Simulated Observation Data: M. A. Bourne (University of Cambridge).

This work has been published in the Monthly Notices of the Royal Astronomical Society: “AGN jet feedback on a moving mesh: lobe energetics and X-ray properties in a realistic cluster environment” by Martin A. Bourne, Debora Sijacki and Ewald Puchwein. Royal Astronomical Society press release: “Stormy cluster weather could unleash black hole power and explain lack of cosmic cooling”, 14 October, 2019.


The effects of ionising star formation on clouds in galactic spiral arms

The formation and life-cycle of giant molecular clouds is a crucial part of understanding both the evolution of galaxies and the varying environments in which the star formation process occurs. The origins of the physical phenomena that govern giant molecular clouds and the star formation within them include external effects caused by the galactic environment (such as galactic potentials and external radiation fields) as well as more local effects such as stellar feedback and turbulence. Recently, we performed simulations of a galactic spiral arm consisting of complexes of giant molecular clouds that included ionising radiation from young stars. Unlike previous work in this area, which typically simulated isolated clouds, our models include a number of clouds along a spiral arm, and thus ionisation fronts propagate in a much more realistic environment. The initial conditions were achieved by extracting a 500 pc² section of spiral arm from a galaxy-scale simulation by Dobbs & Pringle (2013) and increasing the resolution.

Figure 1: The two left panels show gas density in the galactic plane after 3.3 Myr – the first with no stellar feedback, the second including photoionisation. The ionised gas fraction is shown by the contours. The dots represent sites of star formation – red if a massive ionising star is present and black otherwise. The panels on the right are equivalent but show the simulation after 4.24 Myr. Comparison between 3.3 and 4.24 Myr shows that when dense pockets are compressed by HII regions from multiple sides, triggered star formation often occurs. Gas that only has an HII region on one side may or may not undergo modest triggered star formation.

Using the simulations, we examine the effect of photoionising feedback from the most massive stars forming within the clouds. Lyman continuum photons emitted by these massive stars ionise the surrounding gas, creating bubbles – HII regions – which are bounded by expanding shocks. Photoionisation has the highest energy budget of the early-onset stellar feedback mechanisms. Comparison of simulations with and without photoionisation shows that the timing and location at which stars form are significantly affected (Fig. 1). While the final fraction of gas converted to stars remains relatively unchanged, the time taken is reduced by a factor of nearly 2. This is because star formation is triggered in nearby dense gas compressed by shocks from the surrounding ionised regions. The result is an acceleration of the star formation rate and a much wider distribution of star formation across the region.
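The growth of these shock-bounded bubbles can be estimated analytically with the classic Spitzer expansion law for a D-type ionisation front (a textbook estimate, not taken from the simulations; the Strömgren radius and sound speed below are assumed round numbers):

```python
def hii_radius_pc(t_myr, r_stromgren_pc=1.0, c_ii_kms=10.0):
    """Spitzer (1978) expansion law for an HII region around a massive star:
    R(t) = R_s * (1 + 7 c_i t / (4 R_s))**(4/7), where R_s is the initial
    Stromgren radius and c_i ~ 10 km/s is the sound speed of the ~10^4 K
    ionised gas. Illustrative numbers only."""
    c_pc_per_myr = c_ii_kms * 1.023  # 1 km/s ~ 1.023 pc/Myr
    return r_stromgren_pc * (1.0 + 7.0 * c_pc_per_myr * t_myr
                             / (4.0 * r_stromgren_pc)) ** (4.0 / 7.0)

# Bubble radius at the two snapshot times shown in Fig. 1:
radii = [hii_radius_pc(t) for t in (0.0, 3.3, 4.24)]
```

Even this simple estimate shows the shocked shells sweeping across tens of parsecs within a few Myr, which is why neighbouring clouds along the arm feel each other's HII regions.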


How stars form in the smallest galaxies

It has been a long-standing puzzle how the smallest galaxies in the Universe have managed to continuously form stars at such a remarkably low rate – tiny galaxies like Leo P (pictured above) form approximately one star every 100,000 years! Making use of the DiRAC DIaL supercomputer, our research team has established that such low levels of star formation arise because small dwarfs can stay quiescent for several billions of years, slowly accumulating gas in their centres. When this gas eventually collapses, star formation resumes at rates consistent with observations.

Figure 1: Despite its extremely low mass, Leo P contains a significant number of blue stars. Our simulated galaxies also present these tell-tale signs of recent star formation activity. Image credit: K. McQuinn and Hubble Space Telescope.

Through high-resolution computer simulations, we demonstrate that star formation in the very smallest dwarf galaxies is shut down in the early Universe as a result of heating and ionisation from the strong light of new-born stars, exploding stars and stellar winds. However, we find that this quiescent state can be temporary. Some of our simulated dwarf galaxies are able to slowly accumulate fresh fuel in the form of hydrogen gas, over many billions of years. Once a sufficient amount of gas has piled up to overcome any residual heating, star formation re-ignites and proceeds continuously until today. This scenario explains, for the first time, the puzzlingly low star formation rates observed in faint dwarf galaxies such as Leo P, Leo T, and others. The computational power provided by DiRAC DIaL was crucial to establish this result, allowing us to afford detailed modelling of the star formation and heating processes in these tiny galaxies, over the full lifetime of the Universe.

By deepening our understanding of dwarf galaxies, we gain new insights into how astrophysical processes, such as stellar explosions and the heating and cooling of cosmic gas, regulate galaxy formation. This work also opens the door to harnessing these galaxies as a new test of the “Big Bang” cosmological model. Further work is currently underway to predict how many such star-forming dwarfs exist in our Universe and how many will be discovered by the next generation of astronomical telescopes.


A steeply-inclined trajectory for the Chicxulub impact

Figure 1. The development of the Chicxulub crater. Frames from the simulation of a 17-km diameter asteroid striking Earth at a speed of 12 km/s.

Simulations performed on the DiRAC High Performance Computing Facility have revealed the asteroid that doomed the dinosaurs struck Earth at the ‘deadliest possible’ angle.

The simulations reveal that the asteroid hit Earth at an angle of about 60 degrees, leaving a signature of its trajectory in the rocks beneath the Chicxulub Crater in Mexico.

Such a strike would have maximised the amount of climate-changing gases thrust into the upper atmosphere, including billions of tonnes of sulphur that blocked the Sun and triggered the K-Pg mass extinction. It is thought that up to 75% of plant and animal species on Earth became extinct, including the dinosaurs.

The new simulations are the first ever fully 3D numerical calculations to reproduce the whole impact event, from the moment of impact to the moment the final Chicxulub crater was formed. Results of the simulations were compared with geophysical data from the site of the impact crater to diagnose the direction and angle of impact.

The results from these simulations were published in Nature Communications (Collins et al., 2020), and attracted widespread international media attention [2, 3]. The article was also featured on the STFC website [4].


Identifying and Quantifying the Role of Magnetic Reconnection in Space Plasma Turbulence

Plasma is the fourth state of matter after the solid, liquid, and gaseous states. Almost all of the visible matter in the Universe is in the plasma state. This includes all the stars, the material between the stars, the intergalactic medium, and also the gas in the Earth’s cosmic neighbourhood: the solar wind and the geospace environment in Earth’s magnetosphere. Very often, these plasmas have a low viscosity. Therefore, we expect that these plasmas are highly turbulent. The goal of our project is to understand the role of a process called “magnetic reconnection” in the omnipresent plasma turbulence in space.

Magnetic reconnection is a multiscale plasma phenomenon in which the magnetic field re-organises and transfers energy into the plasma particles. It occurs in laboratory plasmas, such as fusion or confinement experiments, and in astrophysical plasmas, such as the solar wind, coronal mass ejections, solar flares, explosive events in planetary magnetospheres, accretion discs, and star-formation regions. Although reconnection has been studied for over 60 years, there is still no consensus about a complete theory to describe magnetic reconnection at all scales involved.

Figure 1. Volume rendering of the electric current density J in one of our DiRAC simulations. The turbulence forms elongated structures, which are called “magnetic flux ropes”. The flux ropes are unstable and magnetic reconnection results from their non-linear interaction. Figure from Agudelo Rueda et al. (2020, in preparation).

Current theories and numerical simulations suggest that turbulence and magnetic reconnection are closely related. Turbulence transfers energy across the scales of the plasma. Energy is injected at large spatial scales and then transported to the very small scales where it dissipates as heat. This transfer is known as the turbulent energy cascade. It is proposed that the ultimate dissipation of energy at small scales strongly depends on a link between turbulence and reconnection.

We study this link between turbulence and reconnection with large computer simulations of plasma turbulence. The computational power provided by high-performance computing facilities at DiRAC enables us to study plasma processes from first principles using three-dimensional cutting-edge particle-in-cell (PIC) simulations. This type of simulation resolves the smallest plasma scales and accounts for phenomena that only reveal themselves in kinetic theory. Even with the enormous power of DiRAC, full PIC simulations are unable to cover the whole range of scales involved in plasma turbulence and reconnection in a real plasma, since these simulations are very expensive in terms of computing memory and time. With DiRAC’s capabilities, however, we have taken a big step forward to understanding turbulence and magnetic reconnection in plasmas.

Figure 2. Side view of our three-dimensional simulation box. The different colours represent various markers for magnetic reconnection. We use these markers to identify reconnection regions and to analyse the details of magnetic reconnection in plasma turbulence. Figure from Agudelo Rueda et al. (2020, in preparation).

We initialise our computations through the collision of counter-propagating plasma waves in our simulation box. These waves interact with each other and create turbulence, which is consistent with the observed turbulence in the solar wind. We find that, after some time, the turbulence itself generates current sheets and forms regions of magnetic reconnection. We define a set of criteria to identify where in the simulated plasma reconnection occurs. With these criteria, we find one extensive reconnection site in our simulation and study its properties in great detail.

One huge advantage of our simulations is that we can fly an artificial spacecraft through our simulation box. This spacecraft records data in the same way as a real spacecraft would record data in space plasmas like the solar wind. The measurements from our artificial spacecraft will serve as predictions for measurements of reconnection in the turbulent solar wind by ESA’s and NASA’s latest solar-system spacecraft, Solar Orbiter and Parker Solar Probe, which have the appropriate instruments on board to test our predictions.


Extreme Gravity and Gravitational Waves

When two black holes inspiral and merge, they emit gravitational waves that have famously been detected for the first time by LIGO in 2015 [1]. These gravitational waves carry energy as well as momentum away from the black-hole binary system. The energy loss is responsible for the inspiral and eventual merger. The linear momentum radiated away, on the other hand, can impart a recoil on the black-hole system, just like the firing of a bullet imparts a recoil on the shooter. This effect follows from conservation of momentum; for black holes, the gravitational waves carry net momentum in some direction, so the black-hole emitter has to respond by moving accordingly in the opposite direction.
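The size of the recoil follows directly from this momentum balance: if the waves carry off net linear momentum P_rad, the remnant of mass M recoils at v = P_rad / M. As a rough illustration (the radiated-momentum fraction below is a made-up number, not a result from the text), working in units where momentum is quoted as a fraction of Mc:

```python
# Minimal momentum-balance estimate. Geometric-style units: the radiated
# momentum is expressed as a fraction of M*c, so the recoil speed is just
# that fraction times the speed of light.
C = 299_792.458  # speed of light in km/s

def kick_velocity(p_rad_over_mc: float) -> float:
    """Recoil speed in km/s for net radiated momentum p_rad_over_mc,
    given as a fraction of M*c (illustrative, not a fitted value)."""
    return p_rad_over_mc * C

# Radiating ~1.2% of Mc in one direction already gives a superkick-scale
# recoil of several thousand km/s:
print(round(kick_velocity(0.0123)))
```

Even a percent-level asymmetry in the emitted momentum therefore suffices for astrophysically dramatic kicks.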

Figure 1. Snapshot of the gravitational-wave signal generated by an eccentric black-hole binary in the so-called superkick configuration where the black-hole spins point inside the orbital plane but in opposite directions. This snapshot shows the cross section of the wave signal around the time of the black-hole merger inside the plane perpendicular to the orbital plane. Note that the waves are stronger in the upper half than the lower half. This asymmetry corresponds to a net emission of linear momentum and a corresponding recoil of the black hole resulting from the merger. This recoil or “superkick” is illustrated in the three panels on the right which display a zoom-in of the central region at different times: the time Tm around merger (as in the large left panel) as well as 60 and 120 time units later (measured in units of the black-hole mass M). The (nearly circular) black line in these panels represents the black-hole horizon. Note that the black hole moves downwards as time progresses; this motion is the kick effect.

One of the most dramatic results of numerical relativity has been the discovery of “superkicks”, very large recoil velocities of up to 3,700 km/s that occur when the black holes start their inspiral with specific spin directions [2,3]. Such large kicks are sufficient to eject black holes from even the most massive host galaxies; the escape velocities from giant elliptical galaxies are about 1,000 km/s. This has sparked a lot of interest among astrophysical observers in looking for ejected black holes, because this would have major implications for the formation history of supermassive black holes. Several candidates have been found but alternative explanations (without resorting to kicks) cannot be ruled out; see [4] for a review.

As part of our DiRAC project, we have found that even larger recoil velocities, up to 4,300 km/s, are realized when the black holes inspiral on moderately eccentric (rather than quasi-circular) orbits [5]. In general, moderate eccentricity can amplify the recoil effect by up to 25%, with possible effects on the retention rate of black holes in globular clusters, second-generation populations of black-hole merger events, as well as the black-hole occupation fraction of galaxies.

  • [1] B. P. Abbott et al, Phys.Rev.Lett. 116 (2016) 061102, arXiv:1602.03837 [gr-qc]
  • [2] J. A. Gonzalez et al, Phys.Rev.Lett. 98 (2007) 231101, gr-qc/0702052
  • [3] M. Campanelli et al, Astrophys.J. 659 (2007) L5-L8, gr-qc/0701164
  • [4] S. Komossa, Adv.Astron. 2012, 364073, arXiv:1202.1977 [astro-ph]
  • [5] U. Sperhake et al, Phys.Rev.D 101 (2020) 024044 arXiv:1910.01598 [gr-qc]

Spontaneous Symmetry Breaking in the 3d Thirring Model

The Thirring Model describes relativistic fermions moving in a two-dimensional plane and interacting via a contact term between covariantly conserved currents. The physical system it most resembles is that of low-energy electronic excitations in graphene. For free electrons at half-filling on a honeycomb lattice, conduction and valence bands form cones just touching at their vertices at two “Dirac points” lying within the first Brillouin zone. Since the density of states vanishes, and the effective fine structure constant is boosted by a factor c/vF ≈ 300, where the pitch of the cones vF is the Fermi velocity, the resulting physics is described by a strongly-interacting relativistic quantum field theory, with equal likelihood of exciting electrons or holes.

Besides possible applications in layered condensed matter systems, the Thirring model is an interesting theory in its own right, and possibly the simplest theory of fermions requiring a computational solution. A major question is whether or not for sufficiently strong self-interaction a bilinear condensate <ψψ> forms, restructuring the ground state and causing a gap to form between the conduction and valence bands, resulting in a phase transition from semimetal to insulator – this process is precisely analogous to spontaneous chiral symmetry breaking in QCD. It is anticipated that this can occur if the number of fermion species N lies below some critical Nc: determination of Nc requires non-perturbative quantum field theory.

Figure 1.

In lattice field theory, the problem seems to require a precise rendering of the correct U(2N) fermion global symmetries. We have been using the Domain Wall Fermion formulation, in which U(2N) Ginsparg-Wilson symmetries are recovered in the limit that the domain wall separation Ls → ∞. Simulations have been performed using the RHMC algorithm on 12³ and 16³ systems with Ls ranging from 8 to 48. It turns out that at strong couplings g² and light fermion masses m recovery of GW symmetry is slow – nonetheless at weak-to-moderate couplings our results for <ψψ> as a function of g², m obtained at Ls = 48 are compatible with an extrapolation to the large-Ls limit based on the entire dataset. They are well-fitted by an equation of state which assumes a continuous symmetry-breaking transition with non-mean-field critical exponents at a critical 1/g² ≈ 0.28 (see figure). The fit is less convincing at the strongest couplings (lying within the broken phase), which we attribute to the Ls → ∞ limit not yet being reached.
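Schematically, an equation of state of this kind takes the form m = A·t·σ + B·σ^δ, with t = 1/g² − 1/g_c² and σ = <ψψ>. The sketch below (placeholder amplitudes and exponent, not our fitted values) inverts it numerically for the condensate and shows the expected chiral-limit behaviour: σ → 0 on the weak-coupling side and σ ≠ 0 in the broken phase.

```python
# Toy equation of state:  m = A*t*sigma + B*sigma**delta,
# with t = 1/g^2 - 1/g_c^2.  A, B, delta below are illustrative only.
def condensate(inv_g2, m, A=1.0, B=1.0, delta=3.0, inv_gc2=0.28):
    """Solve the equation of state for sigma = <psi-bar psi> by bisection;
    the physical root is bracketed by [0, 10] since f(0) <= 0 < f(10)."""
    t = inv_g2 - inv_gc2
    f = lambda s: A * t * s + B * s**delta - m
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Near the chiral limit (m -> 0) the condensate vanishes at weak coupling
# (1/g^2 > 1/g_c^2) but stays finite in the broken phase:
print(condensate(0.40, 1e-12) < 1e-3, condensate(0.20, 1e-12) > 0.1)  # True True
```

At the critical coupling itself (t = 0) the same routine reproduces the pure power-law behaviour σ = (m/B)^(1/δ).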

A major outcome of the project to date is the confirmation that the critical number of flavors Nc for symmetry breaking in the Thirring model satisfies Nc > 1. This is a significant step towards understanding the formulation of fermion quantum fields in strongly-interacting theories.


The universal structure of dark matter haloes over a mass range of 10²⁰

Cosmological models in which the dark matter consists of cold elementary particles predict that the population of dark matter haloes in the Universe should extend to masses comparable to the Earth’s, many orders of magnitude below those where galaxies can form. Using a novel multi-zoom technique, we carried out a consistent cosmological simulation of the formation of present-day haloes over the full mass range populated when dark matter is assumed to be a Weakly Interacting Massive Particle (WIMP) of mass ~100 GeV. The simulation has a dynamic range of 30 orders of magnitude in mass and resolves the internal structure of hundreds of Earth-mass haloes in as much detail as that of hundreds of rich galaxy clusters. We find that halo density profiles are universal over the entire mass range and are well described by simple two-parameter fitting formulae, such as the well-known Navarro-Frenk-White profile. Halo mass and concentration are tightly related in a way that depends on cosmology and on the nature of the dark matter. At fixed mass, concentration is independent of local environment for haloes less massive than those of typical galaxies. Haloes over the mass range (10³–10¹¹) M⊙ contribute about equally (per logarithmic interval) to the dark matter annihilation luminosity, which we find to be smaller than all previous estimates by factors ranging up to one thousand. This was a hugely challenging project that took seven years to complete, during which we uncovered and solved a number of technical problems in N-body simulations revealed only in the extreme conditions of our calculation. It was carried out on DiRAC’s Cosmology Machine (COSMA) at Durham.
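For reference, the Navarro-Frenk-White profile mentioned above, and the enclosed-mass formula that follows from it, can be written down in a few lines (the parameter values here are placeholders, not fitted to our haloes):

```python
import numpy as np

# NFW profile: rho(r) = rho_s / [(r/r_s) * (1 + r/r_s)^2], a two-parameter
# fit (density normalisation rho_s, scale radius r_s).
def nfw_density(r, rho_s, r_s):
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def nfw_mass(r, rho_s, r_s):
    """Analytic enclosed mass: 4*pi*rho_s*r_s^3 * [ln(1+x) - x/(1+x)]."""
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log1p(x) - x / (1.0 + x))

# The concentration c = r_200 / r_s ties the two fit parameters to the halo
# mass; e.g. for c = 10, the fraction of M_200 enclosed within r_s is:
c = 10.0
frac = nfw_mass(1.0, 1.0, 1.0) / nfw_mass(c, 1.0, 1.0)
print(round(frac, 3))
```

The tight mass-concentration relation reported above means that, in practice, a single parameter largely fixes the whole profile at a given cosmology.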

Figure 1. Projected dark matter density maps at three simulation levels: base level – cube side L = 740 Mpc; dark matter particle mass, mp = 1.6×10⁹ M⊙; top inset – L = 170 kpc, mp = 5.5×10⁻³ M⊙; bottom inset – L = 170 pc, mp = 1.6×10⁻¹¹ M⊙. In the base panel the largest haloes have a mass similar to that of a rich galaxy cluster, whereas in the bottom subpanel the smallest clearly visible haloes have a mass comparable to that of the Earth. The largest haloes at all levels are resolved with at least 10⁵ particles.

References:

  • Jie Wang, Sownak Bose, Carlos Frenk, Adrian Jenkins, Liang Gao, Volker Springel, and Simon White, to appear in Nature.

How primordial magnetic fields shrink galaxies

Little is known about how primordial magnetic fields are generated or about their initial configurations. Their exact primordial normalization, coherence length and spectral index are undetermined, but they are widely believed to be many orders of magnitude lower in strength (3×10⁻²⁰ G < B0 < 3×10⁻¹⁰ G) than their microGauss values in present-day galaxies. What is known is that magnetic fields play as large a role in galaxies as turbulence, thermal and cosmic ray energy densities, and that they therefore have a crucial role to play in shaping a galaxy’s evolution. To understand how different configurations of primordial magnetic fields might affect the global morphological and dynamical properties of galaxies, we used DiRAC to perform a suite of high-resolution constrained transport magneto-hydrodynamic cosmological zoom simulations in which we varied the initial magnetic field strength and configuration along with the prescription for stellar feedback. In Martin-Alvarez et al, 2020, MNRAS, 495, 4475 we report our findings that strong primordial magnetic fields delay the onset of star formation and drain the rotational support of the galaxy, diminishing the radial size of the galactic disk and driving a higher amount of gas towards the centre. A possible mechanism behind such a reduction in angular momentum is magnetic braking. Figure 1 shows the effect of increasing primordial magnetic field strength on the stars and gas of a galaxy from our simulations. For the strongest primordial magnetic field studied (B0 = 3×10⁻¹⁰ G), the gas radial scale length is halved compared to the simulation where the primordial magnetic field is weakest. New Scientist magazine reported on our work: https://www.newscientist.com/article/2244840-the-milky-way-may-have-been-shrunk-down-by-ancient-magnetic-fields/

Figure 1: (Top) Having run the same cosmological Adaptive Mesh Refinement magneto-hydrodynamical zoom simulation with different primordial magnetic field strengths, in the top panel we show how the stars in our simulated high redshift (z = 2) “NUT” galaxy would appear to the James Webb Space Telescope. The strength of the initial magnetic field B0 increases by ten orders of magnitude from B0 = 3×10⁻²⁰ G to B0 = 3×10⁻¹⁰ G. Each panel is centred on the galaxy. On both the mock images and the gas density projections, we include golden circles of radius 3 physical kpc to aid the visual comparison of their dimensions (see also the scale bar on the Figure). (Bottom) Gas density projections ρg of the galaxy in a few selected simulations. For the strongest primordial magnetic field studied (B0 = 3×10⁻¹⁰ G), the gas radial scale length is halved compared to the simulation where the primordial magnetic field is weakest (B0 = 3×10⁻²⁰ G). Figure adapted from Martin Alvarez et al. 2020.

Stellar Hydrodynamics, Evolution and Nucleosynthesis

The stellar hydrodynamic group of Prof Raphael Hirschi at Keele University and their international collaborators computed a series of 3D hydrodynamical simulations of stellar convection during carbon burning (see figure 1). These 3D implicit large eddy simulations were computed with the PROMPI code on the COSMA supercomputer in Durham. The simulations were analysed on-the-fly with PROMPI’s built-in mean-field analysis based on the Reynolds-averaged Navier–Stokes framework (RA-ILES, see Arnett et al 2019 and references therein). Both the vertical convective velocities within the convective region and the bulk Richardson number of the boundaries were found to scale with the driving luminosity as expected from turbulence theory: v ∝ L^(1/3) and RiB ∝ L^(−2/3), respectively. The positions of the convective boundaries were estimated through the composition profiles across them, and the strength of convective boundary mixing was determined by analysing the boundaries within the framework of the entrainment law. The entrainment was found to be approximately inversely proportional to the bulk Richardson number, RiB (∝ RiB^(−a), with a ≈ 0.75). Although the entrainment law does not encompass all the processes occurring at boundaries, the results support the use of the entrainment law to describe convective boundary mixing in 1D models, and new 1D models including entrainment show promising results. These large-scale simulations also inform the team’s ongoing theoretical efforts to overcome the shortcomings of the widely used but outdated mixing-length theory (Arnett et al 2019).
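Power-law scalings like these are recovered from a simulation series by straight-line fits in log-log space. The sketch below does exactly that with synthetic data generated at the expected exponents (the amplitudes and scatter are made up for illustration):

```python
import numpy as np

# Synthetic (L, v) and (L, Ri_B) series at four luminosity boost factors,
# generated with the theoretical exponents plus a little scatter, then
# fitted for the slope in log-log space.
rng = np.random.default_rng(1)
L = np.array([1e3, 3e3, 1e4, 3.3e4])                     # boost factors
v = 2.0 * L ** (1.0 / 3.0) * (1 + 0.02 * rng.standard_normal(4))
Ri = 50.0 * L ** (-2.0 / 3.0) * (1 + 0.02 * rng.standard_normal(4))

slope_v = np.polyfit(np.log(L), np.log(v), 1)[0]         # expect ~ +1/3
slope_Ri = np.polyfit(np.log(L), np.log(Ri), 1)[0]       # expect ~ -2/3
print(round(slope_v, 2), round(slope_Ri, 2))
```

The same log-log fit applied to the measured entrainment rate versus RiB yields the exponent a ≈ 0.75 quoted above.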

Figure 1. From left to right: simulations of a convective carbon burning shell (box-in-a-star set-up) with increasing energy generation rate boosting (eps boosting factor from 1000 to 33,000). The increase in the energy generation rate leads to stronger turbulence and faster growth of the convective region (fig. 6 in Cristini et al 2019).

The first observational evidence that planets migrate?

The fact that young planets migrate as a result of interaction with their natal disc is well established theoretically and is widely applied when explaining the orbital properties of exoplanets. To date however the very long timescales for planetary migration have meant that there is no direct evidence for this. Moreover theoretical estimates for the rate of planetary migration are rather uncertain.

Figure 1. Left: ALMA image of Elias 24; Right: FARGO3D calculation by Richard Booth illustrating the dependence of emission profile on planet migration rate. Clearly a migrating planet better explains the depth of the emission gap.

We have proposed the first potential observational test of planetary migration, pointing out, on the basis of our hydrodynamical simulations of dust and gas in discs, that a migrating planet has a different signature from a stationary planet in terms of the way that it sculpts the disc dust. We have modelled the case of the young disc Elias 24, where the Atacama Large Millimetre Array (ALMA) has demonstrated a pronounced gap (presumed to contain a planet) and a bright emission ring outside. We have shown that the dust emission profile of the disc is better fit by models where the planet migrates. In order to test this hypothesis further we are seeking to acquire data at a range of wavelengths that are sensitive to a range of dust grain sizes. We predict that smaller grains (which are probed by shorter wavelength observations) should be preferentially concentrated in a ring interior to the planet whereas larger grains should instead congregate outside the planet; this therefore changes the emissivity profile at different wavelengths. The predicted profiles depend on how fast dust grains drift inwards relative to how fast the planet is migrating. Thus multi-wavelength observations provide a way to estimate the planet’s migration rate.
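The grain-size dependence rests on standard radial-drift theory: the inward drift speed of a dust grain peaks at Stokes number St ≈ 1 and falls off for smaller and larger grains, so grains on either side of the peak respond differently to a migrating planet. A minimal sketch of this dependence (illustrative disc parameters, not fitted to Elias 24):

```python
# Standard dust radial drift speed in a gas disc (Keplerian-speed units):
#   v_drift = -2 * eta * v_K * St / (1 + St**2),
# where eta measures the gas pressure support and St is the Stokes number
# (a proxy for grain size). The values of eta and v_K are illustrative.
def drift_speed(St, eta=2e-3, v_K=1.0):
    return -2.0 * eta * v_K * St / (1.0 + St**2)

# The drift rate peaks at St = 1; both much smaller and much larger grains
# drift far more slowly:
small, optimal, large = drift_speed(0.01), drift_speed(1.0), drift_speed(100.0)
print(abs(small) < abs(optimal) and abs(large) < abs(optimal))  # True
```

Comparing each grain population's drift speed with the planet's migration speed determines whether that population collects inside or outside the planet's orbit, which is the basis of the proposed multi-wavelength test.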


Computational spectroscopy of exoplanets

Since the first discovery of a planet orbiting a distant star 25 years ago (Nobel Prize in Physics 2019), over 4000 exoplanets have been discovered. Now we want to know what these exoplanets are made of, what the weather is like there, and whether they are habitable. These questions can be answered via characterisation of their atmospheres, which requires detailed models of the planetary atmosphere and large quantities of laboratory data on how possible atmospheric species absorb light.

Figure 1. Artist’s impression of the exoplanet K2-18b (ESA/Hubble, M. Kornmesser).

In 2019, a UCL team comprising Dr Angelos Tsiaras, Dr Ingo Waldmann, Prof Giovanna Tinetti, Prof Jonathan Tennyson, and Prof Sergey Yurchenko made the first detection of water in the atmosphere of an exoplanet, K2-18b, that orbits within its star’s habitable zone. This publication in Nature Astronomy, 3, 1086 (2019) was the highest-ranked astronomy paper of 2019, with some 4000 worldwide media reports.

An atmospheric retrieval requires significant input on the spectral properties of the individual atoms and molecules which comprise this atmosphere. The ExoMol group led by Profs. Jonathan Tennyson and Sergey Yurchenko has pioneered quantum mechanical-based theoretical procedures which allow them to compute the appropriate lists of absorption lines. The ExoMol database provides comprehensive and accurate opacities for the key molecules which are likely to exist in the atmospheres of exoplanets. ExoMol line lists are used by exoplanetary models world-wide to model the behaviour of molecules in the atmospheres of exoplanets. Because the atmospheres of most observable exoplanets are rather hot (often well over 1000 K), the molecular line lists become huge. Calculations on such big data can only be accomplished on modern HPC facilities, and DiRAC has been a reliable partner for ExoMol in providing these facilities since 2012.
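Concretely, each line-list entry yields a temperature-dependent absorption intensity via the standard expression used by ExoMol and HITRAN, built from the Einstein A coefficient, the Boltzmann population of the lower state and the partition function Q(T). The sketch below uses made-up molecular constants purely to show why hot atmospheres need so many more lines: a transition from an excited level, negligible at room temperature, strengthens by orders of magnitude at 1500 K.

```python
import math

# Standard line-intensity expression (cm / molecule), wavenumber units:
#   S(T) = g_u * A / (8*pi*c*nu^2) * exp(-c2*E_low/T) * (1 - exp(-c2*nu/T)) / Q(T)
C2 = 1.4387769        # second radiation constant hc/k_B in cm*K
C_CM = 2.99792458e10  # speed of light in cm/s

def line_intensity(g_u, A, nu, E_low, T, Q):
    """g_u: upper-state degeneracy; A: Einstein coefficient (1/s);
    nu: transition wavenumber (1/cm); E_low: lower-state energy (1/cm);
    Q: partition function at temperature T (K)."""
    return (g_u * A / (8.0 * math.pi * C_CM * nu**2)
            * math.exp(-C2 * E_low / T)
            * (1.0 - math.exp(-C2 * nu / T)) / Q)

# Illustrative "hot line" from a level 3000 cm^-1 above the ground state:
cold = line_intensity(g_u=5, A=10.0, nu=2000.0, E_low=3000.0, T=300.0, Q=100.0)
hot = line_intensity(g_u=5, A=10.0, nu=2000.0, E_low=3000.0, T=1500.0, Q=500.0)
print(hot > cold)  # True
```

Summing such intensities over billions of transitions, level by level, is what turns a hot line list into a big-data problem.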

The ExoMol database contains extensive line lists for 80 molecules and 190 isotopologues. Most of its 700 billion transitions have been produced using DiRAC resources, with 10 line lists resulting from the new thematic project “Spectroscopy of hot exoplanets”. One example is the line list for SiO2, consisting of over 32 billion transitions (Owens et al., MNRAS, 495, 2020). SiO2 is believed to be an important absorber in the atmospheres of hot “lava planets”, which orbit so close to their host star that their rocky surface must be molten. Until our calculations, there existed no laboratory data on the absorption properties of this key molecule.

Owing to the success of ExoMol and the demand for these data, the ERC has just funded a follow-up project, ExoMolHD, led by Prof Tennyson, on precision spectroscopic data for studies of exoplanets and other hot atmospheres, which will run from 2020 to 2025.


Exploring Jupiter’s Magnetic Field

The Juno spacecraft reached Jupiter in 2016 and has since been taking measurements of the gas giant’s magnetic and gravitational fields. Many interesting features on Jupiter’s surface, such as the intense isolated magnetic flux patch near the equator dubbed the “Great Blue Spot”, have been observed. Juno’s measurements also extend the Lowes spectrum (the magnetic energy per spherical harmonic degree at the planet’s surface) considerably further than older data allowed. Using our numerical model of Jupiter’s dynamo, we investigate the connection between the shape of the magnetic energy spectrum and the dynamo radius, the depth below the planetary surface at which dynamo action begins (Yue-Kin Tsang and Chris Jones, Earth Planet. Sci. Lett., vol. 530, 115879, Jan 2020).
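The Lowes spectrum itself is a simple function of the Gauss coefficients of the field: R_n = (n+1) (a/r)^(2n+4) Σ_m (g_nm² + h_nm²), where a is the planetary radius. A minimal sketch (with placeholder coefficients, not Juno or model values):

```python
# Lowes spectrum from Gauss coefficients g[n][m], h[n][m] (Schmidt
# semi-normalised, degrees n = 1..N). Coefficients here are placeholders.
def lowes_spectrum(g, h, r_over_a=1.0):
    """Return R_n for each degree n, evaluated at radius r:
    R_n = (n+1) * (a/r)**(2n+4) * sum_m (g_nm**2 + h_nm**2)."""
    R = []
    for n, (gn, hn) in enumerate(zip(g, h), start=1):
        power = sum(gi**2 for gi in gn) + sum(hi**2 for hi in hn)
        R.append((n + 1) * r_over_a ** (-(2 * n + 4)) * power)
    return R

# A pure axial dipole (g10 only) puts all the energy in degree n = 1:
g = [[1.0, 0.0], [0.0, 0.0, 0.0]]  # degrees 1 and 2
h = [[0.0, 0.0], [0.0, 0.0, 0.0]]
surface = lowes_spectrum(g, h, r_over_a=1.0)
print(surface)  # [2.0, 0.0]
```

Evaluating the same spectrum at smaller r boosts high degrees more strongly, which is why the spectrum flattens when continued down towards the depth where the field is generated; that flattening is the signature used to locate the dynamo radius.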

Figure 1. Jupiter’s surface radial magnetic field from our numerical model, showing a broadly dipolar field with some intense flux spots

Our dynamo model solves the MHD equations describing the behaviour of an electrically conducting fluid in a spherical shell driven by convection. A key point is that the electrical conductivity increases sharply but smoothly with depth, so it is not known a priori where the dynamo radius is. The density of Jupiter varies greatly with depth, so an anelastic model is required to represent the convection accurately. Furthermore, the model parameters need to be in the strong dynamo regime where the magnetic field strongly affects the flow. Both features are essential for realistic Jupiter models, but they increase computational cost significantly, making the use of the DiRAC facilities essential for our work.

Figure 2. Radial magnetic field from the Juno mission (Connerney et al. 2018), showing the equatorial great blue spot

The numerical model produces a Jupiter-like, dipole-dominated magnetic field. We calculate the Lowes spectrum at the surface, as well as the magnetic energy spectrum at different depths. We find that within the region where the magnetic field is being generated, the magnetic energy spectrum is shallow and maintains more or less the same shape. Outside this dynamo region, the spectrum steepens. This transition enables us to identify the dynamo radius in the model. We can compare our numerical spectrum with the spectrum observed by Juno, which shows that the dynamo radius of Jupiter cannot be deeper than 0.83 Jupiter radii.

Although Juno has revealed the intense flux patches predicted by our simulations, such as the great blue spot, the observed field is nevertheless smoother than expected at the high magnetic Reynolds numbers inside Jupiter. One possible explanation is the existence of a stably stratified layer just under the upper non-conducting molecular layer due to “helium rain-out”, high pressure making the helium in the gas giant form droplets which fall under gravity. This stable layer might also help explain the fierce zonal winds seen on Jupiter. We are currently including a stably stratified layer in our models, to see if this can give a dynamically consistent picture fitting the Juno data.


‘Fast and furious’ planets around tiny stars

New astronomy research from the University of Central Lancashire suggests giant planets could form around small stars much faster than previously thought. Published in Astronomy & Astrophysics (https://www.aanda.org/10.1051/0004-6361/201936954), Dr Anthony Mercer and Dr Dimitris Stamatellos’ new research challenges our understanding of planet formation.

Figure 1. Computer simulation of planets forming in a protoplanetary disc around a red dwarf star.

Red dwarfs, the most common type of stars in our Galaxy, are small stars, 10% to 50% the size of our Sun. Despite their small mass, they are found to host giant planets up to 10 times bigger than Jupiter, the largest planet in our Solar System. The formation mechanism of these big planets remains an unsolved mystery. Giant planets around stars, like our Sun, are thought to have formed by the gradual build-up of dust particles to progressively bigger bodies. However, red dwarfs are tiny when compared to the Sun, and they do not seem to have enough material around them to form such big planets.

The research team used the UK Distributed Research using Advanced Computing (DiRAC) supercomputing facility to simulate the evolution of protoplanetary discs around red dwarf stars. Protoplanetary discs are rotating structures of dense gas and dust found around all newly-born stars. The researchers found that if these young discs are big enough they can fragment, i.e. break up into pieces, forming gas giant planets. This theory predicts that the formation of giant planets happens within a few thousand years, a timescale which is extremely fast in astrophysical terms. The researchers also found these planets are extremely hot when they form, with temperatures at their cores reaching thousands of degrees. Such hot planets would be relatively easy to observe when they are still young. They do not have an internal energy source, so they become fainter with time, and the window of opportunity to directly observe them is very small. Nevertheless, they can still be indirectly observed by their effect on their host star. Many planets like these have been discovered so far around small stars. Future observations of planets around very young red dwarf stars will test the predictions of this new theory.
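The fragmentation condition invoked here is conventionally expressed through the Toomre parameter Q = c_s·κ / (π·G·Σ): a disc is gravitationally unstable, and can break into pieces, roughly where Q < 1. The numbers below are illustrative, not taken from the paper:

```python
import math

# Toomre stability parameter for a gas disc: Q = c_s * kappa / (pi * G * Sigma),
# with sound speed c_s (m/s), epicyclic frequency kappa (1/s) and surface
# density Sigma (kg/m^2). SI units; the disc values are made up.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def toomre_q(c_s, kappa, sigma):
    return c_s * kappa / (math.pi * G * sigma)

# Cold, dense, slowly rotating outer disc around a red dwarf -> unstable:
q_outer = toomre_q(c_s=200.0, kappa=2e-9, sigma=2000.0)
# Warm, faster-rotating inner disc -> stable:
q_inner = toomre_q(c_s=800.0, kappa=2e-7, sigma=5000.0)
print(q_outer < 1.0 < q_inner)  # True
```

Because Q falls with distance from the star in a massive, cold disc, fragmentation sets in at large radii first, which is where the simulations form their gas giant planets.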

A high resolution animation can be downloaded here, and a related YouTube animation can be found here.


Monte Carlo radiative transfer for exotic thermonuclear supernovae

The last few years have seen many exciting developments in time-domain astronomy. In particular, thanks to modern surveys that monitor large areas of the night sky on a regular basis, there has been a huge increase in both the quantity and quality of observational data for a wide variety of astrophysical transients, including supernovae. Such data allow us to test our understanding of astrophysical phenomena in ways, and to a degree of accuracy, that were not previously possible. For example, while the basic picture that supernovae result from the explosive deaths of stars is well established, it has become clear that there is a great deal of variation within the supernova population. This applies even to classes of supernovae that have traditionally been considered rather homogeneous, such as the thermonuclear “Type Ia” supernovae (Taubenberger 2017).

Our work focuses on using computer simulations to make predictions for the radiation emitted by supernovae and related stellar explosions. By comparing our results to observations, we hope to test the underlying theoretical models and to help interpret new observational data. Thus, driven by observational advances, a major component of our work in recent years has been understanding how the observed diversity can be reconciled with particular explosion scenarios.

Figure 1. Comparison of our calculated synthetic spectrum for a new double-detonation model (black line) to the observed spectrum of the unusual supernova SN 2016jhr (red) which is clearly different from the spectrum of a ‘normal’ Type Ia supernova such as SN 2011fe (shown in orange). Note that our model provides a reasonable match to both the overall shape and to many of the discrete spectral features (figure adapted from Gronow et al. 2020).

In 2020 we published new results based on simulations of a particular class of white-dwarf explosion and showed that these can provide a plausible match to certain unusual supernovae. Specifically, our work explores the “double-detonation” model in which a white dwarf star, which is composed mainly of carbon and oxygen, is able to accumulate a surface layer of helium via interaction with a companion star in a binary system. If nuclear reactions can ignite in the helium layer, it has been suggested that this may lead first to detonation of the helium which, in turn, may trigger detonation of the underlying carbon-oxygen material. This “double-detonation” scenario predicts certain key signatures owing to the structure where nuclear ash from the carbon-oxygen core is overlaid by material that originated in the helium layer. Our simulations show (see Figure; details published in Gronow et al. 2020) that the theoretical spectra predicted from such a model give a good match to particularly unusual transients, such as SN 2016jhr (Jiang et al. 2017). Such comparisons support the conclusion that transients of this sort may indeed be explainable via this class of theoretical model – one piece of the puzzle of understanding the origins of supernovae. We are now developing further such simulations to explore more fully the range of properties that double-detonation models may be able to explain – it remains to be seen whether this class of model can really account for a large fraction of the observed Type Ia supernova population or whether a mix of scenarios must be invoked to account for the variations that are observed.

This work also used the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk).

References:

  • Taubenberger S., 2017, in Handbook of Supernovae, ed. Alsabti & Murdin (New York Springer), 317;
  • Jiang J., Doi M., Maeda K., et al. 2017, Nature, 550, 80;
  • Gronow S., Collins C., Ohlmann S., Pakmor R., Kromer M., Seitenzahl I., Sim S., Röpke F., A&A, 2020, 635, 169.

COSMOS: Studying cosmic strings

COSMOS consortium members (dp002) have made good progress over the past year using DiRAC HPC Facilities, particularly implementing adaptive mesh refinement (AMR) techniques for the study of extreme gravity in a cosmological context. This year we highlight high resolution simulations of radiation from cosmic strings which were generated using GRChombo, the pioneering numerical relativity code developed within the COSMOS consortium and now attracting a growing research community (see the public repository github.com/GRChombo). The improving sensitivity of the LIGO/Virgo gravitational wave detectors ultimately motivated this work because of the discovery potential of these experiments for cosmic strings, but the initial applications are actually to the origin of dark matter axions.

Figure 1. Massless radiation through the n=2 quadrupole is the dominant decay mode for global (axion) strings.

Cosmic strings – heavy line-like defects traversing through space – can arise when the rapid cooling of the hot early Universe triggers a phase transition. For this reason, cosmic strings are strongly tied to fundamental physics and so their detection could offer deep insights into the underlying structure of particle physics and spacetime itself. Accurate signal templates are essential for observational searches, but numerical simulations of cosmic strings are computationally challenging because of the wide range of physical scales involved; the ratio between the string width and the scale of the Universe across which the strings stretch is approximately 10³⁰! This is far beyond the capability of current simulations to resolve, so different approximations are made for simulations to be computationally viable, usually either taking a zero thickness limit or a fixed comoving width; the outcome has been quite different predictions in the literature for both gravitational wave signatures and string axion (dark matter) radiation. A recent consortium paper (arXiv:1910.01718) addresses this controversy by using AMR to simulate global (or axion) string configurations and networks, allowing the numerical grid resolution to adapt to the scale of the problem. Figure 1 shows a simulation of an oscillating sinusoidal string, with massless radiation emanating out in a quadrupole pattern. Performing a careful quantitative analysis of this axion radiation led to the conclusion that the evolution of global strings tends towards that predicted in the thin-string limit, with additional radiation damping or backreaction. This provides a significant step towards resolving this issue and making precise predictions (in this case) of the small axion mass, which is a key parameter defining current dark matter axion searches.

Figure 2. String network decay channels into massive radiation are generically suppressed, except in nonlinear regions where there can be explosive production.

This research is also being taken forward with much larger simulations of full cosmic string networks (see Figure 2), using diagnostic tools developed to measure competing massless (axion or GWs) and massive (Higgs) decay channels. This cosmic string work with the flexible GRChombo code has proved to be an important exemplar for testing new HPC capabilities, particularly for the COSMOS Intel Parallel Computing Centre collaboration on in-situ visualization (with the Intel Viz team, TACC and Kitware); this pioneering work using the OSPRAY ray-tracing visualization package has been demonstrated by consortium members at recent international supercomputer conferences and at SIGGRAPH 2019. We have also initiated a collaboration with the Chombo team at Lawrence Berkeley National Laboratory to speed up core libraries.

Figure 3. Lead author, Amelia Drew, on stage with Intel at SIGGRAPH 2019 demonstrating HPC in-situ visualization capabilities using cosmic defect codes.

The evolution of high mass-ratio circumbinary discs

In recent years the detection of a large variety of structures in protoplanetary systems (once believed to be featureless), the detection of a high number of exoplanets with properties much different from those detected in the solar system, and the detection of gravitational waves from binary black holes have all raised a similar fundamental research question: “How does the mutual interaction between a binary and its surrounding gas shape the systems we observe?”

The conservation of angular momentum during the infall of material onto a binary (which in this context can mean two stars, two black holes, or a star and a planet) forces the material to form a disc structure that orbits the binary, and the dynamics of these so-called circumbinary discs have been studied for many years. The binary exerts a so-called “tidal torque” on the accretion disc, carving a gap or even a cavity in the disc. The disc in turn exerts a back-reaction torque on the binary, changing its orbit, causing it to migrate and/or changing its eccentricity. The traditional theoretical approach requires simplifying assumptions in order to make the equations analytically tractable, such as assuming that the mass of the secondary, or the mass of the disc, is much smaller than that of the primary. When these assumptions are not satisfied the theoretical predictions become unreliable, and we must turn to numerical hydrodynamic simulations in order to investigate the problem properly. However, the time-scales involved in this problem are long (at least hundreds of binary orbits), and high numerical resolution is required to resolve the tidal torques correctly. As a result, accurate hydrodynamical simulations of binary-disc interactions are extremely computationally challenging.

Figure 1: Disc surface density from 3-D simulations of circumbinary discs, for several different values of the secondary-to-primary mass ratio
q, performed on the DiRAC Data Intensive@Leicester system. White dots mark the location of the binary components. (Figure adapted from Ragusa et al., submitted). More massive companions invariably result in non-axisymmetric “horseshoe” structures in the disc.

By exploiting the power of DiRAC, we were able to perform a suite of 3-D Smoothed Particle Hydrodynamics simulations which follow the evolution of a wide range of different circumbinary discs for long timescales (typically 2000 binary orbits). Our main aim was to understand the physical mechanism responsible for the formation of the so-called “horseshoe” structures that have been seen in a large number of recent simulations of circumbinary discs, and which have been proposed as an explanation for the asymmetric structures observed by ALMA in numerous planet-forming systems. Our simulations (shown in Fig. 1) showed that horseshoe structures are associated with the rapid growth of the cavity size when the disc becomes eccentric, if the binary companion is sufficiently massive. The long-term evolution of our simulations suggests that these features are actually transient, but they can survive for thousands of binary orbits. In real systems this corresponds to a significant fraction (~10%) of the ~Myr lifetimes of protoplanetary discs, which may explain why these structures are only seen in a sub-set of observations. Our simulations have provided significant new insight into a long-standing theoretical problem, as well as raising some interesting new questions. In particular, we unexpectedly discovered a very strong relationship between the growth of the disc cavity and its eccentricity, and our future work with DiRAC will investigate the physics behind this result in much greater detail.


Astrophysical Maser Flares from Shock Impact

Figure 1. A sequence of synthetic images in maser line emission of an astrophysical cloud being compressed by a 7.5 km/s shock moving vertically upwards. The percentage figures represent the fraction of the cloud compressed. Axes are in the scaled units of the model, and the colour bar to the right of each figure is the log (base 10) of the maser intensity in units of the saturation intensity; all panels are on the same scale.

One mechanism that has been put forward as a generator of maser flares in massive star-forming regions is the impact of a shock of modest speed on a cloud with a typical size of a few astronomical units. The sequence of panels in Figure 1 shows the effect on images of such a cloud when struck by a 7.5 km/s shock that moves up the z-axis (bottom to top). There is a frame change at the 50% shocked point, from a view where the shock appears to move, at lower percentages, to a view where the remains of the uncompressed cloud flow into a stationary shock at the cloud mid-point (for shocked proportions > 50%). The shock is modelled as isothermal, with a compression factor proportional to the square of the Mach number. In this case the Mach number is 3, giving a density enhancement of 9 in the shocked part of the cloud, which increases from 10% of the cloud in the top diagram to 80% in the bottom panel. Compression is applied to the node distribution of the model prior to triangulation. The colour palette is the base-10 logarithm of the maser specific intensity relative to the saturation intensity: the brightest parts of the image increase from about 0.001 of the saturation intensity in the undisturbed cloud to of order 10^4 times this parameter in the 80% panel, so the shock increases the brightest pixel intensities by of order 10 million times. The effect on the flux density, averaged over the whole cloud, is smaller, but this still represents a very convincing mechanism for generating high-contrast maser flares. If one model unit on the axes of each panel is equal to half an astronomical unit, then the rise time of the flare is approximately the time taken for the shock to cover 1 astronomical unit, or 231 days. The effectiveness of the flare mechanism relies on a switch, due to the shock, from a weakly saturated initial cloud (approximately spherical) to a quite strongly saturated, and highly oblate, final cloud.
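The two numbers quoted above, the density enhancement of 9 and the 231-day rise time, follow from a few lines of arithmetic; this sketch simply checks the values in the text and is not part of the maser model itself:

```python
# Isothermal shock: density compression factor equals the Mach number squared.
mach = 3.0
compression = mach**2            # = 9, matching the enhancement in the text

# Flare rise time: time for a 7.5 km/s shock to cross 1 astronomical unit.
AU_M = 1.495978707e11            # astronomical unit in metres
v_shock = 7.5e3                  # shock speed in m/s
rise_time_days = AU_M / v_shock / 86400.0

print(compression, round(rise_time_days))  # 9.0 231
```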
Data from DiRAC are available for (original) optical depths ranging from 0.1 to 30.0, and for a number of different shock speeds.


Dust destruction in supernova remnants

It is well established that (sub-)micrometre-sized dust grains can form in over-dense gas clumps in the expanding ejecta of supernova remnants. However, highly energetic shock waves occur in the ejecta and can potentially destroy a large fraction of the newly formed dust grains. The gas in the ejecta is heated up to billions of Kelvin and accelerated to a few hundred kilometres per second, which causes thermal and kinematic sputtering of the dust grains. Moreover, the dust grains can collide with each other at high velocities and become fragmented or even vaporized. Previous predictions for the dust survival rate depend strongly on initial parameters and range from less than 0.1% to 99%. The net dust survival rate is crucial for determining whether or not supernovae contribute significantly to the dust budget in the interstellar medium.

Figure 1: Hydrodynamical simulation of the temporal evolution of the spatial gas density when the reverse shock impacts the clump. Our simulations were performed on the DiRAC @ Cambridge Service and show that the clump is disrupted within ∼60 years. (Adapted from Kirchschlager et al. 2019).

In order to model a shock wave interacting with an ejecta clump we performed hydrodynamical simulations using the grid-based code AstroBEAR (Fig. 1). Dust motions and dust destruction rates were then computed using our newly developed external post-processing code Paperboats, which includes gas and plasma drag, grain charging, kinematic and thermal sputtering, as well as grain-grain collisions. We used DiRAC HPC facilities to determine the dust survival rates for the oxygen-rich supernova remnant Cassiopeia A for a huge range of parameters, including initial grain sizes, dust materials and clump gas densities.

Figure 2: Surviving silicate mass as a function of the initial grain size a_peak and the density contrast χ between clump and ambient medium (Kirchschlager et al. 2019).

We find that up to 40% of the silicate (Fig. 2) and up to 30% of the carbon dust mass is able to survive the passage of the reverse shock. The survival rates depend strongly on the initial grain size distribution, with ∼10−50 nm and ∼0.5−1.5 μm the grain radii that show the highest surviving dust masses. The dust processing causes a rearrangement of the initial grain size distribution. Our results show that grain-grain collisions and sputtering act synergistically, and that grain-grain collisions can play a vital role in determining the surviving dust budget in supernova remnants.

References:

  • Kirchschlager, F., Schmidt, F. D., Barlow, M. J., Fogerty, E. L., Bevan, A., & Priestley, F. D. 2019, MNRAS, 489, 4465-4496

Gaseous Dynamical Friction in the Early Universe

Figure 1: Variation in drag force with Mach number for two perturber masses, with the numerical results shown by the dots, and the analytic prediction shown by the line (Morton et al. in prep)

Dynamical friction is the process by which a massive perturber, moving through some background medium, gravitationally interacts with that medium, producing a net retarding force on its motion. When the background medium is gaseous, the pressure forces in the gas must be included when modelling the response of the medium, and these alter the resultant force. To model this process numerically, we used the gravo-hydrodynamic code GIZMO, which offers state-of-the-art Lagrangian hydrodynamic solvers, to test highly idealised wind-tunnel-like setups against the analytic predictions for this net force. We found that the numerical results significantly underestimate the force predicted by analytic treatments of the problem. The solvers did not reproduce the predicted structure of the wake, instead smoothing out the predicted density profiles and producing unexpected bow-wave structures. These results were found in regimes where the analytic prediction should hold. Dynamical friction is critical in driving the mergers of dark matter sub-halos into their larger hosts, as it provides a crucial mechanism by which angular momentum can be transferred away from the orbiting substructure, allowing it to spiral into the central object. This is particularly important in the early Universe, when gas fractions were higher. The reduced retarding force (shown in Figure 1 for different perturber Mach numbers) suggests that modern cosmological simulations run using these solvers could be underestimating the merger rates of early dark matter structures, and so incorrectly recovering the evolution of early galaxies.
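The analytic prediction for gaseous dynamical friction is commonly taken from Ostriker's (1999) linear-theory result; the text does not specify the exact form used, so the sketch below is illustrative, with an arbitrary value assumed for the Coulomb-like logarithm:

```python
import math

def ostriker_I(mach, ln_lambda=4.0):
    """Dimensionless drag factor I(M) from Ostriker (1999).

    ln_lambda stands in for ln(V t / r_min) on the supersonic branch;
    the value 4.0 is an illustrative choice, not taken from the work
    described in the text.
    """
    if mach < 1.0:
        # Subsonic: the wake is nearly front-back symmetric, so most of
        # the pressure perturbation cancels and the drag is weak.
        return 0.5 * math.log((1.0 + mach) / (1.0 - mach)) - mach
    # Supersonic: a Mach cone trails the perturber and drags on it.
    return 0.5 * math.log(1.0 - 1.0 / mach**2) + ln_lambda

def drag_force(rho, mass, v, c_s, G=6.674e-11):
    """Net retarding force F = 4 pi rho (G M)^2 / v^2 * I(M), SI units."""
    return 4.0 * math.pi * rho * (G * mass) ** 2 / v**2 * ostriker_I(v / c_s)
```

Comparing the measured wake drag against a curve like I(M), Mach number by Mach number, is the kind of test that Figure 1 summarises.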


The b1 resonance in coupled-channel scattering from Lattice QCD

The vast majority of hadrons are resonances, seen experimentally through decays into a multitude of other hadrons in various angular distributions, and corresponding to pole singularities in scattering amplitudes. Resonances which decay into two pseudoscalar mesons (mesons with J^P = 0^-, where J is spin and P is parity) have been studied extensively using first-principles lattice QCD calculations over the last decade. However, all but the simplest resonances have decay channels which involve hadrons with non-zero spin and, in particular, those with ‘unnatural’ spin-parity, J^P = 0^-, 1^+, 2^-, are forbidden to decay to two pseudoscalars. Calculations of these scattering channels are significantly more complicated.

Figure 1.

In a recent publication [arXiv:1904.04136, Phys. Rev. D 100, 054506 (2019)], we studied the b1 axial-vector resonance (J^P = 1^+). Experimentally its decay channels include πω and πφ, where π is a pseudoscalar meson (J^P = 0^-) and ω, φ are vector mesons (J^P = 1^-). The non-zero spin of the vector mesons means that the decay may feature a superposition of orbital angular momentum L=0 (S-wave) and L=2 (D-wave). Working with heavier-than-physical light quarks, such that the π mass is 391 MeV, we performed the first lattice QCD computation of the relevant coupled scattering amplitudes – the resulting squared amplitudes for πω in S-wave and D-wave, and πφ in D-wave, are shown in Figure 1 (upper panels), labelled by ^(2S+1)L_J. Notably, we observe a bump-like enhancement in S-wave πω, suggestive of a narrow resonance, and we indeed find a pole in the analytic continuation of the amplitudes to complex energies (lower panel) – the real and imaginary parts are related to the mass (m_R) and width (Γ_R) respectively, and the strengths of the coupling to each channel (|c|) are also shown. Interestingly, the absence of significant enhancement in S-wave πφ suggests no evidence for a Z_s, proposed as an analogue of the enigmatic Zc(3900) observed in πJ/ψ. This work opens the door to investigating the exotic light resonance π1(1600), nucleon resonances such as the Roper, and puzzling structures observed in the charmonium and bottomonium sectors.


Electron acceleration and transport in turbulence

Highly energetic electrons are present in many different astrophysical systems, from solar flares and supernovae to planetary magnetospheres and the intra-cluster medium. Plasma turbulence is ubiquitous in these systems, and it is fundamental both for the production of energetic particles and for their transport properties.

The problem of particle acceleration in astrophysical turbulence was first considered by Fermi, who envisaged particles interacting with randomly moving scattering centres and thereby gaining energy stochastically [1]. If the scattering centres have some coherent, large-scale motion, the energisation is more rapid [2]. Since Fermi’s original ideas, the topic has been extensively investigated.

Figure 1. (a) Magnetic field magnitude for a 2D hybrid PIC simulation of turbulence, and two typical electron trajectories showing a trapped orbit (red) and an open orbit (blue).
(b) Magnetic field magnitude and a typical electron trajectory (dots), with its trapped portion in red. The inset shows the energy history (blue) of the electron shown in the figure, with the trapped portion highlighted in red. The green and orange line show the perpendicular and parallel particle energy with respect to the local magnetic field. Figure partially adapted from Ref 3 [Trotta et al., 2020].

Using DiRAC we have studied the acceleration and transport properties of transrelativistic electrons in plasma turbulence [3] using hybrid PIC and test-particle simulations. We have discovered that turbulence strength is a key parameter controlling electron acceleration, with rapid acceleration due to particle trapping occurring when the turbulence level is high. Figure 1a shows two examples of open and trapped electron trajectories. Figure 1b shows a detail of an electron trajectory with the trapped part highlighted in red. As can be seen in the figure inset, when the electron is trapped, fast quasi-monotonic energisation is achieved. This fast energisation stage resembles a first-order Fermi process, as observed by other authors in simulations of magnetic reconnection (see Ref. [3] for further details).

The importance of turbulence strength in activating fast electron acceleration has significant consequences for heliospheric and astrophysical plasmas, with implications for understanding, for example, the physics of solar flares and radio emission from galaxy clusters.

References:

  • [1] E. Fermi, On the origin of cosmic radiation, Physical Review 75:1169-1174 (1949)
  • [2] E. Fermi, Galactic magnetic fields and the origin of cosmic radiation, The Astrophysical Journal 119:1 (1954)
  • [3] D. Trotta, L. Franci, D. Burgess and P. Hellinger, Fast acceleration of transrelativistic electrons in astrophysical turbulence, The Astrophysical Journal, 894:2-136 (2020)

Extreme QCD: Quantifying the QCD Phase Diagram

The FASTSUM collaboration uses DiRAC supercomputers to simulate the interactions of quarks, the fundamental particles which make up protons, neutrons and other hadrons. The force which holds quarks together inside these hadrons is described by Quantum ChromoDynamics, “QCD”. We are particularly interested in the behaviour of QCD as the temperature increases to billions, and even trillions, of Kelvin. These conditions existed in the first moments after the Big Bang, and are recreated on a much smaller scale in heavy-ion collision experiments at CERN (near Geneva) and at the Brookhaven laboratory (near New York).

The intriguing thing about QCD at these temperatures is that it undergoes a substantial change in nature. At low temperatures, QCD is an extremely strong, attractive force, so it is effectively impossible to pull quarks apart; at temperatures above the “confining” temperature Tc, it is much weaker, the quarks are virtually free, and the hadrons they once formed “melt”.

We study this effect by calculating the masses of protons and other hadrons and their “parity partners”, which are like their mirror-image siblings. Understanding how these masses change with temperature can give deep insight into the thermal nature of QCD and its symmetry structure.

Figure 1.

Our most recent results are summarized in Figure 1, where the top-left panel shows the proton mass as a function of temperature up to around Tc. The other panels show results for hadrons containing various numbers of strange quarks. As can be seen, the positive-parity states have fairly constant masses, whereas the masses of their negative-parity partners decrease substantially as the temperature increases, until the two become nearly degenerate at around Tc, as dictated by the global chiral symmetry of the QCD Lagrangian. Beyond Tc the masses are difficult to determine, which is consistent with the QCD force becoming so weak that the hadrons break apart. This calculation supplements our theoretical understanding of QCD with quantitative detail.


Deep inelastic structure of the proton

Hadron structure functions are ubiquitous in the description of leptonic interactions with hadrons, encoding: elastic form factors, inclusive electro- (and photo-)production of resonances, diffractive processes and Regge phenomena, and partonic structure in deep inelastic scattering. Lattice QCD, however, has only been able to probe limited kinematic corners of the all-encompassing structure functions, being restricted primarily to elastic form factors and the low(est) moments of leading-twist parton distributions. The particular interest in partonic structure has motivated a number of strategies to overcome limitations in the lattice formulation, including: the Euclidean hadron tensor, the lattice OPE, the heavy-quark Compton amplitude, symmetry-improved operator construction, factorisable matrix elements, and quasi-PDFs and related quantities. We have embarked on a complementary program to extract the forward Compton amplitude in the unphysical region [1]. Computationally, we are able to take advantage of the efficiency of the Feynman-Hellmann approach to hadron structure and so avoid the need to compute 4-point functions. Here we highlight [2,3] some recent progress towards revealing scaling and higher-twist phenomena in the low-order moments of the Compton amplitude from lattice QCD.


Figure 1: (Left) deep inelastic scattering where a hadron (usually a proton with u and d quarks) is destroyed by a highly energetic lepton (usually an electron) by emitting a photon which strikes a quark or parton in the hadron. This process is described by the Compton amplitude. (Right) the computed lowest moment of the u-d parton distribution as a function of the momentum transfer showing the need for power corrections.

References:

  • [1] A. Chambers et al., Phys. Rev. Lett. 118 (2017) 242001.
  • [2] A. Hannaford-Gunn et al., PoS (LATTICE2019) 278.
  • [3] R. Horsley et al., ibid 137.

New LIGO-Virgo gravitational waves

The third LIGO-Virgo observing run (O3) began in April 2019, and has since announced three gravitational-wave observations – the binary-neutron-star merger GW190425, the binary-black-hole merger GW190412, and most recently GW190814, the merger of a 23-solar-mass black hole with a 2.6-solar-mass object that could be either the lowest-mass black hole yet observed through gravitational waves, or the highest-mass neutron star ever found. Measuring the properties of these sources, in particular the masses of the objects, relied on several theoretical signal models, including the family of phenomenological models that is tuned to the fully general-relativistic numerical simulations of black-hole binaries performed in this project.

Figure 1.

Over the last year, the models have been upgraded to include subdominant harmonics (which were essential to the analysis of GW190412), and to treat black-hole–neutron-star systems (of which GW190814 may be one). O3 data continue to be analysed using these models, and a full catalogue of all confirmed observations will be published when the analysis is complete.


UNITY: Multi-wavelength simulations of relativistic cosmology

Gravitation in our Universe is best described by Einstein’s theory of General Relativity. However, cosmological simulations typically use a Newtonian approximation for the gravitational field, which is far simpler to solve and which proves to be an excellent approximation over a wide range of distance scales. Forthcoming surveys of the large-scale structure of the Universe (e.g. galaxy surveys such as Euclid and LSST, and intensity-mapping surveys with MeerKAT and SKA) are now becoming large enough to capture the very largest distance scales in the observable Universe, which inherently require a relativistic treatment.

To help us precisely understand the effects of the relativistic corrections on observations that will be made by these surveys, we have produced the largest ensemble of fully-relativistic N-body simulations to date, using the cutting-edge gevolution code. This includes a rigorous treatment of relativistic degrees of freedom, such as massive neutrinos, as well as solving for photon paths using the full set of relativistic fields that exist on large scales in the Universe. The result is a particularly accurate representation of a range of different cosmological large-scale structure observables that fully accounts for subtle GR effects.

Figure 1: A simulation of the redshift-space distortion fluctuation field that includes a full set of general relativistic corrections, taken from the UNITY simulations. This figure shows a slice through the lightcone at a redshift of z=0.47, produced using the latest version of the gevolution relativistic N-body simulation code on DiRAC. Another 50 simulations of this kind were produced as part of the project.

As part of the UNITY project, we ran an ensemble of 51 gevolution simulations on the DiRAC Data Intensive system in Leicester, with corresponding lightcones and tomographic maps of various observables taken from each and stored for further processing to add survey-specific observational features, such as different populations of galaxies. The simulations were divided into two sets: an ensemble containing different random realisations of the initial conditions, which can be used to study the statistical properties of observables and methods used to recover them, and a set with fixed initial conditions but systematically different values of cosmological parameters, allowing us to perform a precision study of the cosmology-dependence of the various relativistic correction terms.

Two key works based on these simulations are currently being prepared for publication: a technical description of the simulations and a study of the relativistic dipole in the galaxy distribution (L. Coates et al., in prep.), and a detailed study of GR-induced multipoles in the power spectra and two-point functions of multiple galaxy populations (C. Guandalin et al., in prep.).


Impact of Reionization on the Intergalactic Medium


Figure 1. Below a certain scale the matter distribution of the Universe is not uniform, but rather it forms a vast network of filamentary structures known as the “cosmic web”. The Lyman-α forest observations offer a unique opportunity to characterize the properties of the cosmic web at high redshifts. Image credit Jose Oñorbe.

The latest measurements of the CMB electron-scattering optical depth reported by the Planck satellite significantly reduce the allowed range of HI reionization models, pointing towards a later-ending and/or less extended phase transition than previously assumed. During HI reionization the intergalactic medium (IGM) is photoheated to ~10^4 K, and owing to the long cooling and dynamical times of the diffuse gas, comparable to the Hubble time, its vestiges persist in the IGM to much later times. Existing and upcoming observations of the Lyman-α (Ly-α) forest at high redshift can be used to detect these extant thermal signatures of HI reionization by comparing them with accurate hydrodynamical models of HI reionization. This can provide new independent constraints not only on the timing of HI reionization but also on the sources that reionized the universe. However, achieving these goals requires reliable and precise models of the Ly-α forest to compare with the observations. The challenge for these models is to reach enough dynamic range to attain the necessary precision in the Ly-α forest predictions, while at the same time obtaining the number of models required to guarantee a good sampling of the parameter space.

We have run a grid of state-of-the-art cosmological hydrodynamical simulations in order to explore the physical parameter space characterized by the onset of reionization, its duration and the heat injected into the IGM. The new methodology used allowed us to take into account the complex inhomogeneities of the reionization process and to explore the full physical parameter space very efficiently. The creation of an accurate emulator of the Ly-α statistics with these simulations will be crucial to provide accurate fits to Ly-α observations.


Tidal disruption by supermassive black holes

Figure 1.

Tidal disruption events (TDEs) occur when a star passes too close to a supermassive black hole (SMBH). By “too close” we mean that the tidal field resulting from the black hole’s gravity is strong enough to overcome the star’s self-gravity and rip the star apart. Stars that pass close to the black hole typically do so on near-parabolic orbits, and thus approximately half of the debris from the disrupted star is on bound orbits, and the other half is on unbound orbits with respect to the black hole. The bound portion of the stream is thought to return to the black hole and circularise into an accretion disc, which powers a luminous flare in an otherwise dormant galactic nucleus. TDEs can therefore be used to probe the SMBH mass function, the properties of individual stars, and stellar dynamics in galactic nuclei. Upcoming missions will detect thousands of TDEs, and accurate theoretical modelling is required to interpret the data with precision. In Golightly et al. (2019, ApJL, 882, L26) we present numerical simulations, performed on Data Intensive at Leicester (DI@L), of TDEs in which the structures of the simulated stars are taken from the MESA stellar evolution code. This is in contrast to previous numerical simulations, which employed simple polytropic models for the stars. We find that the more accurate stellar density profiles yield qualitatively different fallback rates when compared to polytropic density profiles, with only the fallback rates from low-mass (less than the mass of our Sun), zero-age main-sequence stars well fit by a standard polytropic model. Stellar age has a strong effect on the shape of the fallback curve, and can produce characteristic timescales (e.g., the time to the peak of the fallback rate) that differ greatly from the polytropic values.
We further investigated whether polytropic models for the fallback rate, which are routinely used to infer the properties of observed systems from observational data, can lead to inaccurate measurements of, for example, the black hole mass. In Figure 1 we show the results of two simulations with black hole masses differing by a factor of 5: in one the star was modelled as a polytrope, and in the other with an accurate structure from the MESA stellar evolution code. The similarity of the fallback curves for such different black hole masses suggests that detailed modelling that captures realistic stellar density profiles is required to accurately interpret observed lightcurves of TDEs.
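The “too close” criterion at the start of this section is usually quantified by the standard order-of-magnitude tidal radius, r_t ≈ R_* (M_BH/M_*)^(1/3). This scaling is textbook material rather than a result of the simulations above, and the star and black hole in the example are hypothetical:

```python
# Tidal radius: the distance at which the SMBH's tidal field overcomes
# the star's self-gravity. Standard order-of-magnitude estimate; the
# Sun-like star and 10^6 solar-mass black hole are illustrative choices.
R_SUN_M = 6.957e8                # solar radius in metres
AU_M = 1.496e11                  # astronomical unit in metres

def tidal_radius(r_star_m, mbh_over_mstar):
    return r_star_m * mbh_over_mstar ** (1.0 / 3.0)

rt = tidal_radius(R_SUN_M, 1.0e6)   # Sun-like star, 10^6 M_sun black hole
print(rt / AU_M)                    # ~0.47 au: disruption occurs well inside 1 au
```

The near-parabolic orbits mentioned above mean the star typically plunges from much larger distances down to within this radius before being torn apart.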


DiRAC enables prediction for matter-anti-matter asymmetry in the Standard Model

The Standard Model (SM) of Elementary Particles has been extremely successful in correctly predicting and describing the properties of elementary particles studied at experimental facilities all around the world. The SM itself covers particles like quarks and electrons, for instance, and their anti-particles. While both particles and anti-particles behave largely in the same way, the SM allows for tiny differences in their behaviour.

This is, however, extremely important, since it is an essential ingredient in describing the abundance of matter over anti-matter observed in our universe. One of the big questions in modern particle physics is therefore whether the SM correctly describes the observed symmetry breaking, or whether new, as yet undiscovered physics is driving it.

DiRAC resources have now enabled a detailed study of this symmetry breaking by considering the decay of kaons into two pions, as studied experimentally at CERN and Fermilab two decades ago. Both kaons and pions are mesons, i.e. bound states of a quark and an anti-quark. Researchers from Brookhaven National Laboratory, CERN, Columbia University, the University of Connecticut, the University of Edinburgh, the Massachusetts Institute of Technology, the University of Regensburg and the University of Southampton (the RBC/UKQCD Collaboration) set out to compute the amount of symmetry violation present in the SM by means of large-scale computer simulations of Lattice Quantum Chromodynamics (QCD). QCD is the part of the SM that describes the strong interactions of quarks (and anti-quarks) via the exchange of force carriers called gluons. The researchers have developed the theoretical understanding and the numerical and algorithmic techniques needed to complete this ambitious, state-of-the-art program.

In Lattice Quantum Chromodynamics one performs the calculation by constructing a discrete four-dimensional space-time grid (the lattice) on which one numerically solves the QCD equations of motion. Such lattice QCD simulations are the only known method to address the above questions without having to rely on ad hoc assumptions. This type of computation crucially relies on access to the world’s fastest parallel supercomputers like DiRAC’s Extreme Scaling system.

To date RBC/UKQCD’s prediction is the only one of its kind and therefore highly anticipated. At the level of precision achieved in their calculation the result agrees with the experimental measurement. On the one hand this result is yet another piece of evidence for the unparalleled success of the SM as the theory of elementary particle physics. On the other hand, we expect the SM to be incomplete in describing all aspects of particle physics and CP violation remains an excellent place to look for limitations of the SM. The Collaboration is therefore now working on ways to improve the precision of their result.

Bibliography:

  • “Direct CP violation and the ΔI=1/2 rule in K→ππ decay from the Standard Model”, arXiv:2004.09440
  • “Standard Model Prediction for Direct CP Violation in K → ππ Decay” Phys.Rev.Lett. 115 (2015) 21, 212001, arXiv:1505.07863

Ion-neutral decoupling in the nonlinear Kelvin-Helmholtz instability: Case of field-aligned flow

In many areas of the solar atmosphere, including prominences, and in many other astrophysical systems, the majority of the fluid is neutral, with only a small fraction being plasma. As with fully ionized plasmas, partially ionized plasmas can be subject to flow instabilities which result in turbulence developing in the plasma. However, the coupling between the neutral component of the fluid and the magnetic field is not perfect, as it arises through collisions between neutral and charged species. The consequence is that the two fluids are strongly coupled to each other for slow dynamics (so the neutrals strongly feel the Lorentz force), but if motions are fast enough the neutral fluid will not feel the magnetic field and will behave purely hydrodynamically.

Figure 1. Image of the density structure that develops in the neutral fluid as part of the partially ionised plasma version of the magnetic Kelvin-Helmholtz instability

In this study we performed an extremely high-resolution 2D calculation, which allowed the dynamics at both coupled and decoupled scales to be resolved simultaneously. The basic setup comprised a velocity and density jump and a magnetic field aligned with the flow. The fluid was only 1% ionized.

The most surprising results of this study came from the thermodynamics. We found that heat transfer in the system could occur strongly across the magnetic field because the neutral fluid, which is not tied to the field lines, can carry energy across them.

How turbulent heating occurred also produced a surprise. In standard turbulence it is normally the smallest scales that drive the dissipation, but we found that the frictional heating in a partially ionized plasma is instead determined by the larger vortex scale.
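The frictional heating in question comes from the drift between the ion and neutral fluids. A hedged sketch of the standard two-fluid form of this heating rate (the coefficient alpha and all numerical values here are illustrative assumptions, not quantities from the paper):

```python
def frictional_heating(alpha, rho_n, rho_i, v_n, v_i):
    """Volumetric frictional (drag) heating between the neutral and ion
    fluids, Q = alpha * rho_n * rho_i * (v_i - v_n)^2, the standard
    two-fluid form.  alpha is a collisional coupling coefficient.
    The heating vanishes when the two fluids move together, so it is
    the scale that sets the drift velocity that controls the heating."""
    dv = v_i - v_n
    return alpha * rho_n * rho_i * dv * dv

print(frictional_heating(1.0, 1.0, 0.01, 0.0, 2.0))  # finite drift heats
print(frictional_heating(1.0, 1.0, 0.01, 5.0, 5.0))  # co-moving: no heating
```

Because the drift velocity is set by the large vortices rather than the smallest eddies, the heating tracks the large scales, consistent with the finding above.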

Based on results published in “Ion-neutral decoupling in the nonlinear Kelvin-Helmholtz instability: Case of field-aligned flow” by Andrew Hillier, Physics of Plasmas (2019). (The article can be accessed at https://doi.org/10.1063/1.5103248.)


Simulation of Sp(2N) gauge theories for Composite Higgs models


Figure 1: The figure (from [1]) shows the mass squared of the lowest-lying vector particle versus the mass squared of the pseudoscalar particle in the continuum for Sp(4) gauge theory with static (blue) and dynamical (red) fermions. The bands represent the error of the corresponding best fit to the data. The effect of unquenching the fermions is clearly visible in the figure.

Understanding the nature of the standard model Higgs boson is still an open problem. An appealing possibility is that the Higgs boson is a composite particle resulting from a novel strong interaction. The lack of observation of otherwise unexplained particles interacting through this conjectured novel force and with mass comparable to that of the Higgs boson would then require a mechanism that keeps the Higgs parametrically light. Such a mechanism can be provided by the spontaneous breaking of the global flavour symmetry in the new interaction. In this framework, the Higgs boson is interpreted as a Pseudo-Nambu-Goldstone Boson (PNGB) of the novel interaction. Among candidate realisations of PNGB compositeness are Sp(2N) gauge theories. The two simplest theories in this class are Sp(2), coinciding with SU(2), and Sp(4). Given the strong nature of the novel force, validating these theories experimentally as viable models requires first-principles calculations that allow us to determine quantitative predictions from them. These calculations can be executed numerically by formulating the models on a spacetime lattice.
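To give a flavour of what a lattice gauge-theory calculation involves, the sketch below builds random SU(2) link variables (recall Sp(2) = SU(2)) on a tiny 2D periodic lattice and evaluates an elementary plaquette, the smallest Wilson loop, whose average enters the gauge action. This is a toy illustration under assumed parameters, not the collaboration's production code, which uses 4D lattices, Sp(2N) groups, and dynamical fermions.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su2():
    """Random SU(2) matrix U = a0*I + i*(a1*sx + a2*sy + a3*sz), |a| = 1."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

# One SU(2) link variable per site and direction on an L x L periodic lattice.
L = 3
links = np.array([[[random_su2() for mu in range(2)]
                   for y in range(L)] for x in range(L)])

def plaquette(x, y):
    """Normalised real trace of the elementary Wilson loop at (x, y):
    U_x(x,y) U_y(x+1,y) U_x(x,y+1)^dag U_y(x,y)^dag, periodic boundaries."""
    P = (links[x, y, 0]
         @ links[(x + 1) % L, y, 1]
         @ links[x, (y + 1) % L, 0].conj().T
         @ links[x, y, 1].conj().T)
    return np.real(np.trace(P)) / 2.0   # lies in [-1, 1] for SU(2)

print(plaquette(0, 0))
```

In an actual simulation the links are sampled with the Boltzmann weight of the gauge action rather than uniformly, and hadron masses such as those in Figure 1 are extracted from correlation functions measured on the sampled configurations.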

Figure 2: The figure shows the phase shift as a function of the centre-of-mass energy of the scattering process of two Goldstone bosons in the SU(2) gauge theory. The slope of a linear fit allows us to constrain the coupling of the PNGBs to the vector resonance in the composite Higgs framework.

Our project has provided the first numerical calculation in the Sp(4) gauge model with two fundamental flavours and the first determination of its lowest-lying meson spectrum and of the corresponding decay constants in the continuum limit. This has enabled us to determine the coefficients of an effective field theory that can be used to compare the predictions of the model with experimental results [1]. Our study of the Sp(2N) Yang-Mills model, combined with results in the literature for SU(N) and SO(N) gauge theories, has revealed that in both three and four spacetime dimensions the ratio of the mass of the tensor glueball to the mass of the scalar glueball is independent of the gauge group [2]. This result provides further insight into the phenomenon of confinement, which characterises non-Abelian gauge models and provides striking experimental signatures such as jets in high-energy experiments involving the standard model strong force.

Our project also provided the first calculation in a composite Higgs scenario of the scattering amplitude of two PNGBs in the SU(2) model. The amplitude controls the production cross section of a new resonance expected to be produced at the LHC in a Composite Higgs scenario. The results therefore constrain the viability of the model and shed light on a key observable to test composite Higgs models using lattice simulations.

References:

  • [1] E. Bennett et al., Sp(4) gauge theories on the lattice: Nf=2 dynamical fundamental fermions, JHEP 12 (2019) 053 [arXiv:1909.12662]
  • [2] E. Bennett et al., On the colour dependence of tensor and scalar glueball masses in Yang-Mills theories, Phys. Rev. D Rapid Communications to appear [arXiv:2004.11064]
  • [3] V. Drach et al., Resonance study of SU(2) model with 2 fundamental flavours of fermions, Proceedings of Science [arXiv:1910.13847] – paper in preparation.