2019 Highlights

Some of the Research Highlights from our users in 2019:


B meson oscillations

Figure 1. Oscillations between Bd or Bs mesons and their anti-particles occur because of coupling via a ‘box diagram’ with W bosons and top quarks.

Watching the oscillations of two coupled pendulums (for example, attached to the same not-too-rigid support) provides insights into simple but rather counter-intuitive physics. Set one pendulum oscillating and, after a while, the other one will start up as the amplitude of the first dies down. Later still, the amplitude of the second pendulum will die down and the first will, rather astonishingly, start up again. The energy of the motion sloshes back and forth between the two pendulums because the swinging of either single pendulum is not a simple (‘normal’) mode of oscillation of the coupled system. Setting one pendulum in motion in fact sets both normal modes oscillating, and these combine the motion of both pendulums in different ways. As time progresses, the interference between the normal modes, which have different frequencies, results in this (at first sight) puzzling behaviour.
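The energy transfer described above can be sketched with the closed-form normal-mode solution for two identical coupled pendulums. The normal-mode frequencies below are illustrative placeholders, not values from any real apparatus:

```python
import math

# Illustrative angular frequencies of the two normal modes:
# w_slow (pendulums swinging in phase) and w_fast (in anti-phase).
w_slow, w_fast = 1.0, 1.1

def x1(t):
    """Displacement of pendulum 1, started with unit amplitude at rest."""
    return 0.5 * (math.cos(w_slow * t) + math.cos(w_fast * t))

def x2(t):
    """Displacement of pendulum 2, started at rest at the origin."""
    return 0.5 * (math.cos(w_slow * t) - math.cos(w_fast * t))

# Beat half-period: after t_swap the motion has sloshed entirely
# from pendulum 1 to pendulum 2.
t_swap = math.pi / (w_fast - w_slow)

print(x1(0), x2(0))            # pendulum 1 swinging, pendulum 2 at rest
print(x1(t_swap), x2(t_swap))  # roles exchanged
```

Starting pendulum 1 alone excites both normal modes equally; their slowly drifting relative phase produces the beats described in the text.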

Figure 2. Values obtained for Vts and Vtd from experimental results for Bs/Bd oscillations and lattice QCD (solid lozenges) or assuming CKM unitarity (dashed). HPQCD’s result (black) is the most accurate direct determination to date.

An analogous phenomenon occurs in the physics of particles that are bound states of a bottom quark and either an anti-strange or anti-down quark, known as a Bs or Bd meson respectively. We can distinguish these particles from their anti-particles by their decay processes in our particle detectors. However, the B and anti-B are coupled together through the weak interaction process shown in Figure 1. The particles with simple time behaviour (the ‘normal modes’) are then mixtures of the B and anti-B. A particle produced as a B may oscillate back and forth into an anti-B (and decay as one in our detector) at a later time, with a frequency that can be measured very accurately in experiments at the Large Hadron Collider at CERN. Calculation of the coupling induced by Figure 1 requires the numerical solution of the theory of the strong force (that holds the B and anti-B together) using the method known as lattice QCD. HPQCD has recently done the most accurate calculations of this to date (arXiv:1907.01025) using DiRAC. Comparison to the experimental results enables us to determine the coupling between W bosons, top quarks and strange or down quarks (Vts and Vtd), as shown in Figure 2. Our results agree well with the Standard Model expectation that the CKM matrix of W-boson couplings is unitary, and limit the room for new physics. We also predict the annihilation rate of Bs and Bd mesons to a µ+µ- pair, a key mode for new physics searches at the LHC. HPQCD has also done the first lattice QCD calculation of the difference of decay rates of the two normal modes of the Bs (arXiv:1910.00970).
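Just as for the pendulums, the two normal modes of the B system differ slightly in frequency (here, in mass, by an amount Δm), so a particle produced as a B oscillates into an anti-B with probability sin²(Δm·t/2). A minimal sketch, neglecting decay and CP violation; the Δm values below are approximate experimental magnitudes quoted only for illustration:

```python
import math

def p_oscillate(t_ps, delta_m):
    """Probability that a meson produced as a B decays as an anti-B
    after time t (picoseconds), neglecting decay and CP violation."""
    return math.sin(0.5 * delta_m * t_ps) ** 2

# Approximate measured mass differences (in ps^-1), illustrative only.
DM_BD = 0.51   # Bd system: slow oscillation
DM_BS = 17.8   # Bs system: very fast oscillation

# Half an oscillation after production, a Bs is maximally anti-B-like,
# while a Bd produced at the same moment has barely begun to mix.
t_half = math.pi / DM_BS
print(p_oscillate(t_half, DM_BS))
print(p_oscillate(t_half, DM_BD))
```

The much larger Δm of the Bs system is why its oscillations are so much harder to resolve experimentally, and why the precise lattice-QCD calculation of the box-diagram coupling is needed to turn the measured frequencies into Vts and Vtd.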


The external photo-evaporation of planet-forming discs

Planets form within flattened discs of material around young stars. These young stars themselves form in clustered groups of up to hundreds of thousands of stars. Most planet-forming discs therefore find themselves in a situation where they are being shone upon by ultraviolet (UV) radiation from these nearby young stars. This has the effect of stripping material from the disc in what is called a photoevaporative wind. This could play an important role in influencing how the discs evolve and the planets that they form.


Figure 1: A projected view of an example disc external photo-evaporation model. The colour scale shows the density and the arrows represent the flow structure of the material.

There are two serious issues that limit our understanding of the external photoevaporation of discs. The first issue is that disc evaporation is very hard to model. To get the temperature correct, which determines the structure of the wind and the rate at which mass is lost, one has to solve for the chemistry of the gas and how that is affected by the incoming UV radiation. Furthermore, each cell tries to cool by having “line photons” escape and carry away energy. Each region of the calculation therefore depends on every other region, since if the surroundings are very dense these photons won’t escape, and vice versa. In the past we have therefore only had 1D models of the process, except in the strongest UV environments where the modelling is easier.

T. J. Haworth, as part of the DiRAC project dp100, has developed the TORUS-3DPDR code (Bisbas et al. 2015, Harries et al. 2019) to tackle this problem in 2D models for the first time (Haworth & Clarke 2019). This has vastly improved our understanding of how discs are evaporated, including the rate at which they lose mass and where the mass is actually lost from the disc.

The other key challenge to understanding external photoevaporation is observing it in action. 1D models are insufficient for this purpose, but with these new 2D models, predictions can be made as to what should be observed (Haworth & Owen 2020).

References

  • Bisbas et al. (2015), MNRAS, 454, 2828
  • Harries et al. (2019), A&C, 27, 63
  • Haworth & Clarke (2019), MNRAS, 485, 3895
  • Haworth & Owen (2020), MNRAS, 492, 5030

Understanding the fundamental physics of the Earth’s radiation belts

Electromagnetic waves interact strongly with charged particles in the Earth’s inner magnetosphere. It is important to be able to model the evolution of these particles, since we rely on the many satellites that orbit within this hazardous radiation environment. Our current approaches to modelling the effect of wave-particle interactions in the outer radiation belt, over macroscopic time and length-scales, rely heavily upon quasilinear wave theory (e.g. see [1]). This theory describes wave-particle interactions as diffusive processes in the plasma, flattening out gradients and slowly moving a small fraction of electrons to higher energies (leading to acceleration), or different pitch-angles (leading to loss). However, the quasilinear formalism relies upon a number of assumptions that may not hold for all important wave types in the radiation belts. Use of the DiRAC facilities has enabled us to carry out important studies into the efficacy of the quasilinear formalism.

In [2], we used the EPOCH particle-in-cell code (see [3]) to conduct numerical experiments to investigate electron interactions with an incoherent spectrum of whistler-mode waves. Our novel approach directly extracts diffusive characteristics across all energy and pitch angle space. This benchmarking work establishes a framework for future investigations on the nature of diffusion due to whistler-mode wave-particle interactions, using particle-in-cell numerical codes with driven waves as boundary value problems.

In [4], we use the techniques developed in [2] to investigate the electron response to whistler-mode waves as a function of increasing wave amplitude. We find that whistler-mode waves with amplitudes of order (dB/B)² ~ O(10⁻¹⁰) – O(10⁻⁶) in a uniform B give diffusive and advective dynamics. Over appropriately short timescales, the diffusive component of the dynamics agrees with quasilinear theory even for the highest amplitude waves. These timescales range from thousands to tens of gyroperiods for the lowest and highest amplitude waves, respectively. This provides very helpful insight into the fundamental nature of electron dynamics in energy and pitch-angle space due to the wave-particle interaction under the uniform background-field assumption used in basic quasilinear theory. It also motivates a variety of future numerical experiments and theoretical investigations on the applicability (or otherwise) of quasilinear diffusion theory to electron dynamics in the Earth’s radiation belts, using the particle-in-cell method established in [2].
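The diffusive behaviour referred to above is quantified by coefficients of the form D ≈ ⟨(Δx)²⟩ / (2Δt), estimated from the spread of an ensemble of particles. A toy Monte Carlo sketch of how such a coefficient is extracted from particle data (all numbers here are illustrative; the real analysis in [2] operates on EPOCH particle-in-cell output across energy and pitch-angle space):

```python
import random

random.seed(42)

D_TRUE = 2.0        # illustrative "true" diffusion coefficient
DT = 0.01           # time step
N_STEPS = 100       # total time T = 1.0
N_PARTICLES = 5000

# Random-walk each particle so that the ensemble variance grows as 2*D*t.
final = []
for _ in range(N_PARTICLES):
    x = 0.0
    for _ in range(N_STEPS):
        x += random.gauss(0.0, (2.0 * D_TRUE * DT) ** 0.5)
    final.append(x)

# Recover the diffusion coefficient from the ensemble spread.
T = N_STEPS * DT
mean = sum(final) / N_PARTICLES
var = sum((x - mean) ** 2 for x in final) / N_PARTICLES
D_est = var / (2.0 * T)
print(D_est)  # close to D_TRUE
```

A non-zero drift of the mean, on top of the growing variance, would signal the advective component of the dynamics mentioned above.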

References

  • [1] Glauert, S. A., and Horne, R. B. (2005), doi:10.1029/2004JA010851
  • [2] Allanson, O., et al. (2019), doi:10.1029/2019JA027088
  • [3] Arber, T. D., et al. (2015), doi:10.1088/0741-3335/57/11/113001
  • [4] Allanson, O., et al. (2020), doi:10.1029/2020JA027949

Exploring fundamental fields with strong gravity

Einstein’s theory of general relativity is both complex and beautiful; even after 100 years, there is still much it can reveal to us about the nature of gravity. It may also help us answer other fundamental questions, such as the nature of the dark matter component of the universe. So far, this component is only known to interact gravitationally with other matter, and so its behaviour in strong gravity regimes may be the only way to probe key properties, such as its mass.

This project used numerical simulations of fields in gravitational backgrounds, to explore the behaviour of light dark matter particles around black holes. We demonstrated how accretion of light dark matter onto a black hole leaves a distinctive pattern in the density (above left) and began to study how its behaviour around spinning black holes might probe its self interactions (below right). Future work will extend these studies to the binary case, where such dark matter imprints may be observable by future detectors.


UKMHD: How is the solar corona heated?

The only viable mechanism to heat the outer solar atmosphere must involve the magnetic field. Two main approaches have been adopted. One is to model the heating along individual magnetic field lines and, by specifying the spatial and temporal form of the heating term in the thermal energy equation, try to compare predicted temperatures and densities with the observed values. In this way, constraints on the form of the heating are determined. However, the heating used is not determined from physical processes. The advantage of this approach is that the rapid spatial variations in the transition region can be adequately resolved in 1D. The other approach is to solve the full MHD equations, without any assumptions on the form of the heating, and see how heating occurs due to Ohmic and viscous effects. This second approach, while physically more realistic, requires the numerical resolution in 3D of large but narrow current sheets in which magnetic reconnection can occur. In the nanoflare heating model of Parker, a large number of intense current layers are required, and resolving their sharp current gradients demands extremely large numerical grids. Hence, DiRAC is the only UK computing facility that can cope with the computational requirements. However, it remains extremely difficult to resolve the transition region fully when the thermodynamics is included.

In a unique set of numerical experiments, the evolution of the solar coronal magnetic field is modelled by slowly stressing it through slow photospheric motions. Once a significant amount of free magnetic energy has built up, an instability triggers a large release of energy. The important point is that the magnetic field does not release all the stored energy but tries to relax towards a non-potential state. After this large event, the imposed photospheric motions continue and the magnetic field releases more energy in the form of heat and kinetic energy in a series of smaller but more frequent events. The energy release is spread across the whole region covered by the magnetic field, and this is known as an MHD avalanche process. For the first time, Reid et al. (A&A, 633, 2020) have used DiRAC to investigate in detail how the heating process occurs and where the large number of heating events are spatially located. Hence, the form of the coronal heating function is determined, without any prior assumptions, and can be used with the field-aligned modelling to determine the plasma temperature and density for comparison with observations.

Figure 1 Left: Heating on a field line as a function of position and time. Middle: Heating on a field line, averaged in time, as a function of distance along the field line. Right: Heating on a field line, averaged in space, as a function of time. Distance s is normalised to the field line length and time is in units of the Alfven time.

Figure 1 (left) shows a contour plot of the heating along one field line as a function of distance along the field and time. Red indicates strong heating events. They are localised in space and time. Figure 1 (middle) shows the time-averaged heating along the same field line. There is one large heating event about 0.4 of the distance along the field line, but this is due to the very first event. After that there is no preferred location for the heating. Figure 1 (right) shows the spatially averaged heating as a function of time. Again, there is one large event that starts off the avalanche process, and then the heating comes in regular bursts. For more details see Reid et al., Astron. & Astrophys., 633, id.A158.


The Simba Cosmological Galaxy Formation Simulations

Creating accurate models of our Universe is one of the great quests of humankind, going back to the ancient Greeks, through to Copernicus, Galileo, and Einstein. In recent years, huge progress has been made in developing physical models that connect the early conditions we see shortly after the Big Bang with the present-day population of galaxies, clusters, and intergalactic gas that we see around us today. Owing to the complex and highly nonlinear nature of gravitational collapse and radiative processes, supercomputer simulations play an increasingly critical role in developing models to explain why the Universe looks the way it does.

The Simba simulations are a next-generation suite of galaxy formation models, run on Cosma-7 at Durham. Simba includes unique physical modules that address some of the most pressing questions in modern galaxy formation theory, such as the growth of supermassive black holes and their impact on their host galaxy, and the formation and evolution of dust grains within a cosmological context. Simba predictions are in remarkably good agreement with a wide range of galaxy demographic properties, and have elucidated new ways in which galaxies and black holes interact with their surrounding cosmic ecosystems to shape the observable properties of both. Simba represents our most comprehensive and faithful view yet of the so-called baryon cycle that drives the formation and evolution of galaxies like our own Milky Way over cosmic time.

Simba has had a major impact on the community since its release in July 2019, with 18 publications as of June 2020 based primarily on Simba data, and over 30 active projects currently listed on the Simba pbworks page. The first annual Simba team meeting was held in February 2020, with over 30 attendees representing 17 institutions from 5 countries, demonstrating the global impact of Simba. Simba is gearing up for a public data release in the Fall, which will further widen the user base and establish it among the pre-eminent cosmological galaxy formation simulations today.


Understanding the Milky Way’s metallicity distribution

An important aspect of understanding the formation of bulges of galaxies, including that of the Milky Way, is understanding their metallicity trends. The most notable feature of the metallicity distribution is that when the bulge is viewed edge-on, it appears more vertically pinched than the density. In the Milky Way this manifests as an X-shape when viewed from the Solar perspective. Simulations that include gas, star formation and the chemistry of stars have managed to reproduce the pinched appearance of the metallicity (Debattista et al. 2017). However, these simulations are expensive and do not permit an easy detailed matching to the Milky Way. In the past this has forced researchers to paint metallicity onto particles in N-body simulations (i.e. without any gas or star formation). This type of painting assigns a metallicity based on the position of a star (both its distance from the galactic centre and its height above the mid-plane). However, this is equivalent to assuming that all stars at a particular position are born at the same time with the same metallicity, which is highly implausible. Using a suite of N-body simulations, we have shown that where a star ends up in a bulge depends strongly on its initial dynamical actions, which measure its angular momentum as well as its radial and vertical random motion. This allowed us to develop a novel technique for painting metallicity onto star particles using the dynamical actions. We showed that this technique produces metallicity maps comparable to those of star-forming simulations. This more realistic metallicity assignment allows us to model the formation of the bulge of the Milky Way much more rapidly. The figure shows an example of such a metallicity mapping: the metallicity distribution (colours and black contours) is more pinched along the minor axis than is the density (white contours), as observed in the Milky Way and in external galaxies.
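The idea of painting by actions rather than position can be sketched as a rank-ordered assignment: dynamically colder particles (low actions) receive higher metallicities. The action combination and the target metallicity distribution below are hypothetical placeholders, not the calibration used in the actual work:

```python
import random

random.seed(1)

# Hypothetical star particles: (radial action Jr, vertical action Jz).
particles = [(random.expovariate(1.0), random.expovariate(1.0))
             for _ in range(1000)]

# Rank particles by a (hypothetical) scalar combination of actions,
# dynamically coldest (lowest actions) first.
order = sorted(range(len(particles)),
               key=lambda i: particles[i][0] + particles[i][1])

# Draw a target [Fe/H] distribution and assign it by rank, so that the
# dynamically coldest particles receive the highest metallicities.
feh_values = sorted((random.gauss(0.0, 0.3) for _ in particles),
                    reverse=True)
feh = [0.0] * len(particles)
for rank, i in enumerate(order):
    feh[i] = feh_values[rank]

# The coldest particle now carries the largest metallicity.
print(feh[order[0]] == max(feh))  # True
```

Because the assignment depends on the actions, two stars at the same position but with different random motions can receive quite different metallicities, unlike position-based painting.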

Moreover, using the actions to understand bulge formation has led us to a number of important new insights. We showed, for instance, that the vertical gradient of the vertical action is largely erased by the formation of the bulge, whereas the very weak gradient of the radial action is vastly enhanced during the same time. This means that the vertical gradients observed in galaxies are not the result of the superposition of a metal-rich thin disc and a metal-poor thick disc. Another, quite surprising, result of our use of dynamical actions is that there are populations that vertically cool during bulge formation. We are investigating the properties of these populations and hope to be able to use them to constrain what the Milky Way looked like before the bulge formed.


Supermassive black hole jets stir old cosmic puzzles

Galaxy clusters are the most massive gravitationally bound systems and encode unique information on the composition of our Universe. They comprise a massive dark matter halo and up to a few thousand rapidly moving galaxies enveloped by a very hot X-ray emitting plasma, known as the intracluster medium (ICM). Of particular importance for regulating the heating and cooling of the ICM (the decades-long “cooling flow” problem) are the relativistic jets produced by supermassive black holes with masses in excess of a billion solar masses. Notwithstanding massive theoretical and observational effort, a major puzzle left to crack is how the energy from these powerful jets is transferred to the ICM to stop the catastrophic cooling.


Figure 1. Panel A shows a volume-rendered image of galaxy cluster and panel B shows the volume-rendered jet material as well as the gas velocity field (arrow vectors) in the cluster center. Panel C shows a cold disc structure which surrounds the SMBH. Finally, panels D and E show a 2D reconstruction of the Voronoi grid used and a velocity streamline map of the lower-right lobe-ICM interface, respectively. Credit: Bourne et al. 2020, MNRAS.

Using the moving-mesh code AREPO, we performed state-of-the-art simulations with the highest-resolution jets to date within a large-scale, fully cosmological cluster environment to shed light on this issue. The simulations make use of a novel refinement technique (see Figure 1) that not only allows high resolution close to the central black hole but also within the jet lobes themselves. Mock X-ray observations of the simulated cluster reveal the so-called “X-ray cavities” and “X-ray bright rims” generated by the supermassive black hole-driven jet, which is itself distorted by motions in the cluster; these remarkably resemble the features found in observations of real galaxy clusters (see Figure 2). While it is well accepted that the vast amount of energy injected into the jet lobes (which would be enough to meet the Earth’s power usage for more than 10³¹ years) is sufficient to offset the cooling losses within the ICM and prevent the formation of a cooling flow, how this energy is effectively and isotropically communicated to the ICM is still an open debate. Our simulations showed that ICM motions, or “weather”, are crucial to solving this problem. Through the action of the cluster weather, the jet lobes can be significantly moved and deformed, stirring them around the cluster, which ultimately leads to jet lobe disruption and effective energy transfer to the ICM; this could be the long sought-after solution to the “cooling flow” puzzle.


Figure 2. This shows a mock observation made from the simulation on the right and an actual observation of the galaxy cluster MS 0735.6+7421 on the left. Both images show cavities excavated by the lobe inflation surrounded by X-ray bright rims of dense gas (blue), which are filled by distorted jet material (pink). Credit: Hubble and Chandra Image: NASA, ESA, CXC, STScI, and B. McNamara (University of Waterloo); Very Large Array Telescope Image: NRAO, and L. Birzan and team (Ohio University). Simulated Cluster Image credit: Hubble and Chandra Image (background): NASA, ESA, and B. McNamara (University of Waterloo); Simulated Observation Data: M. A. Bourne (University of Cambridge).

This work has been published in the Monthly Notices of the Royal Astronomical Society: “AGN jet feedback on a moving mesh: lobe energetics and X-ray properties in a realistic cluster environment” by Martin A. Bourne, Debora Sijacki and Ewald Puchwein. Royal Astronomical Society press release: “Stormy cluster weather could unleash black hole power and explain lack of cosmic cooling”, 14 October, 2019.


The effects of ionising star formation on clouds in galactic spiral arms

The formation and life-cycle of giant molecular clouds is a crucial part of understanding both the evolution of galaxies and the varying environments in which the star formation process occurs. The origins of the physical phenomena that govern giant molecular clouds and the star formation within them include external effects caused by the galactic environment (such as galactic potentials and external radiation fields) as well as more local effects such as stellar feedback and turbulence. Recently, we performed simulations of a galactic spiral arm consisting of complexes of giant molecular clouds that included ionising radiation from young stars. Unlike previous work in this area, which typically simulated isolated clouds, our models include a number of clouds along a spiral arm, and thus ionisation fronts propagate in a much more realistic environment. The initial conditions were achieved by extracting a 500 pc² section of spiral arm from a galaxy scale simulation by Dobbs & Pringle (2013) and increasing the resolution.

Figure 1: The two left panels show gas density in the galactic plane after 3.3 Myr – the first with no stellar feedback, the second including photoionisation. The ionised gas fraction is shown by the contours. The dots represent sites of star formation – red if a massive ionising star is present and black otherwise. The panels on the right are equivalent but show the simulation after 4.24 Myr. Comparison between 3.3 and 4.24 Myr shows that when dense pockets are compressed by HII regions from multiple sides, triggered star formation often occurs. Gas that only has an HII region on one side may or may not undergo modest triggered star formation.

Using the simulations, we examine the effect of photoionising feedback caused by the most massive stars forming within the clouds. Lyman continuum photons emitted from these massive stars ionise the surrounding gas, creating bubbles – HII regions – which are bounded by expanding shocks. Photoionisation has the highest energy budget of the early-onset stellar feedback mechanisms. Comparison of simulations with and without photoionisation shows that the timing and location at which stars form is significantly affected (Fig. 1). While the final fraction of gas converted to stars remains relatively unchanged, the time taken is reduced by a factor of nearly 2. This is due to star formation being triggered in nearby dense gas compressed by shocks from the surrounding ionised regions. The result is an acceleration of the star formation rate and a much wider distribution of star formation across the region.


How stars form in the smallest galaxies

It has been a long-standing puzzle how the smallest galaxies in the Universe have managed to continuously form stars at such a remarkably low rate – tiny galaxies like Leo P (pictured above) form approximately one star every 100,000 years! Making use of the DIRAC DIaL supercomputer, our research team has established that such low levels of star formation arise because small dwarfs can stay quiescent for several billions of years, slowly accumulating gas in their centres. When this gas eventually collapses, star formation resumes at rates consistent with observations.

Figure 1: Despite its extremely low mass, Leo P contains a significant number of blue stars. Our simulated galaxies also present these tell-tale signs of recent star formation activity. Image credit: K. McQuinn and Hubble Space Telescope.

Through high-resolution computer simulations, we demonstrate that star formation in the very smallest dwarf galaxies is shut down in the early Universe as a result of heating and ionisation from the strong light of new-born stars, exploding stars and stellar winds. However, we find that this quiescent state can be temporary. Some of our simulated dwarf galaxies are able to slowly accumulate fresh fuel in the form of hydrogen gas, over many billions of years. Once a sufficient amount of gas has piled up to overcome any residual heating, star formation re-ignites and proceeds continuously until today. This scenario explains, for the first time, the puzzlingly low star formation rates observed in faint dwarf galaxies such as Leo P, Leo T, and others. The computational power provided by DIRAC DIaL was crucial to establish this result, allowing us to afford detailed modelling of the star formation and heating processes in these tiny galaxies, over the full lifetime of the Universe.

By deepening our understanding of dwarf galaxies, we gain new insights into how astrophysical processes, such as stellar explosions and the heating and cooling of cosmic gas, regulate galaxy formation. This work also opens the door to harnessing these galaxies as a new test of the “Big Bang” cosmological model. Further work is currently underway to predict how many such star-forming dwarfs exist in our Universe and how many will be discovered by the next generation of astronomical telescopes.


A steeply-inclined trajectory for the Chicxulub impact

Figure 1. The development of the Chicxulub crater. Frames from the simulation of a 17-km diameter asteroid striking Earth at a speed of 12 km/s.

Simulations performed on the DiRAC High Performance Computing Facility have revealed that the asteroid that doomed the dinosaurs struck Earth at the ‘deadliest possible’ angle.

The simulations reveal that the asteroid hit Earth at an angle of about 60 degrees, leaving a signature of its trajectory in the rocks beneath the Chicxulub Crater in Mexico.

Such a strike would have maximised the amount of climate-changing gases thrust into the upper atmosphere, including billions of tonnes of sulphur that blocked the sun and triggered the KPg mass extinction. It is thought that up to 75% of plant and animal species on Earth became extinct, including dinosaurs.

The new simulations are the first ever fully 3D numerical calculations to reproduce the whole impact event, from the moment of impact to the moment the final Chicxulub crater was formed. Results of the simulations were compared with geophysical data from the site of the impact crater to diagnose the direction and angle of impact.

The results from these simulations were published in Nature Communications (Collins et al., 2020), and attracted widespread international media attention [2, 3]. The article was also featured on the STFC website [4].


Identifying and Quantifying the Role of Magnetic Reconnection in Space Plasma Turbulence

Plasma is the fourth state of matter after the solid, liquid, and gaseous states. Almost all of the visible matter in the Universe is in the plasma state. This includes all the stars, the material between the stars, the intergalactic medium, and also the gas in the Earth’s cosmic neighbourhood: the solar wind and the geospace environment in Earth’s magnetosphere. Very often, these plasmas have very low viscosity, so we expect them to be highly turbulent. The goal of our project is to understand the role of a process called “magnetic reconnection” in the omnipresent plasma turbulence in space.

Magnetic reconnection is a multiscale plasma phenomenon in which the magnetic field re-organises and transfers energy into the plasma particles. It occurs in laboratory plasmas, such as fusion or confinement experiments, and in astrophysical plasmas, such as the solar wind, coronal mass ejections, solar flares, explosive events in planetary magnetospheres, accretion discs, and star-formation regions. Although reconnection has been studied for over 60 years, there is still no consensus about a complete theory to describe magnetic reconnection at all scales involved.

Figure 1. Volume rendering of the electric current density J in one of our DiRAC simulations. The turbulence forms elongated structures, which are called “magnetic flux ropes”. The flux ropes are unstable and magnetic reconnection results from their non-linear interaction. Figure from Agudelo Rueda et al. (2020, in preparation).

Current theories and numerical simulations suggest that turbulence and magnetic reconnection are closely related. Turbulence transfers energy across the scales of the plasma. Energy is injected at large spatial scales and then transported to the very small scales where it dissipates as heat. This transfer is known as the turbulent energy cascade. It is proposed that the ultimate dissipation of energy at small scales strongly depends on a link between turbulence and reconnection.
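As a rough illustration of why the injection scales dominate the energy budget in such a cascade, one can integrate a Kolmogorov-like spectrum E(k) ∝ k^(−5/3) across the resolved scales. This power law is only a hedged, back-of-the-envelope assumption here; the actual kinetic-range solar-wind spectrum is steeper and is one of the things these PIC simulations probe:

```python
def band_energy(k_lo, k_hi):
    """Energy in wavenumber band [k_lo, k_hi] for E(k) = k**(-5/3)
    (arbitrary units); the antiderivative is -(3/2) * k**(-2/3)."""
    return 1.5 * (k_lo ** (-2.0 / 3.0) - k_hi ** (-2.0 / 3.0))

total = band_energy(1.0, 1000.0)       # three decades of scales
largest_decade = band_energy(1.0, 10.0)
print(largest_decade / total)          # ~0.79
```

Roughly 80% of the energy sits in the largest decade of scales, yet all of it must funnel through the small scales to dissipate, which is why the link between turbulence and reconnection at those scales matters so much.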

We study this link between turbulence and reconnection with large computer simulations of plasma turbulence. The computational power provided by high-performance computing facilities at DiRAC enables us to study plasma processes from first principles using three-dimensional cutting-edge particle-in-cell (PIC) simulations. This type of simulation resolves the smallest plasma scales and accounts for phenomena that only reveal themselves in kinetic theory. Even with the enormous power of DiRAC, full PIC simulations are unable to cover the whole range of scales involved in plasma turbulence and reconnection in a real plasma, since these simulations are very expensive in terms of computing memory and time. With DiRAC’s capabilities, however, we have taken a big step forward to understanding turbulence and magnetic reconnection in plasmas.

Figure 2. Side view of our three-dimensional simulation box. The different colours represent various markers for magnetic reconnection. We use these markers to identify reconnection regions and to analyse the details of magnetic reconnection in plasma turbulence. Figure from Agudelo Rueda et al. (2020, in preparation).

We initialise our computations through the collision of counter-propagating plasma waves in our simulation box. These waves interact with each other and create turbulence, which is consistent with the observed turbulence in the solar wind. We find that, after some time, the turbulence itself generates current sheets and forms regions of magnetic reconnection. We define a set of criteria to identify where in the simulated plasma reconnection occurs. With these criteria, we find one extensive reconnection site in our simulation and study its properties in great detail.

One huge advantage of our simulations is that we can fly an artificial spacecraft through our simulation box. This spacecraft records data in the same way as a real spacecraft would record data in space plasmas like the solar wind. The measurements from our artificial spacecraft will serve as predictions for measurements of reconnection in the turbulent solar wind by ESA’s and NASA’s latest solar-system spacecraft, Solar Orbiter and Parker Solar Probe, which have the appropriate instruments onboard to test our predictions.


Extreme Gravity and Gravitational Waves

When two black holes inspiral and merge, they emit gravitational waves, famously detected for the first time by LIGO in 2015 [1]. These gravitational waves carry energy as well as momentum away from the black-hole binary system. The energy loss is responsible for the inspiral and eventual merger. The linear momentum radiated away, on the other hand, can impart a recoil on the black-hole system, just as the firing of a bullet imparts a recoil on the shooter. This effect follows from conservation of momentum: the gravitational waves carry net momentum in some direction, so the black-hole emitter has to respond by recoiling in the opposite direction.
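The momentum bookkeeping behind the recoil fits in a few lines. The sketch below, with entirely hypothetical numbers in geometric units, illustrates only the conservation argument, not a numerical-relativity calculation:

```python
def kick_velocity(dpdt_samples, dt, m_final):
    """Recoil ('kick') velocity of the merger remnant from momentum
    conservation: the remnant carries momentum equal and opposite to
    the net linear momentum radiated in gravitational waves,
    v = -P_rad / M.  Geometric units (G = c = 1) assumed."""
    p_rad = sum(dpdt_samples) * dt   # simple Riemann-sum integration of dP/dt
    return -p_rad / m_final

# Hypothetical burst of net momentum flux around merger (made-up numbers):
flux = [0.0, 0.001, 0.004, 0.004, 0.001, 0.0]
v = kick_velocity(flux, dt=0.1, m_final=0.95)
```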

Figure 1. Snapshot of the gravitational-wave signal generated by an eccentric black-hole binary in the so-called superkick configuration where the black-hole spins point inside the orbital plane but in opposite directions. This snapshot shows the cross section of the wave signal around the time of the black-hole merger inside the plane perpendicular to the orbital plane. Note that the waves are stronger in the upper half than the lower half. This asymmetry corresponds to a net emission of linear momentum and a corresponding recoil of the black hole resulting from the merger. This recoil or “superkick” is illustrated in the three panels on the right which display a zoom-in of the central region at different times: the time Tm around merger (as in the large left panel) as well as 60 and 120 time units later (measured in units of the black-hole mass M). The (nearly circular) black line in these panels represents the black-hole horizon. Note that the black hole moves downwards as time progresses; this motion is the kick effect.

One of the most dramatic results of numerical relativity has been the discovery of “superkicks”, very large recoil velocities of up to 3,700 km/s that occur when the black holes start their inspiral with specific spin directions [2,3]. Such large kicks are sufficient to eject black holes from even the most massive host galaxies; the escape velocities from giant elliptical galaxies are about 1,000 km/s. This has sparked considerable interest among astrophysical observers in searching for ejected black holes, because finding one would have major implications for the formation history of supermassive black holes. Several candidates have been found, but alternative explanations (without resorting to kicks) cannot be ruled out; see [4] for a review.

As part of our DiRAC project, we have found that even larger recoil velocities, of up to 4,300 km/s, are realized when the black holes inspiral on moderately eccentric (rather than quasi-circular) orbits [5]. In general, moderate eccentricity can amplify the recoil effect by up to 25%, with possible consequences for the retention rate of black holes in globular clusters, second-generation populations of black-hole merger events, and the black-hole occupation fraction of galaxies.

  • [1] B. P. Abbott et al, Phys.Rev.Lett. 116 (2016) 061102, arXiv:1602.03837 [gr-qc]
  • [2] J. A. Gonzalez et al, Phys.Rev.Lett. 98 (2007) 231101, gr-qc/0702052
  • [3] M. Campanelli et al, Astrophys.J. 659 (2007) L5-L8, gr-qc/0701164
  • [4] S. Komossa, Adv.Astron. 2012, 364073, arXiv:1202.1977 [astro-ph]
  • [5] U. Sperhake et al, Phys.Rev.D 101 (2020) 024044 arXiv:1910.01598 [gr-qc]

Spontaneous Symmetry Breaking in the 3d Thirring Model

The Thirring Model describes relativistic fermions moving in a two-dimensional plane and interacting via a contact term between covariantly conserved currents. The physical system it most resembles is that of low-energy electronic excitations in graphene. For free electrons at half-filling on a honeycomb lattice, conduction and valence bands form cones just touching at their vertices at two “Dirac points” lying within the first Brillouin zone. Since the density of states vanishes, and the effective fine structure constant is boosted by a factor c/vF ≈ 300, where the pitch of the cones vF is the Fermi velocity, the resulting physics is described by a strongly-interacting relativistic quantum field theory, with equal likelihood of exciting electrons or holes.

Besides possible applications in layered condensed matter systems, the Thirring model is an interesting theory in its own right, and possibly the simplest theory of fermions requiring a computational solution. A major question is whether or not for sufficiently strong self-interaction a bilinear condensate <ψψ> forms, restructuring the ground state and causing a gap to form between the conduction and valence bands resulting in a phase transition from semimetal to insulator – this process is precisely analogous to spontaneous chiral symmetry breaking in QCD. It is anticipated that this can occur if the number of fermion species N lies below some critical Nc: determination of Nc requires non-perturbative quantum field theory.

Figure 1.

In lattice field theory, the problem seems to require a precise rendering of the correct U(2N) fermion global symmetries. We have been using the Domain Wall Fermion formulation, in which U(2N) Ginsparg-Wilson symmetries are recovered in the limit that the domain-wall separation Ls → ∞. Simulations have been performed using the RHMC algorithm on 12³ and 16³ systems with Ls ranging from 8 to 48. It turns out that at strong couplings g² and light fermion masses m the recovery of GW symmetry is slow; nonetheless, at weak-to-moderate couplings our results for <ψψ> as a function of g², m obtained at Ls = 48 are compatible with an extrapolation to the large-Ls limit based on the entire dataset. They are well fitted by an equation of state which assumes a continuous symmetry-breaking transition with non-mean-field critical exponents at a critical 1/g² ≈ 0.28 (see figure). The fit is less convincing at the strongest couplings (lying within the broken phase), which we attribute to the Ls → ∞ limit not yet being reached.
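To illustrate what an Ls → ∞ extrapolation looks like in practice, the sketch below fits synthetic data to an assumed exponential approach, <ψψ>(Ls) = C∞ + A·exp(−c·Ls). The functional form, the data and the grid-search fitter are all hypothetical stand-ins, not the analysis actually performed in the project:

```python
import numpy as np

def fit_exponential(Ls, y, c_grid):
    """For each trial decay rate c, solve the linear least-squares
    problem for (C_inf, A) in y = C_inf + A*exp(-c*Ls); return the
    parameters minimising the residual."""
    best = None
    for c in c_grid:
        X = np.column_stack([np.ones_like(Ls), np.exp(-c * Ls)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = np.sum((X @ coef - y) ** 2)
        if best is None or r < best[0]:
            best = (r, c, coef)
    _, c, (C_inf, A) = best
    return C_inf, A, c

Ls = np.array([8.0, 16.0, 24.0, 32.0, 40.0, 48.0])
y = 0.05 + 0.08 * np.exp(-0.1 * Ls)          # synthetic 'condensate' data
C_inf, A, c = fit_exponential(Ls, y, np.linspace(0.01, 0.5, 491))
```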

A major outcome of the project to date is the confirmation that the critical number of flavors for symmetry breaking in the Thirring model satisfies Nc > 1. This is a significant step towards understanding the formulation of fermion quantum fields in strongly-interacting theories.


The universal structure of dark matter halos over a mass range of 10²⁰

Cosmological models in which the dark matter consists of cold elementary particles predict that the population of dark matter haloes in the Universe should extend to masses comparable to the Earth’s, many orders of magnitude below those where galaxies can form. Using a novel multi-zoom technique, we carried out a consistent cosmological simulation of the formation of present-day haloes over the full mass range populated when dark matter is assumed to be a Weakly Interacting Massive Particle (WIMP) of mass ~100 GeV. The simulation has a dynamic range of 30 orders of magnitude in mass and resolves the internal structure of hundreds of Earth-mass haloes in as much detail as that of hundreds of rich galaxy clusters. We find that halo density profiles are universal over the entire mass range and are well described by simple two-parameter fitting formulae, such as the well-known Navarro-Frenk-White profile. Halo mass and concentration are tightly related in a way that depends on cosmology and on the nature of the dark matter. At fixed mass, concentration is independent of local environment for haloes less massive than those of typical galaxies. Haloes over the mass range (10³ – 10¹¹) M⊙ contribute about equally (per logarithmic interval) to the dark matter annihilation luminosity, which we find to be smaller than all previous estimates by factors ranging up to one thousand. This was a hugely challenging project that took seven years to complete, during which we uncovered and solved a number of technical problems in N-body simulations revealed only in the extreme conditions of our calculation. It was carried out on DiRAC’s Cosmology Machine (COSMA) at Durham.
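The Navarro-Frenk-White profile mentioned above has a simple closed form. A minimal sketch, in arbitrary units with the scale density and scale radius as the two free parameters:

```python
import numpy as np

def nfw_density(r, rho_s, r_s):
    """Navarro-Frenk-White density profile:
    rho(r) = rho_s / ((r/r_s) * (1 + r/r_s)**2)."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def nfw_enclosed_mass(r, rho_s, r_s):
    """Mass enclosed within radius r (analytic integral of the profile):
    M(r) = 4 pi rho_s r_s^3 * [ln(1 + x) - x / (1 + x)], x = r/r_s."""
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s ** 3 * (np.log1p(x) - x / (1.0 + x))

# The concentration c = r_200 / r_s relates (rho_s, r_s) to the
# (mass, concentration) description used in mass-concentration relations.
rho_s, r_s = 1.0, 1.0
M_at_10rs = nfw_enclosed_mass(10.0, rho_s, r_s)
```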

Figure 1. Projected dark matter density maps at three simulation levels: base level – cube side L = 740 Mpc; dark matter particle mass, mp = 1.6×10⁹ M⊙; top inset – L = 170 kpc, mp = 5.5×10⁻³ M⊙; bottom inset – L = 170 pc, mp = 1.6×10⁻¹¹ M⊙. In the base panel the largest haloes have a mass similar to that of a rich galaxy cluster, whereas in the bottom subpanel the smallest clearly visible haloes have a mass comparable to that of the Earth. The largest halos at all levels are resolved with at least 10⁵ particles.

References:

  • Jie Wang, Sownak Bose, Carlos Frenk, Adrian Jenkins, Liang Gao, Volker Springel, and Simon White, to appear in Nature.

How primordial magnetic fields shrink galaxies

Little is known about how primordial magnetic fields are generated, or about their initial configurations. Their exact primordial normalization, coherence length and spectral index are undetermined, but they are widely believed to be many orders of magnitude weaker (3×10⁻²⁰ G < B₀ < 3×10⁻¹⁰ G) than the microgauss fields of present-day galaxies. What is known is that magnetic fields play as large a role in galaxies as the turbulent, thermal and cosmic-ray energy densities, and that they therefore have a crucial part to play in shaping a galaxy’s evolution. To understand how different configurations of primordial magnetic fields might affect the global morphological and dynamical properties of galaxies, we used DiRAC to perform a suite of high-resolution constrained-transport magneto-hydrodynamic cosmological zoom simulations in which we varied the initial magnetic field strength and configuration along with the prescription for stellar feedback. In Martin-Alvarez et al., 2020, MNRAS, 495, 4475 we report our findings that strong primordial magnetic fields delay the onset of star formation and drain the rotational support of the galaxy, diminishing the radial size of the galactic disk and driving more gas towards the centre. A possible mechanism behind such a reduction in angular momentum is magnetic braking. The accompanying figure shows the effect of increasing primordial magnetic field strength on the stars and gas of a galaxy from our simulations. For the strongest primordial magnetic field studied (B₀ = 3×10⁻¹⁰ G), the gas radial scale length is halved compared to the simulation where the primordial magnetic field is weakest. New Scientist magazine reported on our work: https://www.newscientist.com/article/2244840-the-milky-way-may-have-been-shrunk-down-by-ancient-magnetic-fields/

Figure 1: (Top) Having run the same cosmological Adaptive Mesh Refinement magneto-hydrodynamical zoom simulation with different primordial magnetic field strengths, in the top panel we show how the stars in our simulated high redshift (z = 2) “NUT” galaxy would appear to the James Webb Space Telescope. The strength of the initial magnetic field B₀ increases by ten orders of magnitude from B₀ = 3×10⁻²⁰ G to B₀ = 3×10⁻¹⁰ G. Each panel is centred on the galaxy. On both the mock images and the gas density projections, we include golden circles of radius 3 physical kpc to aid the visual comparison of their dimensions (see also the scale bar on the Figure). (Bottom) Gas density projections ρg of the galaxy in a few selected simulations. For the strongest primordial magnetic field studied (B₀ = 3×10⁻¹⁰ G), the gas radial scale length is halved compared to the simulation where the primordial magnetic field is weakest (B₀ = 3×10⁻²⁰ G). Figure adapted from Martin Alvarez et al. 2020.

Stellar Hydrodynamics, Evolution and Nucleosynthesis

The stellar hydrodynamic group of Prof Raphael Hirschi at Keele University and their international collaborators computed a series of 3D hydrodynamical simulations of stellar convection during carbon burning (see figure 1). These 3D implicit large eddy simulations were computed with the PROMPI code on the COSMA supercomputer in Durham. The simulations were analysed on-the-fly with PROMPI’s built-in mean-field analysis based on the Reynolds-averaged Navier–Stokes framework (RA-ILES, see Arnett et al 2019 and references therein). Both the vertical convective velocities within the convective region and the bulk Richardson number of the boundaries were found to scale with the driving luminosity as expected from turbulence theory: v ∝ L^(1/3) and RiB ∝ L^(−2/3), respectively. The positions of the convective boundaries were estimated through the composition profiles across them, and the strength of convective boundary mixing was determined by analysing the boundaries within the framework of the entrainment law. The entrainment was found to be approximately inversely proportional to the bulk Richardson number, RiB (entrainment rate ∝ RiB^(−a), with a ≈ 0.75). Although the entrainment law does not encompass all the processes occurring at boundaries, the results support the use of the entrainment law to describe convective boundary mixing in 1D models, and new 1D models including entrainment show promising results. These large-scale simulations also inform the team’s ongoing theoretical efforts to overcome the shortcomings of the widely used but outdated mixing-length theory (Arnett et al 2019).
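The quoted scalings are easy to illustrate. The snippet below uses invented reference values purely to show how the convective velocity, bulk Richardson number and entrainment rate respond to a luminosity boost under v ∝ L^(1/3), RiB ∝ L^(−2/3) and the entrainment law with a ≈ 0.75:

```python
def scaled(ref_value, boost, exponent):
    """Scale a reference quantity for a luminosity boosted by the
    given factor: q(L) = q_ref * boost**exponent."""
    return ref_value * boost ** exponent

# Hypothetical reference values at boost = 1 (arbitrary units):
v_ref, Ri_ref = 10.0, 100.0
for boost in (1.0, 8.0, 27.0):
    v = scaled(v_ref, boost, 1.0 / 3.0)       # convective speed grows as L^(1/3)
    Ri = scaled(Ri_ref, boost, -2.0 / 3.0)    # boundary 'stiffness' drops as L^(-2/3)
    E = Ri ** -0.75                            # entrainment law, up to a prefactor
```

Boosting the driving luminosity (as in the 1000x-33,000x boosted runs) therefore makes the boundaries both softer and more rapidly entraining, which is why the convective region grows faster in the more strongly boosted simulations.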

Figure 1. From left to right: simulations of a convective carbon burning shell (box-in-a-star set-up) with increasing energy generation rate boosting (eps boosting factor from 1000 to 33,000). The increase in the energy generation rate leads to stronger turbulence and faster growth of the convective region (fig. 6 in Cristini et al 2019).

The first observational evidence that planets migrate?

The fact that young planets migrate as a result of interaction with their natal disc is well established theoretically and is widely applied when explaining the orbital properties of exoplanets. To date, however, the very long timescales for planetary migration have meant that there is no direct evidence for it. Moreover, theoretical estimates of the rate of planetary migration are rather uncertain.

Figure 1. Left: ALMA image of Elias 24; Right: FARGO3D calculation by Richard Booth illustrating the dependence of emission profile on planet migration rate. Clearly a migrating planet better explains the depth of the emission gap.

We have proposed the first potential observational test of planetary migration, pointing out, on the basis of our hydrodynamical simulations of dust and gas in discs, that a migrating planet has a different signature from a stationary planet in terms of the way that it sculpts the disc dust. We have modeled the case of the young disc Elias 24, in which the Atacama Large Millimetre Array (ALMA) has revealed a pronounced gap (presumed to contain a planet) with a bright emission ring outside it. We have shown that the dust emission profile of the disc is better fit by models in which the planet migrates. In order to test this hypothesis further we are seeking to acquire data at a range of wavelengths that are sensitive to a range of dust grain sizes. We predict that smaller grains (which are probed by shorter wavelength observations) should be preferentially concentrated in a ring interior to the planet, whereas larger grains should instead congregate outside the planet; this changes the emissivity profile at different wavelengths. The predicted profiles depend on how fast dust grains drift inwards relative to how fast the planet is migrating. Thus multi-wavelength observations provide a way to estimate the planet’s migration rate.
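The grain-size dependence enters through the standard radial drift law for dust in a gas disc (e.g. Weidenschilling 1977). The sketch below, with invented disc numbers rather than those of Elias 24, shows why grains of different sizes drift inwards at very different speeds:

```python
def radial_drift_speed(St, eta, v_k):
    """Standard steady-state radial drift speed of a dust grain in a
    gas disc: v_r = -2 * eta * v_k * St / (1 + St**2).
    St is the Stokes number (proportional to grain size), eta ~ (H/r)^2
    measures the sub-Keplerian pressure support, v_k is the Keplerian
    speed. Drift is fastest for marginally coupled grains, St ~ 1."""
    return -2.0 * eta * v_k * St / (1.0 + St ** 2)

# Hypothetical disc numbers: eta = 0.0025, v_k = 10 km/s.
for St in (0.01, 1.0, 100.0):
    v_r = radial_drift_speed(St, eta=0.0025, v_k=10_000.0)
```

Whether a given grain population ends up inside or outside the planet's orbit then depends on how this drift speed compares with the planet's own migration rate, which is what the multi-wavelength test exploits.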


Computational spectroscopy of exoplanets

Since the first discovery of a planet orbiting a distant star 25 years ago (Nobel Prize in Physics 2019), over 4000 exoplanets have been discovered. Now we want to know what these exoplanets are made of, what the weather is like there, and whether they are habitable. These questions can be answered via characterisation of their atmospheres, which requires detailed models of the planetary atmosphere and large quantities of laboratory data on how possible atmospheric species absorb light.

Figure 1. Artist’s impression of the exoplanet K2-18b (ESA/Hubble, M. Kornmesser).

In 2019, a UCL team comprising Dr Angelos Tsiaras, Dr Ingo Waldmann, Prof Giovanna Tinetti, Prof Jonathan Tennyson, and Prof Sergey Yurchenko made the first detection of water in the atmosphere of an exoplanet, K2-18b, which orbits within its star’s habitable zone. The publication, in Nature Astronomy, 3, 1086 (2019), was the highest-ranked astronomy paper of 2019, with some 4000 worldwide media reports.

An atmospheric retrieval requires significant input on the spectral properties of the individual atoms and molecules which comprise the atmosphere. The ExoMol group led by Profs. Jonathan Tennyson and Sergey Yurchenko has pioneered quantum-mechanics-based theoretical procedures which allow them to compute the appropriate lists of absorption lines. The ExoMol database provides comprehensive and accurate opacities for the key molecules which are likely to exist in the atmospheres of exoplanets. ExoMol line lists are used by exoplanetary models world-wide to model the behaviour of molecules in the atmospheres of exoplanets. Because the atmospheres of most observable exoplanets are rather hot (often well over 1000 K), the molecular line lists become huge. Calculations on such big data can only be accomplished on modern HPC facilities, and DiRAC has been a reliable partner for ExoMol in providing these facilities since 2012.

The ExoMol database contains extensive line lists for 80 molecules and 190 isotopologues. Most of its 700 billion transitions have been produced using DiRAC resources, with 10 line lists resulting from the new thematic project “Spectroscopy of hot exoplanets”. One of these is the line list for SiO2, consisting of over 32 billion transitions (Owens et al., MNRAS, 495, 2020). SiO2 is believed to be an important absorber in the atmospheres of hot “lava planets”, which orbit so close to their host star that their rocky surface must be molten. Until our calculations, there existed no laboratory data on the absorption properties of this key molecule.
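For readers unfamiliar with line lists: each entry (an Einstein A coefficient, a transition wavenumber, a lower-state energy and a degeneracy) converts to a temperature-dependent absorption intensity via the standard expression used by ExoMol/HITRAN-style databases. A minimal sketch with a purely hypothetical line:

```python
import math

C2 = 1.4387769          # second radiation constant hc/k_B, in cm*K
C_CM = 2.99792458e10    # speed of light, cm/s

def line_intensity(A_ul, nu, E_low, g_up, Q, T):
    """Absorption line intensity (cm/molecule) from line-list data:
      S = g' A / (8 pi c nu^2) * exp(-c2 E''/T) * (1 - exp(-c2 nu/T)) / Q(T)
    A_ul: Einstein A coefficient (1/s); nu: transition wavenumber (1/cm);
    E_low: lower-state energy (1/cm); g_up: upper-state degeneracy;
    Q: partition function at temperature T (K)."""
    boltz = math.exp(-C2 * E_low / T)             # lower-state population
    stim = 1.0 - math.exp(-C2 * nu / T)           # stimulated-emission correction
    return g_up * A_ul * boltz * stim / (8.0 * math.pi * C_CM * nu ** 2 * Q)

# Hypothetical line, to show the temperature dependence (Q values made up):
S_300 = line_intensity(A_ul=10.0, nu=2000.0, E_low=500.0, g_up=3, Q=1000.0, T=300.0)
S_1500 = line_intensity(A_ul=10.0, nu=2000.0, E_low=500.0, g_up=3, Q=5000.0, T=1500.0)
```

At high temperature, lines from highly excited lower states gain intensity while the partition function grows, which is why hot line lists must include billions of transitions that are negligible at room temperature.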

Owing to the success of ExoMol and the demand for these data, the ERC has just funded a follow-up project, ExoMolHD, led by Prof Tennyson, on precision spectroscopic data for studies of exoplanets and other hot atmospheres, which will run from 2020 to 2025.


Exploring Jupiter’s Magnetic Field

The Juno spacecraft reached Jupiter in 2016 and has since been taking measurements of the gas giant’s magnetic and gravitational fields. Many interesting features of Jupiter’s surface field, such as the intense isolated magnetic flux patch near the equator dubbed the “Great Blue Spot”, have been observed. Juno’s measurements also extend the Lowes spectrum (the magnetic energy per spherical harmonic degree at the planet’s surface) considerably further than older data allowed. Using our numerical model of Jupiter’s dynamo, we investigate the connection between the shape of the magnetic energy spectrum and the dynamo radius, the depth below the planetary surface at which dynamo action begins (Yue-Kin Tsang and Chris Jones, Earth Planet. Sci. Lett., vol. 530, 115879, Jan 2020).

Figure 1. Jupiter’s surface radial magnetic field from our numerical model, showing a broadly dipolar field with some intense flux spots

Our dynamo model solves the MHD equations describing the behaviour of an electrically conducting fluid in a spherical shell driven by convection. A key point is that the electrical conductivity increases sharply but smoothly with depth, so it is not known a priori where the dynamo radius is. The density of Jupiter varies greatly with depth, so an anelastic model is required to represent the convection accurately. Furthermore, the model parameters need to be in the strong dynamo regime, where the magnetic field strongly affects the flow. Both features are essential for realistic Jupiter models, but they increase computational cost significantly, making the use of the DiRAC facilities essential for our work.

Figure 2. Radial magnetic field from the Juno mission (Connerney et al. 2018), showing the equatorial great blue spot

The numerical model produces a Jupiter-like, dipole-dominated magnetic field. The Lowes spectrum at the surface, as well as the magnetic energy spectrum at different depths, are calculated. We find that within the region where magnetic field is being generated, the magnetic energy spectrum is shallow and maintains more or less the same shape. Outside this dynamo region, the spectrum steepens. This transition enables us to identify the dynamo radius in the model. Comparing our numerical spectrum with the spectrum observed by Juno shows that the dynamo radius of Jupiter cannot be deeper than 0.83 Jupiter radii.
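The underlying idea can be sketched in a few lines: the Lowes spectrum is downward-continued from the surface, and the dynamo radius is identified with the depth at which the spectrum becomes flat ("white"), the classic Lowes-radius argument. The toy example below constructs surface Gauss coefficients that are exactly white at r = 0.83 purely to show the mechanics; it is not our production analysis:

```python
import numpy as np

def lowes_spectrum(gh, a, r):
    """Lowes spectrum (magnetic energy per spherical-harmonic degree)
    at radius r, from Gauss coefficients g_lm, h_lm defined at radius a:
        R_l(r) = (l + 1) * (a / r)**(2l + 4) * sum_m (g_lm^2 + h_lm^2).
    gh[l-1] holds the coefficients of degree l."""
    return np.array([(l + 1) * (a / r) ** (2 * l + 4) * np.sum(np.asarray(c) ** 2)
                     for l, c in enumerate(gh, start=1)])

# Toy check: build surface coefficients (a = 1) such that the spectrum
# is exactly flat at r_dyn = 0.83; downward continuation recovers this.
r_dyn = 0.83
gh_surface = [[(r_dyn ** (2 * l + 4) / (l + 1)) ** 0.5] for l in range(1, 11)]
R_surface = lowes_spectrum(gh_surface, a=1.0, r=1.0)   # steep at the surface
R_dyn = lowes_spectrum(gh_surface, a=1.0, r=r_dyn)     # flat at the dynamo radius
```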

Although Juno has revealed the intense flux patches predicted by our simulations, such as the Great Blue Spot, the observed field is nevertheless smoother than expected at the high magnetic Reynolds numbers inside Jupiter. One possible explanation is the existence of a stably stratified layer just under the upper non-conducting molecular layer due to “helium rain-out”: high pressure makes the helium in the gas giant form droplets, which fall under gravity. This stable layer might also help explain the fierce zonal winds seen on Jupiter. We are currently including a stably stratified layer in our models, to see if this can give a dynamically consistent picture fitting the Juno data.


‘Fast and furious’ planets around tiny stars

New astronomy research from the University of Central Lancashire suggests giant planets could form around small stars much faster than previously thought. Published in Astronomy & Astrophysics (https://www.aanda.org/10.1051/0004-6361/201936954), the new planet formation research by Dr Anthony Mercer and Dr Dimitris Stamatellos challenges our understanding of how planets form.

Figure 1. Computer simulation of planets forming in a protoplanetary disc around a red dwarf star.

Red dwarfs, the most common type of star in our Galaxy, are small stars, 10% to 50% the size of our Sun. Despite their small mass, they are found to host giant planets up to 10 times bigger than Jupiter, the largest planet in our Solar System. The formation mechanism of these big planets remains an unsolved mystery. Giant planets around stars like our Sun are thought to have formed by the gradual build-up of dust particles into progressively bigger bodies. However, red dwarfs are tiny when compared to the Sun, and they do not seem to have enough material around them to form such big planets.

The research team used the UK Distributed Research using Advanced Computing (DiRAC) supercomputing facility to simulate the evolution of protoplanetary discs around red dwarf stars. Protoplanetary discs are rotating structures of dense gas and dust found around all newly-born stars. The researchers found that if these young discs are big enough they can fragment, i.e. break up into pieces, forming gas giant planets. This theory predicts that the formation of giant planets happens within a few thousand years, a timescale which is extremely fast in astrophysical terms. The researchers also found these planets are extremely hot when they form, with temperatures at their cores reaching thousands of degrees. Such hot planets would be relatively easy to observe when they are still young. They do not have an internal energy source, so they become fainter with time, and the window of opportunity to directly observe them is very small. Nevertheless, they can still be indirectly observed by their effect on their host star. Many planets like these have been discovered so far around small stars. Future observations of planets around very young red dwarf stars will test the predictions of this new theory.
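The fragmentation condition in such studies is commonly framed via the Toomre parameter Q = c_s κ / (π G Σ), with Q ≲ 1 (together with fast cooling) indicating a disc susceptible to gravitational fragmentation. The sketch below uses invented numbers, not the discs actually simulated in the paper:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def toomre_q(c_s, kappa, sigma):
    """Toomre stability parameter Q = c_s * kappa / (pi * G * Sigma).
    c_s: sound speed (m/s); kappa: epicyclic frequency ~ Omega (1/s);
    sigma: gas surface density (kg/m^2). Q <~ 1 suggests the disc can
    fragment under its own gravity (given fast enough cooling)."""
    return c_s * kappa / (math.pi * G * sigma)

# Hypothetical outer-disc numbers around a red dwarf (illustrative only):
Q = toomre_q(c_s=200.0, kappa=2e-9, sigma=2000.0)   # comes out just below 1
```

A massive, cold, dense outer disc pushes Q below unity, which is why only sufficiently large discs around red dwarfs fragment into giant planets in these simulations.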

A high-resolution animation can be downloaded here, and a related YouTube animation can be found here.


Monte Carlo radiative transfer for exotic thermonuclear supernovae

The last few years have seen many exciting developments in time-domain astronomy. In particular, thanks to modern surveys that monitor large areas of the night sky on a regular basis, there has been a huge increase in both the quantity and quality of observational data for a wide variety of astrophysical transients, including supernovae. Such data allow us to test our understanding of astrophysical phenomena in ways, and to a degree of accuracy, that was not previously possible. For example, while the basic picture that supernovae result from the explosive deaths of stars is well established, it has become clear that there is a great deal of variation within the supernova population. This applies even to classes of supernovae that have traditionally been considered rather homogeneous, such as the thermonuclear “Type Ia” supernovae (Taubenberger 2017).

Our work focuses on using computer simulations to make predictions for the radiation emitted by supernovae and related stellar explosions. By comparing our results to observations, we hope to test the underlying theoretical models and to help interpret new observational data. Thus, driven by observational advances, a major component of our work in recent years has been understanding how the observed diversity can be reconciled with particular explosion scenarios.
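At its core, a Monte Carlo radiative-transfer code follows a random walk of photon packets. The minimal plane-parallel slab example below captures the idea (path lengths sampled from the optical depth distribution, isotropic scattering, escape counting); the production supernova calculations are vastly more elaborate, with frequency-dependent opacities and expanding ejecta:

```python
import math, random

def escape_fraction(tau_slab, n_packets=20000, albedo=1.0, seed=1):
    """Minimal Monte Carlo transfer through a slab of total vertical
    optical depth tau_slab. Packets start at the base travelling
    upwards, fly path lengths sampled from tau = -ln(xi), and scatter
    isotropically. Returns the fraction escaping through the top."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_packets):
        z, mu = 0.0, 1.0                       # optical-depth coordinate, direction cosine
        while True:
            z += mu * -math.log(rng.random())  # vertical displacement to next event
            if z >= tau_slab:
                escaped += 1                   # leaves through the top
                break
            if z < 0.0 or rng.random() > albedo:
                break                          # exits the base, or is absorbed
            mu = 2.0 * rng.random() - 1.0      # isotropic re-scatter
    return escaped / n_packets

thin, thick = escape_fraction(0.1), escape_fraction(5.0)
```

An optically thin slab lets almost every packet straight through, while a thick scattering slab turns most packets back, the same competition that shapes where and when a supernova's radiation escapes the ejecta.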

Figure 1. Comparison of our calculated synthetic spectrum for a new double-detonation model (black line) to the observed spectrum of the unusual supernova SN 2016jhr (red), which is clearly different from the spectrum of a ‘normal’ Type Ia supernova such as SN 2011fe (shown in orange). Note that our model provides a reasonable match to both the overall shape and many of the discrete spectral features (figure adapted from Gronow et al. 2020).

In 2020 we published new results based on simulations of a particular class of white-dwarf explosion and showed that these can provide a plausible match to certain unusual supernovae. Specifically, our work explores the “double-detonation” model in which a white dwarf star, which is composed mainly of carbon and oxygen, is able to accumulate a surface layer of helium via interaction with a companion star in a binary system. If nuclear reactions can ignite in the helium layer, it has been suggested that this may lead first to detonation of the helium which, in turn, may trigger detonation of the underlying carbon-oxygen material. This “double-detonation” scenario predicts certain key signatures owing to the structure where nuclear ash from the carbon-oxygen core is overlaid by material that originated in the helium layer. Our simulations show (see Figure; details published in Gronow et al. 2020) that the theoretical spectra predicted from such a model give a good match to particularly unusual transients, such as SN 2016jhr (Jiang et al. 2017). Such comparisons support the conclusion that transients of this sort may indeed be explainable via this class of theoretical model – one piece of the puzzle of understanding the origins of supernovae. We are now developing further such simulations to explore more fully the range of properties that double-detonation models may be able to explain – it remains to be seen whether this class of model can really account for a large fraction of the observed Type Ia supernova population or whether a mix of scenarios must be invoked to account for the variations that are observed.

This work also used the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk).

References:

  • Taubenberger S., 2017, in Handbook of Supernovae, ed. Alsabti & Murdin (New York Springer), 317;
  • Jiang J., Doi M., Maeda K., et al. 2017, Nature, 550, 80;
  • Gronow S., Collins C., Ohlmann S., Pakmor R., Kromer M., Seitenzahl I., Sim S., Röpke F., A&A, 2020, 635, 169.

COSMOS: Studying cosmic strings

COSMOS consortium members (dp002) have made good progress over the past year using DiRAC HPC Facilities, particularly in implementing adaptive mesh refinement (AMR) techniques for the study of extreme gravity in a cosmological context. This year we highlight high-resolution simulations of radiation from cosmic strings, generated using GRChombo, the pioneering numerical relativity code developed within the COSMOS consortium and now attracting a growing research community (see the public repository github.com/GRChombo). The improving sensitivity of the LIGO/Virgo gravitational wave detectors ultimately motivated this work, because of the discovery potential of these experiments for cosmic strings, but the initial applications are actually to the origin of dark matter axions.

Figure 1. Massless radiation through the n=2 quadrupole is the dominant decay mode for global (axion) strings.

Cosmic strings – heavy line-like defects traversing through space – can arise when the rapid cooling of the hot early Universe triggers a phase transition. For this reason, cosmic strings are strongly tied to fundamental physics and so their detection could offer deep insights into the underlying structure of particle physics and spacetime itself. Accurate signal templates are essential for observational searches, but numerical simulations of cosmic strings are computationally challenging because of the wide range of physical scales involved; the ratio between the string width and the scale of the Universe across which the strings stretch is approximately 10³⁰! This is far beyond the capability of current simulations to resolve, so different approximations are made for simulations to be computationally viable, usually either taking a zero thickness limit or a fixed comoving width; the outcome has been quite different predictions in the literature for both gravitational wave signatures and string axion (dark matter) radiation. A recent consortium paper (arxiv.org/1910.01718.pdf) addresses this controversy by using AMR to simulate global (or axion) string configurations and networks, allowing the numerical grid resolution to adapt to the scale of the problem. Figure 1 shows a simulation of an oscillating sinusoidal string, with massless radiation emanating out in a quadrupole pattern. Performing a careful quantitative analysis of this axion radiation led to the conclusion that the evolution of global strings tends towards that predicted in the thin-string limit, with additional radiation damping or backreaction. This provides a significant step towards resolving this issue and making precise predictions (in this case) of the small axion mass, which is a key parameter defining current dark matter axion searches.

Figure 2. String network decay channels into massive radiation are generically suppressed, except in nonlinear regions where there can be explosive production.

This research is also being taken forward with much larger simulations of full cosmic string networks (see Figure 2), using diagnostic tools developed to measure competing massless (axion or GWs) and massive (Higgs) decay channels. This cosmic string work with the flexible GRChombo code has proved to be an important exemplar for testing new HPC capabilities, particularly for the COSMOS Intel Parallel Computing Centre collaboration on in-situ visualization (with the Intel Viz team, TACC and Kitware); this pioneering work using the OSPRAY ray-tracing visualization package has been demonstrated by consortium members at recent international supercomputer conferences and at SIGGRAPH 2019. We have also initiated a collaboration with the Chombo team at Lawrence Berkeley National Laboratory to speed up core libraries.

Figure 3. Lead author, Amelia Drew, on stage with Intel at SIGGRAPH 2019 demonstrating HPC in-situ visualization capabilities using cosmic defect codes.

The evolution of high mass-ratio circumbinary discs

In recent years the detection of a large variety of structures in protoplanetary systems (once believed to be featureless), the detection of a high number of exoplanets with properties much different from those detected in the solar system, and the detection of gravitational waves from binary black holes have all raised a similar fundamental research question: “How does the mutual interaction between a binary and its surrounding gas shape the systems we observe?”

The conservation of angular momentum during the infall of material onto a binary (which in this context can mean two stars, two black holes, or a star and a planet) forces the material to form a disc structure that orbits the binary, and the dynamics of these so-called circumbinary discs have been studied for many years. The binary exerts a “tidal torque” on the accretion disc, carving a gap or even a cavity in the disc. The disc in turn exerts a back-reaction torque on the binary, changing its orbit, causing it to migrate and/or changing its eccentricity. The traditional theoretical approach requires simplifying assumptions in order to make the equations analytically tractable, such as assuming that the mass of the secondary, or the mass of the disc, is much smaller than that of the primary. When these assumptions are not satisfied the theoretical predictions become unreliable, and we must turn to numerical hydrodynamic simulations to investigate the problem properly. However, the timescales involved are long (at least hundreds of binary orbits), and high numerical resolution is required to resolve the tidal torques correctly. As a result, accurate hydrodynamical simulations of binary-disc interactions are extremely computationally challenging.

Figure 1: Disc surface density from 3-D simulations of circumbinary discs, for several different values of the secondary-to-primary mass ratio
q, performed on the DiRAC Data Intensive@Leicester system. White dots mark the location of the binary components. (Figure adapted from Ragusa et al., submitted). More massive companions invariably result in non-axisymmetric “horseshoe” structures in the disc.

By exploiting the power of DiRAC, we were able to perform a suite of 3-D Smoothed Particle Hydrodynamics simulations which follow the evolution of a wide range of different circumbinary discs for long timescales (typically 2000 binary orbits). Our main aim was to understand the physical mechanism responsible for the formation of the so-called “horseshoe” structures that have been seen in a large number of recent simulations of circumbinary discs, and which have been proposed as an explanation for the asymmetric structures observed by ALMA in numerous planet-forming systems. Our simulations (shown in Fig. 1) showed that horseshoe structures are associated with the rapid growth of the cavity size when the disc becomes eccentric, if the binary companion is sufficiently massive. The long-term evolution of our simulations suggests that these features are actually transient, but they can survive for thousands of binary orbits. In real systems this corresponds to a significant fraction (~10%) of the ~Myr lifetimes of protoplanetary discs, which may explain why these structures are seen in only a subset of observations. Our simulations have provided significant new insight into an important theoretical problem, as well as raising some interesting new questions. In particular, we unexpectedly discovered a very strong relationship between the growth of the disc cavity and its eccentricity, and our future work with DiRAC will investigate the physics behind this result in much greater detail.
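The “~10% of a ~Myr lifetime” figure can be sanity-checked with Kepler's third law. The binary separation and total mass below are illustrative assumptions for this sketch, not values taken from the simulations:

```python
import math

def orbital_period_yr(a_au, m_total_msun):
    """Kepler's third law in solar units: P[yr] = sqrt(a^3 / M)."""
    return math.sqrt(a_au ** 3 / m_total_msun)

# Illustrative (assumed) values: a binary with 15 au separation and
# one solar mass of total system mass.
period = orbital_period_yr(15.0, 1.0)    # ~58 yr per orbit
elapsed = 2000 * period                  # duration covered by the simulations
fraction = elapsed / 1e6                 # fraction of a ~1 Myr disc lifetime

print(f"P = {period:.0f} yr; 2000 orbits = {elapsed:.2e} yr "
      f"= {100 * fraction:.0f}% of a 1 Myr disc lifetime")
```

For these assumed numbers, 2000 orbits is indeed of order a tenth of a typical protoplanetary disc lifetime.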


Astrophysical Maser Flares from Shock Impact

Figure 1. A sequence of synthetic images in maser line emission of an astrophysical cloud being compressed by a 7.5 km/s shock moving vertically upwards. The percentage figures represent the fraction of the cloud compressed. Axes are in the scaled units of the model, and the colour bar to the right of each figure is the log (base 10) of the maser intensity in units of the saturation intensity; all panels are on the same scale.

One mechanism that has been put forward as a generator of maser flares in massive star-forming regions is the impact of a shock of modest speed on a cloud with a typical size of a few astronomical units. The sequence of panels in Figure 1 shows the effect on images of such a cloud when it is struck by a 7.5 km/s shock that moves up the z-axis (bottom to top). There is a frame change at the 50% shocked point: at lower percentages the view is one in which the shock appears to move, while for shocked proportions > 50% the remains of the uncompressed cloud flow into a stationary shock at the cloud mid-point. The shock is modelled as isothermal, with a compression factor proportional to the square of the Mach number. In this case the Mach number is 3, giving a density enhancement of 9 in the shocked part of the cloud, which increases from 10% of the cloud in the first panel to 80% in the final panel. Compression is applied to the node distribution of the model prior to triangulation. The colour palette is the base-10 logarithm of the maser specific intensity relative to the saturation intensity, so the brightest parts of the image increase from about 0.001 of the saturation intensity in the undisturbed cloud to of order 10^4 times this parameter in the 80% panel; the shock therefore increases the brightest pixel intensities by of order 10 million times. The effect on the flux density averaged over the whole cloud is smaller, but this impact still represents a very convincing mechanism for generating high-contrast maser flares. If one model unit on the axes of each panel is equal to half an astronomical unit, then the rise time of the flare is approximately the time taken for the shock to cover 1 astronomical unit, or 231 days. The effectiveness of the flare mechanism relies on a switch, due to the shock, from a weakly saturated initial cloud (approximately spherical) to a quite strongly saturated, and highly oblate, final cloud.
Data from DiRAC are available for (original) optical depths ranging from 0.1 to 30.0, and for a number of different shock speeds.
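The two key numbers in this description, the density enhancement and the flare rise time, follow from simple kinematics. A quick check, using the IAU value of the astronomical unit:

```python
AU = 1.495978707e11      # astronomical unit in metres
v_shock = 7.5e3          # shock speed in m/s
mach = 3.0

# Isothermal shock: the compression factor is the square of the Mach number.
compression = mach ** 2   # -> 9, as quoted in the text

# Rise time ~ time for the shock to cross 1 au (two model units of 0.5 au).
rise_days = AU / v_shock / 86400.0
print(f"compression = {compression:.0f}, rise time = {rise_days:.0f} days")
```

This reproduces both the factor-of-9 density enhancement and the ~231-day rise time quoted above.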


Dust destruction in supernova remnants

It is well established that (sub-)micrometre-sized dust grains can form in over-dense gas clumps in the expanding ejecta of supernova remnants. However, highly energetic shock waves occur in the ejecta which can potentially destroy a large fraction of the newly formed dust grains. The gas in the ejecta is heated to billions of Kelvin and is accelerated to a few hundred kilometres per second, which causes thermal and kinematic sputtering of the dust grains. Moreover, the dust grains can collide with each other at high velocities and be fragmented or even vaporized. Previous predictions for the dust survival rate depend strongly on the initial parameters and range from less than 0.1% to 99%. The net dust survival rate is crucial for determining whether or not supernovae contribute significantly to the dust budget in the interstellar medium.

Figure 1: Hydrodynamical simulation of the temporal evolution of the spatial gas density when the reverse shock impacts the clump. Our simulations were performed on the DiRAC @ Cambridge Service and show that the clump is disrupted within ∼60 years. (Adapted from Kirchschlager et al. 2019).

In order to model a shock wave interacting with an ejecta clump we performed hydrodynamical simulations using the grid-based code AstroBEAR (Fig. 1). Afterwards, dust motions and dust destruction rates were computed using our newly developed external post-processing code Paperboats, which includes gas and plasma drag, grain charging, kinematic and thermal sputtering, as well as grain-grain collisions. We used the DiRAC HPC facilities to determine the dust survival rates for the oxygen-rich supernova remnant Cassiopeia A for a wide range of parameters, including initial grain sizes, dust materials and clump gas densities.

Figure 2: Surviving silicate mass as a function of the initial grain size a_peak and the density contrast χ between clump and ambient medium (Kirchschlager et al. 2019).

We find that up to 40 % of the silicate (Fig. 2) and up to 30 % of the carbon dust mass is able to survive the passage of the reverse shock. The survival rates depend strongly on the initial grain size distribution, with ∼10−50 nm and ∼0.5−1.5 μm as the grain radii that show the highest surviving dust masses. The dust processing causes a rearranging of the initial grain size distribution. Our results show that grain-grain collisions and sputtering are synergistic and that grain-grain collisions can play a vital role in determining the surviving dust budget in supernova remnants.

References:

  • Kirchschlager, F., Schmidt, F. D., Barlow, M. J., Fogerty, E. L., Bevan, A., & Priestley, F. D. 2019, MNRAS, 489, 4465

Gaseous Dynamical Friction in the Early Universe

Figure 1: Variation in drag force with Mach number for two perturber masses, with the numerical results shown by the dots, and the analytic prediction shown by the line (Morton et al. in prep)

Dynamical friction is the process by which a massive perturber moving through a background medium interacts gravitationally with that medium, producing a net retarding force on its motion. When the background medium is gaseous, the pressure forces in the gas must be included when modelling the response of the medium, and these modify the resultant force. To model this process numerically, we used the gravo-hydrodynamic code GIZMO, which offers state-of-the-art Lagrangian hydrodynamic solvers, to test highly idealised wind-tunnel-like setups against the analytic predictions for this net force. We found that the numerical results significantly underestimate the force predicted by analytic treatments of the problem. The solvers did not reproduce the predicted structure of the wake, instead smoothing out the predicted density profiles and producing unexpected bow-wave structures. These results were found in regimes where the analytic prediction should hold. Dynamical friction is critical in driving the mergers of dark matter sub-halos into their larger hosts, as it provides a crucial mechanism by which angular momentum is transferred away from the orbiting substructure, allowing it to spiral into the central object. This is particularly important in the early Universe, when gas fractions were higher. The reduced retarding force (shown in Figure 1 for different perturber Mach numbers) suggests that modern cosmological simulations run using these solvers could be underestimating the merger rates of early dark matter structure, and so incorrectly recovering the evolution of early galaxies.
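The analytic prediction is not written out here; a common reference point for gaseous dynamical friction is Ostriker's (1999) steady-state result, in which the drag is F = 4πG²M²ρ/v² × I(Mach). The sketch below quotes that standard Mach-number factor from the literature rather than from this paper, and the cut-off ratio `lam` is an assumed illustrative value:

```python
import math

def ostriker_I(mach, lam=10.0):
    """Mach-number dependence of the gaseous dynamical-friction drag
    (Ostriker 1999). lam ~ V*t/r_min is a Coulomb-logarithm-like cut-off
    needed in the supersonic case."""
    if mach < 1.0:                                   # subsonic perturber
        return 0.5 * math.log((1 + mach) / (1 - mach)) - mach
    return 0.5 * math.log(1 - 1 / mach ** 2) + math.log(lam)  # supersonic

# The drag is strongly peaked just above Mach 1 and weak when subsonic:
for m in (0.5, 0.9, 1.5, 3.0):
    print(f"Mach {m:3.1f}: I = {ostriker_I(m):.3f}")
```

The strong peak near Mach 1 is the qualitative feature that the GIZMO wind-tunnel runs were tested against.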


The b1 resonance in coupled-channel scattering from Lattice QCD

The vast majority of hadrons are resonances, seen experimentally through decays into a multitude of other hadrons in various angular distributions, and corresponding to pole singularities in scattering amplitudes. Resonances which decay into two pseudoscalar mesons (mesons with J^P = 0^−, where J is spin and P is parity) have been studied extensively using first-principles lattice QCD calculations over the last decade. However, all but the simplest resonances have decay channels which involve hadrons with non-zero spin; in particular, those with ‘unnatural’ spin-parity, J^P = 0^−, 1^+, 2^−, are forbidden to decay to two pseudoscalars. Calculations of these scattering channels are significantly more complicated.

Figure 1.

In a recent publication [arXiv:1904.04136, Phys. Rev. D 100, 054506 (2019)], we studied the axial-vector b1 resonance (J^P = 1^+). Experimentally its decay channels include πω and πφ, where π is a pseudoscalar meson (J^P = 0^−) and ω, φ are vector mesons (J^P = 1^−). The non-zero spin of the vector mesons means that the decay may feature a superposition of orbital angular momentum L=0 (S-wave) and L=2 (D-wave). Working with heavier-than-physical light quarks, such that the π mass is 391 MeV, we performed the first lattice QCD computation of the relevant coupled scattering amplitudes – the resulting squared amplitudes for πω in S-wave and D-wave, and πφ in D-wave, are shown in Figure 1 (upper panels), labelled by ^(2S+1)L_J. Notably, we observe a bump-like enhancement in S-wave πω, suggestive of a narrow resonance, and we indeed find a pole in the analytic continuation of the amplitudes to complex energies (lower panel) – the real and imaginary parts are related to the mass (m_R) and width (Γ_R) respectively, and the strengths of the coupling to each channel (|c|) are also shown. Interestingly, the absence of a significant enhancement in S-wave πφ suggests no evidence for a Z_s, proposed as an analogue of the enigmatic Zc(3900) observed in πJ/ψ. This work opens the door to investigating the exotic light resonance π1(1600), nucleon resonances such as the Roper, and puzzling structures observed in the charmonium and bottomonium sectors.
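The link between a bump in the squared amplitude and a complex-energy pole can be illustrated with a toy Breit-Wigner amplitude, whose pole sits at s = m² − imΓ (the mass and width below are arbitrary illustrative numbers, not the paper's fitted values):

```python
def bw_amplitude_sq(e, m=1.3, gamma=0.1):
    """|A|^2 for a toy Breit-Wigner amplitude with a pole at
    s = m^2 - i*m*gamma (energies in arbitrary units)."""
    s = e * e
    return (m * gamma) ** 2 / ((m * m - s) ** 2 + (m * gamma) ** 2)

# Scanning the real energy axis, the bump peaks where Re(s) hits the pole:
energies = [1.0 + 0.001 * i for i in range(600)]
peak = max(energies, key=bw_amplitude_sq)
print(f"peak at E = {peak:.3f} (pole mass m = 1.3)")
```

In the actual analysis the pole is found by analytically continuing fitted coupled-channel amplitudes, but the same correspondence between peak position, pole mass and width applies.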


Electron acceleration and transport in turbulence

Highly energetic electrons are present in many different astrophysical systems, from solar flares and supernovae to planetary magnetospheres and the intra-cluster medium. Plasma turbulence is ubiquitous in these systems, and it is fundamental for the production of energetic particles and their transport properties.

The problem of particle acceleration in astrophysical turbulence was first considered by Fermi, who pictured particles interacting with randomly moving scattering centres and thereby gaining energy stochastically [1]. If the scattering centres have some coherent, large-scale motion, the energisation is more rapid [2]. Since Fermi’s original ideas, the topic has been extensively investigated.
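Fermi's stochastic mechanism can be seen in a toy one-dimensional Monte Carlo: head-on encounters with moving scatterers are slightly more probable than overtaking ones, so the ensemble gains energy even though individual encounters can lose it. This is an illustrative sketch of the mechanism only, not the hybrid PIC model used in the study:

```python
import random
import statistics

def evolve(n_particles=2000, n_scatter=200, beta=0.1, seed=1):
    """Toy 1-D second-order Fermi process: relativistic-mirror bounces
    off clouds moving at speed beta*c, with head-on collisions favoured."""
    rng = random.Random(seed)
    gain = (1 + beta) / (1 - beta)     # head-on energy factor
    loss = (1 - beta) / (1 + beta)     # overtaking energy factor
    p_head = (1 + beta) / 2            # head-on slightly more probable
    energies = []
    for _ in range(n_particles):
        e = 1.0
        for _ in range(n_scatter):
            e *= gain if rng.random() < p_head else loss
        energies.append(e)
    return energies

e = evolve()
print(f"mean energy gain after 200 scatterings: {statistics.fmean(e):.1f}x")
```

The net gain is second order in beta per encounter, which is why coherent large-scale motions (or the trapping discussed below) energise particles so much faster.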

Figure 1. (a) Magnetic field magnitude for a 2D hybrid PIC simulation of turbulence, and two typical electron trajectories showing a trapped orbit (red) and an open orbit (blue).
(b) Magnetic field magnitude and a typical electron trajectory (dots), with its trapped portion in red. The inset shows the energy history (blue) of the electron shown in the figure, with the trapped portion highlighted in red. The green and orange lines show the perpendicular and parallel particle energy with respect to the local magnetic field. Figure partially adapted from Ref 3 [Trotta et al., 2020].

Using DiRAC we have studied the acceleration and transport properties of transrelativistic electrons in plasma turbulence [3] using hybrid PIC and test-particle simulations. We discovered that turbulence strength is a key parameter controlling electron acceleration, with rapid acceleration due to particle trapping occurring when the turbulence level is high. Figure 1a shows two examples of open and trapped electron trajectories. Figure 1b shows a close-up of an electron trajectory with the trapped portion highlighted in red. The inset shows that, while the electron is trapped, it undergoes fast, quasi-monotonic energisation. This fast energisation stage resembles a first-order Fermi process, as observed by other authors in simulations of magnetic reconnection (see Ref [3] for further details).

The importance of turbulence strength to activate fast electron acceleration has important consequences for heliospheric and astrophysical plasmas, with implications for understanding, for example, the physics of solar flares and radio emission from galaxy clusters.

References:

  • [1] E. Fermi, On the origin of cosmic radiation, Physical Review 75:1169-1174 (1949)
  • [2] E. Fermi, Galactic magnetic fields and the origin of cosmic radiation, The Astrophysical Journal 119:1 (1954)
  • [3] D. Trotta, L. Franci, D. Burgess and P. Hellinger, Fast acceleration of transrelativistic electrons in astrophysical turbulence, The Astrophysical Journal 894:136 (2020)

Extreme QCD: Quantifying the QCD Phase Diagram

The FASTSUM collaboration uses DiRAC supercomputers to simulate the interactions of quarks, the fundamental particles which make up protons, neutrons and other hadrons. The force which holds quarks together inside these hadrons is described by Quantum ChromoDynamics, “QCD”. We are particularly interested in the behaviour of QCD as the temperature increases to billions, and even trillions, of Kelvin. These conditions existed in the first moments after the Big Bang, and are recreated on a much smaller scale in heavy-ion collision experiments at CERN (near Geneva) and at Brookhaven National Laboratory (near New York).
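“Trillions of Kelvin” can be made concrete. Lattice studies place the QCD transition near 155 MeV (a commonly quoted value, not a figure from this article), which converts to Kelvin via the Boltzmann constant:

```python
K_B = 8.617333262e-11   # Boltzmann constant in MeV per kelvin

def mev_to_kelvin(t_mev):
    """Convert a temperature from MeV to kelvin: T[K] = T[MeV] / k_B."""
    return t_mev / K_B

tc = mev_to_kelvin(155.0)   # assumed QCD transition temperature in MeV
print(f"Tc ~ {tc:.2e} K")   # of order 1.8e12 K, i.e. ~2 trillion kelvin
```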

The intriguing thing about QCD at these temperatures is that it undergoes a substantial change in nature. At low temperatures QCD is an extremely strong, attractive force, so it is effectively impossible to pull quarks apart; at temperatures above the deconfinement temperature Tc it is much weaker, the quarks are virtually free, and the hadrons they once formed “melt”.

We study this effect by calculating the masses of protons and other hadrons and their “parity partners”, which are like their mirror-image siblings. Understanding how these masses change with temperature can give deep insight into the thermal nature of QCD and its symmetry structure.

Figure 1.

Our most recent results are summarized in the plot, where the top-left panel shows the proton mass as a function of temperature up to around Tc. The other panels show results for hadrons containing various numbers of strange quarks. As can be seen, the positive-parity states have fairly constant masses, whereas the masses of their negative-parity partners decrease substantially as the temperature increases, until they become nearly degenerate with their positive-parity partners at around Tc, as dictated by the global chiral symmetry of the QCD Lagrangian. Beyond Tc the masses are difficult to determine, which is consistent with the QCD force becoming so weak that the hadrons break apart. This calculation supplements our theoretical understanding of QCD with quantitative detail.


Deep inelastic structure of the proton

Hadron structure functions are ubiquitous in the description of leptonic interactions with hadrons, encoding: elastic form factors; inclusive electro- (and photo-) production of resonances; diffractive processes and Regge phenomena; and partonic structure in deep inelastic scattering. Lattice QCD, however, has so far only been able to probe limited kinematic corners of these all-encompassing structure functions, being primarily restricted to elastic form factors and the low(est) moments of leading-twist parton distributions. The particular interest in partonic structure has motivated a number of strategies to overcome limitations in the lattice formulation, including: the Euclidean hadron tensor, the lattice OPE, the heavy-quark Compton amplitude, symmetry-improved operator construction, factorisable matrix elements, and quasi-PDFs and related quantities. We have embarked on a complementary programme to extract the forward Compton amplitude in the unphysical region [1]. Computationally, we are able to take advantage of the efficiency of the Feynman-Hellmann approach to hadron structure and avoid the need to compute 4-point functions. Here we highlight [2,3] some recent progress towards revealing scaling and higher-twist phenomena in the low-order moments of the Compton amplitude from lattice QCD.
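The Feynman-Hellmann idea exploited here is that dE/dλ = ⟨ψ|∂H/∂λ|ψ⟩, so a matrix element can be read off from the response of an energy to a parameter in the Hamiltonian, avoiding 4-point functions. This can be checked on a two-level toy Hamiltonian (purely illustrative, unrelated to the actual lattice operators):

```python
import math

def ground_energy(lam, a=1.0, b=2.0):
    """Lowest eigenvalue of the 2x2 Hamiltonian H = [[a, lam], [lam, b]]."""
    return 0.5 * (a + b - math.sqrt((a - b) ** 2 + 4 * lam ** 2))

def matrix_element(lam, a=1.0, b=2.0):
    """<psi|dH/dlam|psi> for the ground state, with dH/dlam = [[0,1],[1,0]]."""
    e = ground_energy(lam, a, b)
    v = (lam, e - a)                  # unnormalised ground-state eigenvector
    norm = v[0] ** 2 + v[1] ** 2
    return 2 * v[0] * v[1] / norm

lam, h = 0.3, 1e-6
fd = (ground_energy(lam + h) - ground_energy(lam - h)) / (2 * h)
print(f"dE/dlam = {fd:.6f}, <dH/dlam> = {matrix_element(lam):.6f}")
```

The finite-difference slope of the energy and the ground-state expectation value agree, which is exactly the relation used on the lattice: perturb the action, measure the shift of the hadron energy, read off the matrix element.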


Figure 1: (Left) deep inelastic scattering where a hadron (usually a proton with u and d quarks) is destroyed by a highly energetic lepton (usually an electron) by emitting a photon which strikes a quark or parton in the hadron. This process is described by the Compton amplitude. (Right) the computed lowest moment of the u-d parton distribution as a function of the momentum transfer showing the need for power corrections.

References:

  • [1] A. Chambers et al., Phys. Rev. Lett. 118 (2017) 242001.
  • [2] A. Hannaford-Gunn et al., PoS (LATTICE2019) 278.
  • [3] R. Horsley et al., ibid 137.

New LIGO-Virgo gravitational waves

The third LIGO-Virgo observing run (O3) began in April 2019, and has so far yielded three announced gravitational-wave observations – the binary neutron star merger GW190425, the binary black hole merger GW190412, and most recently GW190814, the merger of a 23-solar-mass black hole with a 2.6-solar-mass object that could be either the lowest-mass black hole yet observed through gravitational waves or the highest-mass neutron star ever found. Measuring the properties of these sources, in particular the masses of the objects, relied on several theoretical signal models, including the family of phenomenological models that is tuned to the fully general-relativistic numerical simulations of black-hole binaries performed in this project.
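The best-measured combination of the two component masses in such signals is the “chirp mass”, which controls the phase evolution of the inspiral. For the component masses quoted above (23 and 2.6 solar masses) the textbook relation gives:

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1*m2)^(3/5) / (m1+m2)^(1/5), in the same units."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Component masses quoted above, in solar masses:
mc = chirp_mass(23.0, 2.6)
print(f"chirp mass ~ {mc:.2f} solar masses")
```

Untangling the chirp mass into the two individual masses is where the numerically tuned signal models, including subdominant harmonics, become essential.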

Figure 1.

Over the last year, the models have been upgraded to include subdominant harmonics (which were essential to the analysis of GW190412), and to treat black hole–neutron star systems (of which GW190814 may be one). O3 data continue to be analysed using these models, and a full catalogue of all confirmed observations will be published when the analysis is complete.


UNITY: Multi-wavelength simulations of relativistic cosmology

Gravitation in our Universe is best described by Einstein’s theory of General Relativity. Cosmological simulations, however, typically use a Newtonian approximation for the gravitational field, which is far simpler to solve and proves to be an excellent approximation over a wide range of distance scales. Forthcoming surveys of the large-scale structure of the Universe (e.g. galaxy surveys such as Euclid and LSST, and intensity-mapping surveys with MeerKAT and the SKA) are now becoming large enough to capture the very largest distance scales in the observable Universe, which inherently require a relativistic treatment.

To help us precisely understand the effects of the relativistic corrections on observations that will be made by these surveys, we have produced the largest ensemble of fully-relativistic N-body simulations to date, using the cutting-edge gevolution code. This includes a rigorous treatment of relativistic degrees of freedom, such as massive neutrinos, as well as solving for photon paths using the full set of relativistic fields that exist on large scales in the Universe. The result is a particularly accurate representation of a range of different cosmological large-scale structure observables that fully accounts for subtle GR effects.

Figure 1: A simulation of the redshift-space distortion fluctuation field that includes a full set of general relativistic corrections, taken from the UNITY simulations. This figure shows a slice through the lightcone at a redshift of z=0.47, produced using the latest version of the gevolution relativistic N-body simulation code on DiRAC. Another 50 simulations of this kind were produced as part of the project.

As part of the UNITY project, we ran an ensemble of 51 gevolution simulations on the DiRAC Data Intensive system in Leicester, with corresponding lightcones and tomographic maps of various observables taken from each and stored for further processing to add survey-specific observational features, such as different populations of galaxies. The simulations were divided into two sets: an ensemble containing different random realisations of the initial conditions, which can be used to study the statistical properties of observables and methods used to recover them, and a set with fixed initial conditions but systematically different values of cosmological parameters, allowing us to perform a precision study of the cosmology-dependence of the various relativistic correction terms.
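The fixed-initial-condition set is worth a note on design: when the random realisation is held fixed, cosmic variance cancels in differences between runs, so the dependence of an observable on a cosmological parameter can be measured far more precisely. A toy illustration (the observable and noise model are invented for this sketch):

```python
import random

def toy_observable(theta, seed):
    """Invented stand-in for a simulated observable: a smooth parameter
    dependence plus 'cosmic variance' set entirely by the seed."""
    noise = random.Random(seed).gauss(0.0, 0.5)
    return theta ** 2 + noise

def derivative(theta, h, seed_plus, seed_minus):
    """Finite-difference estimate of d(observable)/d(theta)."""
    return (toy_observable(theta + h, seed_plus)
            - toy_observable(theta - h, seed_minus)) / (2 * h)

matched = derivative(1.0, 0.1, seed_plus=1, seed_minus=1)    # same realisation
unmatched = derivative(1.0, 0.1, seed_plus=1, seed_minus=2)  # different ones
print(f"true slope 2.0; matched seeds give {matched:.4f}")
```

With matched seeds the noise cancels exactly and the finite difference recovers the true slope; with independent realisations the difference is dominated by the noise unless many runs are averaged.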

Two key works based on these simulations are currently being prepared for publication: a technical description of the simulations together with a study of the relativistic dipole in the galaxy distribution (L. Coates et al., in prep.), and a detailed study of GR-induced multipoles in the power spectra and two-point functions of multiple galaxy populations (C. Guandalin et al., in prep.).


Impact of Reionization on the Intergalactic Medium


Figure 1. Below a certain scale the matter distribution of the Universe is not uniform, but rather it forms a vast network of filamentary structures known as the “cosmic web”. The Lyman-α forest observations offer a unique opportunity to characterize the properties of the cosmic web at high redshifts. Image credit Jose Oñorbe.

The latest measurements of the CMB electron-scattering optical depth reported by the Planck satellite significantly reduce the allowed range of HI reionization models, pointing towards a later-ending and/or less extended phase transition than previously assumed. During HI reionization the intergalactic medium (IGM) is photoheated to ~10^4 K, and owing to the long cooling and dynamical times of the diffuse gas, comparable to the Hubble time, its vestiges persist in the IGM to much later times. Existing and upcoming observations of the Lyman-α (Ly-α) forest at high redshift can be used to detect these extant thermal signatures of HI reionization by comparing them with accurate hydrodynamical models. This can provide new independent constraints not only on the timing of HI reionization but also on the sources that reionized the universe. To achieve these goals, however, reliable and precise models of the Ly-α forest are needed for comparison with the observations. The challenge for these models is to reach enough dynamic range to attain the necessary precision in the Ly-α forest predictions while at the same time obtaining the number of models required to guarantee a good sampling of the parameter space.

We have run a grid of state-of-the-art cosmological hydrodynamical simulations in order to explore the physical parameter space characterized by the onset of reionization, its duration, and the heat injected into the IGM. The new methodology allowed us to take into account the complex inhomogeneities of the reionization process and to explore the full physical parameter space very efficiently. An accurate emulator of the Ly-α statistics built from these simulations will be crucial for providing accurate fits to Ly-α observations.
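Stripped to its core, the emulator idea is interpolation over a sparse grid of expensive simulation runs. A minimal radial-basis-function sketch in one parameter dimension (the kernel, length-scale, and test function are all illustrative choices for this sketch, not the project's actual emulator):

```python
import math

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def kernel(x, y, length=0.6):
    """Gaussian radial basis function."""
    return math.exp(-((x - y) ** 2) / (2 * length ** 2))

def build_emulator(xs, ys):
    """Fit RBF weights so the emulator passes through the training runs."""
    k = [[kernel(xi, xj) + (1e-10 if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    w = solve(k, ys)
    return lambda x: sum(wi * kernel(x, xi) for wi, xi in zip(w, xs))

# 'Expensive simulations' at 13 parameter values; predict in between.
xs = [0.5 * i for i in range(13)]
emu = build_emulator(xs, [math.sin(x) for x in xs])
print(f"emulator(1.3) = {emu(1.3):.4f}, truth = {math.sin(1.3):.4f}")
```

Once trained, the emulator returns a prediction at any parameter point in microseconds, which is what makes fitting Ly-α observations over a multi-dimensional parameter space feasible.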


Tidal disruption by supermassive black holes

Figure 1.

Tidal disruption events (TDEs) occur when a star passes too close to a supermassive black hole (SMBH). By “too close” we mean that the tidal field resulting from the black hole’s gravity is strong enough to overcome the star’s self-gravity and rip the star apart. Stars that pass close to the black hole typically do so on near-parabolic orbits, so approximately half of the debris from the disrupted star is on bound orbits, and the other half on unbound orbits, with respect to the black hole. The bound portion of the stream is thought to return to the black hole and circularise into an accretion disc which powers a luminous flare in an otherwise dormant galactic nucleus. TDEs can therefore be used to probe the SMBH mass function, the properties of individual stars, and stellar dynamics in galactic nuclei. Upcoming missions will detect thousands of TDEs, and accurate theoretical modelling is required to interpret the data with precision. In Golightly et al. (2019, ApJL, 882, L26) we present numerical simulations, performed on Data Intensive at Leicester (DI@L), of TDEs in which the structures of the simulated stars are taken from the MESA stellar evolution code. This is in contrast to previous numerical simulations, which employed simple polytropic models for the stars. We find that the more accurate stellar density profiles yield qualitatively different fallback rates when compared to polytropic density profiles, with only the fallback rates from low-mass (less than the mass of our Sun), zero-age main-sequence stars being well fit by a standard polytropic model. Stellar age has a strong effect on the shape of the fallback curve, and can produce characteristic timescales (e.g., the time to the peak of the fallback rate) that differ greatly from the polytropic values.
We further investigated whether polytropic models for the fallback rate, which are routinely used to infer the properties of observed systems from observational data, can lead to inaccurate measurements of, e.g., the black hole mass. In the Figure we show the results of two simulations with black hole masses differing by a factor of 5: in one the star was modelled as a polytrope, and in the other it was modelled with an accurate structure from the MESA stellar evolution code. The similarity of the fallback curves for such different black hole masses suggests that detailed modelling that captures realistic stellar density profiles is required to accurately interpret observed lightcurves of TDEs.
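For orientation, the standard order-of-magnitude relations behind such simulations are the disruption radius r_t ≈ R_*(M_BH/M_*)^(1/3) and, for simple polytropic models, a late-time fallback rate decaying as t^(-5/3). These are textbook scalings, not results from the paper:

```python
R_SUN = 6.957e8          # solar radius in metres
AU = 1.495978707e11      # astronomical unit in metres

def tidal_radius_m(m_bh_over_mstar, r_star=R_SUN):
    """Order-of-magnitude disruption radius r_t = R_* (M_BH/M_*)^(1/3)."""
    return r_star * m_bh_over_mstar ** (1.0 / 3.0)

def fallback_decay(t_ratio):
    """Relative late-time fallback rate for the classic t^(-5/3) law."""
    return t_ratio ** (-5.0 / 3.0)

# A solar-type star and an assumed 1e6 solar-mass black hole:
rt = tidal_radius_m(1e6) / AU        # disruption radius ~0.5 au
halving = fallback_decay(2.0)        # rate when the elapsed time doubles
print(f"r_t ~ {rt:.2f} au; rate(2t)/rate(t) = {halving:.3f}")
```

It is departures of the simulated MESA-based fallback curves from this simple decay law, in shape and in time-to-peak, that carry the information about the disrupted star.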


DiRAC enables prediction for matter-anti-matter asymmetry in the Standard Model

The Standard Model (SM) of Elementary Particles has been extremely successful in correctly predicting and describing the properties of elementary particles studied at experimental facilities all around the world. The SM itself covers particles like quarks and electrons, for instance, and their anti-particles. While both particles and anti-particles behave largely in the same way, the SM allows for tiny differences in their behaviour.

These tiny differences are, however, extremely important, since they are an essential ingredient in describing the abundance of matter over anti-matter observed in our Universe. One of the big questions in modern particle physics is therefore whether the SM correctly describes the observed symmetry breaking, or whether new, as yet undiscovered physics is driving it.

DiRAC resources have now enabled a detailed study of this symmetry breaking by considering the decay of kaons into two pions, as studied experimentally at CERN and Fermilab two decades ago. Kaons and pions are both mesons, i.e. bound states of a quark and an anti-quark. Researchers from Brookhaven National Laboratory, CERN, Columbia University, the University of Connecticut, the University of Edinburgh, the Massachusetts Institute of Technology, the University of Regensburg and the University of Southampton (the RBC/UKQCD Collaboration) set out to compute the amount of symmetry violation present in the SM by means of large-scale computer simulations of Lattice Quantum Chromodynamics (QCD). QCD is the part of the SM that describes the strong interactions of quarks (and anti-quarks) via the exchange of force carriers called gluons. The researchers developed the theoretical understanding and the numerical and algorithmic techniques needed to complete this ambitious, state-of-the-art programme.

In Lattice Quantum Chromodynamics one performs the calculation by constructing a discrete four-dimensional space-time grid (the lattice) on which one numerically solves the QCD equations of motion. Such lattice QCD simulations are the only known method to address the above questions without having to rely on ad hoc assumptions. This type of computation crucially relies on access to the world’s fastest parallel supercomputers like DiRAC’s Extreme Scaling system.
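The essence of any lattice calculation, replacing continuum derivatives with finite differences on a grid and then controlling the discretisation error as the spacing shrinks, can be illustrated with a one-dimensional toy. This is a generic numerical-analysis sketch, not lattice QCD itself:

```python
import math

def forward_diff(f, x, a):
    """O(a) one-sided lattice derivative."""
    return (f(x + a) - f(x)) / a

def central_diff(f, x, a):
    """O(a^2) symmetric lattice derivative."""
    return (f(x + a) - f(x - a)) / (2 * a)

x, a = 0.5, 0.01
exact = math.cos(x)   # true derivative of sin at x
err_fwd = abs(forward_diff(math.sin, x, a) - exact)
err_cen = abs(central_diff(math.sin, x, a) - exact)
# Halving the spacing should cut the O(a^2) error roughly 4x:
err_cen_half = abs(central_diff(math.sin, x, a / 2) - exact)
print(f"forward err {err_fwd:.2e}, central err {err_cen:.2e}, "
      f"ratio on halving a: {err_cen / err_cen_half:.2f}")
```

In real lattice QCD the same logic applies to the discretised quark and gluon actions: results are computed at several lattice spacings and extrapolated to the continuum with a known power of the spacing.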

To date RBC/UKQCD’s prediction is the only one of its kind and therefore highly anticipated. At the level of precision achieved in their calculation the result agrees with the experimental measurement. On the one hand this result is yet another piece of evidence for the unparalleled success of the SM as the theory of elementary particle physics. On the other hand, we expect the SM to be incomplete in describing all aspects of particle physics and CP violation remains an excellent place to look for limitations of the SM. The Collaboration is therefore now working on ways to improve the precision of their result.

Bibliography:

  • “Direct CP violation and the ΔI=1/2 rule in K→ππ decay from the Standard Model”, arXiv:2004.09440
  • “Standard Model Prediction for Direct CP Violation in K → ππ Decay” Phys.Rev.Lett. 115 (2015) 21, 212001, arXiv:1505.07863

Ion-neutral decoupling in the nonlinear Kelvin—Helmholtz instability: Case of field-aligned flow

In many areas of the solar atmosphere, including prominences, and in many other astrophysical systems, the majority of the fluid is neutral, with only a small fraction being plasma. As with fully ionized plasmas, partially ionized plasmas can be subject to flow instabilities which result in turbulence developing in the plasma. However, the coupling between the neutral component of the fluid and the magnetic field is not perfect, arising as it does from collisions between neutral and charged species. The consequence is that the two fluids are strongly coupled to each other (so the neutrals strongly feel the Lorentz force) for slow dynamics, but if motions are fast enough the neutral fluid does not feel the magnetic field and behaves purely hydrodynamically.

Figure 1. Image of the density structure that develops in the neutral fluid as part of the partially ionised plasma version of the magnetic Kelvin-Helmholtz instability

In this study we performed an extremely high-resolution 2D calculation, which allowed the dynamics at both coupled and decoupled scales to be resolved simultaneously. The basic setup had a velocity and density jump and a magnetic field aligned with the flow. The fluid was only 1% ionized.

The most surprising results of this study came from the thermodynamics. We found that heat transfer in the system could occur strongly across the magnetic field because the neutral fluid, once decoupled, is not constrained to follow the field lines and can carry energy across them.

How the turbulent heating occurred also threw up a surprise. In standard turbulence it is normally the smallest scales that drive the dissipation, but we found that the frictional heating in a partially ionized plasma is determined by the larger vortex scale instead.

Based on results published in “Ion-neutral decoupling in the nonlinear Kelvin-Helmholtz instability: Case of field-aligned flow” by Andrew Hillier, Physics of Plasmas (2019). (The article can be accessed at https://doi.org/10.1063/1.5103248.)


Simulation of Sp(2N) gauge theories for Composite Higgs models


Figure 1: The figure (from [1]) shows the mass squared of the lowest-lying vector particle versus the mass squared of the pseudoscalar particle in the continuum for Sp(4) gauge theory with static (blue) and dynamical (red) fermions. The bands represent the error of the corresponding best fit to the data. The effect of unquenching the fermions is clearly visible in the figure.

Understanding the nature of the standard model Higgs boson is still an open problem. An appealing possibility is that the Higgs boson is a composite particle resulting from a novel strong interaction. Since no otherwise-unexplained particles interacting through this conjectured force, with mass comparable to that of the Higgs boson, have been observed, a mechanism is required that keeps the Higgs parametrically light. Such a mechanism can be provided by the spontaneous breaking of the global flavour symmetry of the new interaction. In this framework, the Higgs boson is interpreted as a Pseudo-Nambu-Goldstone Boson (PNGB) of the novel interaction. Among candidate realisations of PNGB compositeness are Sp(2N) gauge theories. The two simplest theories in this class are Sp(2), which coincides with SU(2), and Sp(4). Given the strong nature of the novel force, validating these models experimentally as viable theories requires first-principles calculations that allow us to determine quantitative predictions from them. Such calculations can be performed numerically by formulating the models on a spacetime lattice.

Figure 2: The figure shows the phase shift as a function of the centre-of-mass energy for the scattering of two Goldstone bosons in the SU(2) gauge theory. The slope of a linear fit allows us to constrain the coupling of the PNGBs to the vector resonance in the composite Higgs framework.

Our project has provided the first numerical calculation in the Sp(4) gauge model with two fundamental flavours, and the first determination of its lowest-lying meson spectrum and of the corresponding decay constants in the continuum limit. This has enabled us to determine the coefficients of an effective field theory that can be used to compare the predictions of the model with experimental results [1]. Our study of the Sp(2N) Yang-Mills model, combined with results in the literature for SU(N) and SO(N) gauge theories, has revealed that in both three and four spacetime dimensions the ratio of the mass of the tensor glueball to the mass of the scalar glueball is independent of the gauge group [2]. This result provides further insight into the phenomenon of confinement, which characterises non-Abelian gauge models and produces striking experimental signatures such as jets in high-energy experiments involving the standard model strong force.

Our project also provided the first calculation in a composite Higgs scenario of the scattering amplitude of two PNGBs in the SU(2) model. The amplitude controls the production cross section of a new resonance expected to be produced at the LHC in a Composite Higgs scenario. The results therefore constrain the viability of the model and shed light on a key observable to test composite Higgs models using lattice simulations.

References:

  • [1] E. Bennett et al., Sp(4) gauge theories on the lattice: Nf=2 dynamical fundamental fermions, JHEP 12 (2019) 053 [arXiv:1909.12662]
  • [2] E. Bennett et al., On the colour dependence of tensor and scalar glueball masses in Yang-Mills theories, Phys. Rev. D Rapid Communications to appear [arXiv:2004.11064]
  • [3] V. Drach et al., Resonance study of SU(2) model with 2 fundamental flavours of fermions, Proceedings of Science [arXiv:1910.13847] – paper in preparation.

2018 Highlights

Some of the Research Highlights from our users in 2018:

2018 list of publications

The weighty issue of quarks

Figure 1. An artist’s impression of an ηc meson containing a charm quark and antiquark (large blue circles) in a strongly interacting background soup of gluons (wavy lines) and quark-antiquark pairs (small circles).

Electrons make circular tracks when they move perpendicular to a magnetic field in a particle detector. From the radius of the circle, knowing the electric charge, we can determine the electron mass. Quarks, on the other hand, are never seen as free particles. Instead, all we see in our particle detectors are the bound states of quarks, called hadrons. We can measure hadron masses, but how do we then determine, or even define, the mass of a quark?

The answer is that we must define the mass as the parameter that enters the equations of QCD, the theory of the strong interaction that binds quarks into hadrons. We can solve the equations of QCD using the numerical approach of lattice QCD and hence determine the quark mass parameter that we need to put into the theory to give the correct mass for a calibration hadron containing that quark. Since experimentalists can measure some hadron masses very well, we can do this accurately. For example, to fix the charm quark mass, mc, we use the mass of the ηc (pictured in Figure 1).
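The tuning step can be caricatured as a one-dimensional root find. Everything below is illustrative: `toy_etac_mass` is a made-up stand-in for what is, in reality, a full lattice simulation at each trial value of the quark mass parameter:

```python
def tune_quark_mass(hadron_mass_model, target, lo, hi, tol=1e-10):
    """Bisection: find the input quark mass m_q at which the simulated
    calibration-hadron mass matches the experimental target value."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if hadron_mass_model(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Made-up, monotone stand-in for a lattice computation of the eta_c mass (GeV):
toy_etac_mass = lambda m_c: 2.0 * m_c + 0.43
m_c = tune_quark_mass(toy_etac_mass, target=2.9839, lo=0.5, hi=2.0)
print(round(m_c, 3))  # 1.277 — a toy number, not a physics result
```

In practice each "evaluation" costs a large simulation, so the tuning is done with far fewer, carefully chosen runs, but the logic is the same.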

To be of use to other theorists, however, we must convert the lattice quark mass into that appropriate to QCD defined in continuous space-time. We must do this accurately too, to avoid introducing a sizeable uncertainty into the result.



Figure 2. A comparison of lattice QCD results for mc, given in the conventional continuum QCD scheme. The top result is our new one, and the third our 2014 result, both obtained on DiRAC. The dark grey band is the lattice QCD world average, with much better uncertainty than the previous average light grey band (marked PDG).

HPQCD developed one of the main techniques used to determine quark masses (the JJ method) and most recently used this, on DiRAC, to determine mc to 1% in 2014 (arXiv:1408.4169). This year we used a completely different technique (called RI-SMOM) to ‘match’ our lattice quark mass to that of continuum QCD, directly on the lattice (arXiv:1805.06225). We were careful to quantify and remove lattice artefacts, which had been ignored in previous calculations. Our new 1% accurate result agrees very well with our 2014 value and also with a third, MRS, method developed by FNAL/MILC/TUM in 2018. This gives a world-average for mc of 1.2753(65) GeV (1.36 times the mass of the proton). The quark masses need to be accurately known so that we can test whether the newly-discovered Higgs boson is behaving as we expect when it decays to hadrons via quark-antiquark pairs of different flavours.


Light quark flavour breaking effects in B meson decay constants

In the standard model of particle physics there are six types of quarks, whimsically called up u, down d, strange s, charm c, bottom b and top t in order of increasing mass. These bind together to form two classes of hadrons: mesons, consisting of a quark-antiquark pair, and baryons, with three quarks. An example of a baryon is the proton (uud), while examples of mesons are the pion (ud), kaon (us), B (ub) and Bs (sb). These mesons can decay to a charged lepton and a neutrino. The decays are parametrised by ‘meson decay constants’, from which, together with experimentally determined leptonic decay rates, elements of the CKM matrix can be found. These are fundamental parameters of the Standard Model and their determination can help to isolate possible new physics. Recent B physics experiments have left us with a number of heavy-flavour physics puzzles, which will be investigated more intensively by Belle II.

Figure 1: Left panel: A ‘fan’ plot for the light quark ud and us decay constants (normalised with the average ud, us decay constant) for one lattice spacing against the pion mass; Right panel: The heavy b quark ub, sb decay constants, again normalised, for several lattice spacings also plotted against the pion mass.

In our Lattice QCD programme we have developed a method based on expanding about a point in the light-quark mass parameter space where the u, d, s quark masses are initially set equal, and then extrapolating to the physical quark masses while keeping the average quark mass constant. The original expansions were for hadron masses, and led to a ‘fan’ plot in which the different hadron masses radiate from their common point. They were extended to decay constants for the pion and kaon mesons [1], as shown in the left panel of the figure. We have now further extended this programme to the heavier quark masses, and in particular the b [2]. In the right panel of the figure we give the equivalent plot for the B and Bs mesons, together with the present (FLAG16) values.

[1] V.G. Bornyakov et al., Phys. Lett. B 767 (2017) 366.

[2] S. Hollitt et al., PoS(LATTICE2018)268, arXiv:1811.06677 [hep-lat].


Gravitational Waves 2018

The Advanced LIGO and Virgo detectors were undergoing an upgrade during 2018, so the year was devoted to completing analyses of data from the first two observing runs (O1 and O2), and preparations for the third observing run (O3), due to start in April 2019.

The chief results from the LIGO and Virgo collaborations were the final estimates of the physical properties of the binary-neutron-star merger, GW170817, including stronger constraints on the neutron star tidal deformability, and therefore its equation of state and radius; and the release of the first catalogue of observations from O1 and O2, based on a re-analysis of all data, which in turn had undergone improved characterisation and cleansing. This re-analysis uncovered new signals in the data, and doubled the number of observed black hole binary mergers from five to ten. This included re-classifying the marginal “LIGO-Virgo transient” from October 12, 2015 as a bona-fide GW signal, and among the new signals, GW170729, which is likely the most massive binary yet observed, at around 85 solar masses. All of this work used the waveform models that were calibrated to numerical-relativity simulations performed on DiRAC.

Figure 2: The eleven gravitational-wave signals observed in the first and second LIGO-Virgo observing runs (O1 and O2).

Modelling work also continued. The first inspiral-merger-ringdown model to include higher signal harmonics (PhenomHM) was published in Physical Review Letters, and is now being used to update the measurements of the properties of GW170729. The bulk of the most extensive and systematic set of simulations of precessing binaries was completed on Cosma6, and a preliminary model of the precession effects through merger and ringdown (the first of its kind) was completed, with publication expected in 2019, for use in analysis of O3 observations.


Exomol: Computational spectroscopy of Exoplanets

Molecular spectra play a critical role in the retrieval of the atmospheres of exoplanets and cool stars. However, the lack of laboratory data on many molecular species severely limits models for objects as diverse as Jupiter, exoplanets and brown dwarfs. The UCL ExoMol team is a world leader in providing molecular line lists for exoplanet and other hot atmospheres. The ExoMol procedure uses a mixture of ab initio calculations and available laboratory data. These line lists form the input for opacity models for cool stars and brown dwarfs as well as for radiative transport models involving exoplanets. So far ExoMol has collected molecular opacities for more than 50 molecules (130 isotopologues): 30 line lists have been generated using the ExoMol methodology, and other line lists were taken from external sources or from our work predating the ExoMol project.

Ethylene and acetylene are important absorbers in hot Jupiter exoplanets, brown dwarfs and cool carbon stars. As part of our Hydrocarbons DiRAC project, we have computed new line lists for these molecules, each of which contains over 30 billion transitions. In order to accomplish this data-intensive task, some of the UK’s most advanced supercomputers have been used, provided by the DiRAC project. Even larger line lists have been completed: for example, the methyl chloride line list contains over 300 billion lines.

Many of the exoplanets discovered thus far are categorized as rocky objects with an atmosphere. Most of these objects are, however, hot due to their short orbital periods. Models suggest that water is the dominant species in their atmospheres, and the high temperatures are expected to turn these atmospheres into a (high-pressure) steam bath containing the remains of melted rock. The spectroscopy of these hot rocky objects will be very different from that of cooler objects or hot gas giants. We have reviewed the molecules suggested to be important for the spectroscopy of these objects, together with the current status of the corresponding spectroscopic data, and have started building a comprehensive database of line lists and cross sections applicable to atmospheric models of rocky super-Earths as part of the ExoMol project.

Efficient use of line lists comprising billions of lines is extremely challenging. To address this Big Data challenge, we have developed a new Fortran 2003 program, ExoCross. ExoCross takes line lists as input and returns pressure- and temperature-dependent cross sections as well as a variety of other derived molecular properties that depend on the underlying spectroscopic data, including state-dependent lifetimes, temperature-dependent cooling functions, and thermodynamic properties such as partition functions and specific heats. It is designed to work with the recently proposed method of super-lines, and is capable of simulating non-LTE spectra using a simple two-temperature approach.
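The core operation can be sketched as follows. This is not ExoCross itself (which is Fortran 2003 and vastly more sophisticated), but a toy Python illustration with two made-up lines, Gaussian profiles, and a partition function restricted to the listed states:

```python
import math

K_CM = 1.4388  # hc/k in cm*K (the second radiation constant)

def cross_section(lines, grid, T, hwhm=0.1):
    """lines: (centre wavenumber / cm^-1, lower-state energy / cm^-1, strength).
    Returns a Boltzmann-weighted sum of Gaussian line profiles on `grid`,
    i.e. a toy temperature-dependent absorption cross section."""
    # Toy partition function over just the listed lower states
    Q = sum(math.exp(-K_CM * E / T) for _, E, _ in lines)
    norm = 1.0 / (hwhm * math.sqrt(math.pi))  # unit-area Gaussian prefactor
    sigma = []
    for nu in grid:
        s = 0.0
        for nu0, E, strength in lines:
            pop = math.exp(-K_CM * E / T) / Q     # Boltzmann population
            s += strength * pop * norm * math.exp(-((nu - nu0) / hwhm) ** 2)
        sigma.append(s)
    return sigma

# Two made-up lines: one from the ground state, one from an excited state
lines = [(1000.0, 0.0, 1.0), (1005.0, 500.0, 1.0)]
grid = [1000.0 + 0.05 * i for i in range(200)]
cold = cross_section(lines, grid, T=100.0)
hot = cross_section(lines, grid, T=2000.0)
# At low T only the ground-state line absorbs; heating populates the second.
```

The real computation applies pressure-dependent (Voigt) profiles to billions of such lines, which is what makes the task data-intensive.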

Supported by the DiRAC high-performance facilities, we organised and ran the workshop “Digital Exoplanets” in Prague in January 2019 (www.digitalexoplanets.org). The workshop gathered over 50 computational exoplanetary scientists (more than half of them early-career researchers) and focused on the data for atmospheric models of exoplanets and cool stars. The main theme of the workshop was open access to data and computer codes. This was the first workshop of its kind, focusing not only on observations and theory but mostly on ‘how to actually run these codes’. The participants had the opportunity to learn how to use (install, run, debug etc.) different mainstream atmospheric and spectroscopic codes during the hands-on sessions, which were the central part of the workshop. These practical sessions were run on resources provided by DiRAC, with the help of the DiRAC team, which is greatly appreciated.


UK MHD Consortium: Solar and Planetary Interiors

One of the great challenges in the study of planetary interiors is to understand the generation of magnetic fields in the interiors of the giant gas planets. Wieland Dietrich and Chris Jones from the University of Leeds have been addressing this problem through high-resolution magnetohydrodynamic models, with the novel feature of incorporating realistic radially-dependent profiles for the electrical conductivity (Icarus 305 (2018) 15-32). This allows the study of a range of models in which the conductivity ranges from Jupiter-like, with only a thin outer non-conducting shell, to Saturn-like, with a thick non-conducting shell. They find Jupiter-like steady dipolar fields, quadrupolar dynamos for profiles intermediate between those of Jupiter and Saturn, and dipolar fields again for Saturn-like profiles. The non-axisymmetric components of the Saturn model are, though, more pronounced than in Saturn itself, prompting the suggestion that a stably stratified region – postulated to exist in Saturn, but not yet incorporated in the model – may be of significance.



Figure 3: The figure shows (from left to right) time-averaged meridional sections of the azimuthal flow, meridional stream function, kinetic helicity, radial and azimuthal magnetic fields; the right hand plots are typical snapshots of the radial field and azimuthal flow at the surface. The top row is for the case of uniform electrical conductivity; the bottom row is for a Jupiter-like profile, in which the electrical conductivity drops off sharply at 90% of the planetary radius.

Leicester Highlight: Planet-driven warps in protoplanetary discs

Since their discovery in the 1990s our knowledge of extra-solar planets has increased at a staggering rate, and we now know that most, perhaps all, stars host planetary systems. These planets form in cold discs of dust and gas around young stars, and in recent years our observations of these protoplanetary discs have also advanced dramatically. We typically treat planet-forming discs as flat, smooth structures, but recent observations have revealed a wealth of sub-structures in these discs, including spiral arms, gaps, misalignments and warps. While we can explain the origin of some of these features, many remain open questions.

One of the best-studied protoplanetary discs is around the nearby star TW Hya, and this almost face-on disc shows evidence of gaps (possibly due to planets) as well as a localised dark region in the outer disc. Observations with the Hubble Space Telescope found that this dark region is moving around the outer disc, but much more quickly than gas orbits at that radius. This strongly suggests that the dark region is actually a shadow cast by something in the inner disc (where the gas orbits faster). It was interpreted as evidence of a disc warp, such that the inner part of the disc is inclined with respect to the outer disc; the relative misalignment between the inner and outer disc then results in a shadow being cast on the outer disc. Such a configuration could arise if a misaligned planet is present in this system, with the planet’s orbit tilted away from the mid-plane of the disc. The motion of the inner, tilted disc, and hence the shadow, would thus be governed by the gravity of this misaligned planet, and it might be possible to infer properties of the planet from the observed shadow.

Figure 4: 3-D numerical simulation of an inclined planet in a protoplanetary disc, performed on the DiRAC Data Intensive @ Leicester system. Our simulations showed that an inclined giant planet can create an observable warp in the disc. (Figure adapted from Nealon et al. 2018).

Simulating such a configuration is technically challenging, because we need to resolve the out-of-plane interactions between the disc and the planet, as well as including the influence of the outer disc. In practice this requires high resolution 3-D hydrodynamic simulations covering a large dynamic range. Using DiRAC, we were able to run a suite of simulations to study the interaction between misaligned planets and their parent discs in detail, investigating the importance of different planet masses and inclinations on the subsequent disc structure. For planets that were massive enough to carve the disc into an inner disc and an outer disc (as seen in Fig.1), we confirmed it was possible to drive the configuration predicted from the observations of TW Hya. Importantly, the large spatial scales we were able to simulate allowed us to show that this was possible even for small planet inclinations. We subsequently ran detailed radiative transfer simulations to calculate how our simulated discs would appear to our telescopes, and confirmed that such a configuration is broadly consistent with what is observed in TW Hya. New observations are finding that misalignments of this type are common in planet-forming discs, and in the coming years DiRAC will allow us to understand the dynamics and evolution of these nascent planetary systems in detail.


COSMOS Consortium

The COSMOS consortium (dp002) continues to exploit DiRAC HPC Facilities and has made progress towards ambitious science goals in extreme gravity, inflation and the early universe, and the study of the cosmic microwave sky and large-scale structure (including the Final Data Release for the ESA Planck satellite). Here, we highlight recent breakthroughs in general relativistic simulations of black holes and inflation using GRChombo, a flexible AMR numerical relativity code publicly released by COSMOS consortium members in early 2018 (see grchombo.org and github.com/GRChombo).

The groundbreaking discovery of a gravitational signal from a binary black hole merger by LIGO in 2016 has now been followed by an exciting series of detections, including that of the first neutron-star binary merger in 2017. With the current LIGO/Virgo O3 science run, with upgraded detectors, detecting multiple events weekly, the new era of gravitational-wave astronomy is well underway.

The GRChombo AMR code, originally developed and now anchored by COSMOS consortium members, is well-placed to exploit these new observations of gravitational waves (GWs). We have continued our long-standing industrial collaboration through the COSMOS Intel Parallel Computing Centre (IPCC) to modernize our research codes in cosmology and general relativity, focusing on the optimization of GRChombo for multi/many-core systems; for example, GRChombo is one of the few codes that can take full advantage of the many-core Xeon Phi systems. In addition, GRChombo continues to grow in functional capabilities, with the addition of a powerful MultiGrid initial conditions solver and the pioneering many-core OSPRay realtime ray tracer and volume renderer with in-situ capabilities (supported by the Intel Viz team).

The advent of GW astronomy has opened a window onto one of the key questions in modern cosmology: the nature of dark matter. COSMOS investigators are taking a leading role in this search. One of the prime candidates for dark matter is an ultralight field, sometimes called an “axion-like particle” or ALP. In recent papers (notably arXiv:1609.04724 and arXiv:1806.09367), we have investigated the formation of highly compact “axion dark stars” from these light fields, and the potential GW signals created by their collisions with neutron stars and black holes (arXiv:1808.04668, shown in the figure below) and with other axion stars (arXiv:1802.06733). In arXiv:1904.12783, we showed that these fields can also interact directly with black holes to grow “scalar hair”, leading to a possibly detectable dephasing of the GW signals from BH binary mergers.

A volume rendering of the bosonic energy density (blue) and the BH conformal factor (grey).

Beyond GW astronomy, COSMOS consortium investigators are also leading the numerical study of fundamental questions in physics. For example, inflation is now the paradigmatic theory of the early universe, providing a dynamical mechanism to explain why the universe is homogeneous over cosmological scales and a source for the seeds of large-scale structure. Nevertheless, can inflation begin in the first place if the initial conditions are not homogeneous? In recent work (e.g. arXiv:1712.07352), we classified the models of inflation that are not robust to inhomogeneous initial conditions. Intriguingly, we found that low-scale inflation – preferred by many string theory phenomenological models – is not robust, failing to inflate in the presence of large-scale but subdominant inhomogeneities.

In addition, we continue to take the lead in investigating the nature of gravity in higher dimensions. Following our work showing the failure of weak cosmic censorship in 5D due to black ring instabilities, we continue to study the end states of higher-dimensional rotating (Myers-Perry) black holes. We find that, similarly to the black ring systems, these black holes exhibit the Gregory-Laflamme instability, leading to a fractal instability whose ultimate end state is likely to exhibit naked singularities in an asymptotically flat spacetime (shown above). Given these results in extra dimensions, it is clear that, if cosmic censorship is indeed true, then it must be a property specific to the three-dimensional space in which we live; without this essential property, Einstein’s theory of general relativity could lose its predictive power.


Modelling Planetary Collisions with the SWIFT code

We have produced the highest-resolution simulations of the giant impact that is thought to have knocked Uranus’ spin axis into the plane of the solar system. The two-orders-of-magnitude increase in resolution was enabled by our use of the SWIFT hydrodynamics and gravity code on the DiRAC facility in Durham. We incorporated equations of state for various different materials into SWIFT, which was primarily designed for cosmological applications. The high-resolution simulation of this planetary giant impact shows features that are entirely absent from previous smaller calculations. For instance, after the initial impact, the subsequent sloshing around of the system can be sufficiently vigorous to unbind some of the icy mantle from the proto-Uranus – an effect that was not seen in earlier million-particle simulations. Determining what material was placed into orbit as a result of this impact is crucial for deciding whether the currently observed Uranian satellites could have formed from this debris. Again, the higher-resolution simulations now allow us to make reliable predictions for this. These represent the first scientific results with the SWIFT code, and demonstrate how it will open up new ways to constrain models of planetary giant impacts relevant to most bodies in the solar system.


A mid-collision snapshot of a grazing impact with 10^8 SPH particles, coloured by their material and internal energy, showing some of the detailed evolution and mixing that can now be resolved. In the left panel, light and dark grey show the target’s ice and rock material, respectively, and purple and brown show the same for the impactor. Light blue is the target’s H-He atmosphere.

This calculation used a Dell 940 system with four Intel Skylake processors and 112 cores. This system was provided by Intel and Dell as a proof-of-concept system as part of the Dell/Intel and DiRAC@Durham industrial engagement, at a list price of approximately £100k.


The FABLE simulations of galaxies, groups and clusters

Clusters of galaxies collapse from the densest regions of the early Universe to become the largest objects in existence by the present day. Their unparalleled sizes make them powerful tools to probe the growth of large-scale structure and further our understanding of cosmology. At the same time, clusters represent unique astrophysical playgrounds in which galaxies interact with each other and with the intracluster gas, which itself is churned up by the powerful energy output of supermassive black holes. This process, known as AGN feedback, stems from black holes at the centres of galaxies swallowing gas, converting it into outflows of energy and impacting not only their galaxy, but also the surrounding intracluster medium millions of light-years away.

The inclusion of AGN feedback in cosmological hydrodynamical simulations of galaxy formation has proven to be a key ingredient for producing realistic galaxies in a representative region of the universe. Projects such as the Illustris simulation have met with great success in this area and thus have made great strides in our understanding of galaxy formation and evolution, as well as guiding observations. Yet whilst this model matches a wide range of observed galaxy properties, it struggles to reproduce other important properties of galaxy clusters and groups such as the contents of the intracluster medium.

Figure 1: The main panel shows the gas distribution of one of the largest FABLE clusters, revealing an array of interconnecting filaments and infalling galaxies and groups. Panel 1 shows an observed spiral galaxy with an enormous tail of gas shining in X-rays, and panel 2 shows a similar view for one of our simulated galaxies. This gas has been stripped from the galaxy as it falls into the dense cluster environment. Panel 3 highlights a smaller group of galaxies of various shapes and sizes, while panel 4 shows the structure of AREPO’s moving hydrodynamical mesh through a slice of the central galaxy.

With the FABLE project we have built upon the success of Illustris by improving the models for feedback from AGN and stars so that we can simultaneously produce realistic galaxies, groups and clusters, thereby completing our picture of the Universe across this vast range of scales. Consisting of high-resolution simulations of more than thirty groups and clusters performed with the state-of-the-art moving-mesh code AREPO, FABLE reproduces a number of important observables such as the build-up of stars in galaxies and the thermodynamic properties of the intracluster medium.

FABLE further reproduces a range of integrated cluster properties and their interdependencies as represented via the oft-studied cluster scaling relations. These relations are vital for the cosmological interpretation of observed cluster data, the availability of which is set to be greatly expanded by numerous ongoing and future surveys, for example with observations of the Sunyaev-Zel’dovich effect via SPT-3G and ACTpol or in X-rays with the eROSITA and Athena observatories. Yet as larger cluster surveys beat down the statistical uncertainties, we are increasingly limited by our incomplete understanding of cluster physics and its impact on the cluster scaling relations, which are required to link cluster observables to their fundamental masses.

Figure 2: The evolution of the FABLE cluster shown in the first figure, starting from just 2.2 billion years after the Big Bang (left) through to the onset of a major merger between two smaller clusters 5.7 billion years later (right) that will eventually form one of the largest possible gravitationally-bound objects in the simulated (and real) universe.

Cosmological hydrodynamical simulations are in a unique position to quantify these effects and thereby facilitate the use of clusters as precise cosmological probes. Indeed, careful comparison of our results with observed scaling relations hints towards a non-negligible bias in observational studies that attempt to measure the masses of clusters from their X-ray emission under simplifying assumptions about the clusters’ dynamical state.

Furthermore, the increased size and depth of future surveys means that many new clusters will lie at vast distances from which their detectable emission may have taken many billions of years to reach us. This means it is vital to understand how the cluster scaling relations change over time so that they can be applied reliably to the most distant objects in upcoming datasets. While current observations are severely limited in their ability to constrain such evolution, simulations allow us to study the scaling relations for a well-defined sample of objects over the whole history of the Universe. Upcoming results from FABLE demonstrate a significant departure of the evolution of the relations from the simple model typically assumed in observational studies, which has important implications for interpreting the incoming wealth of new cluster data.


Fast and energetic AGN-driven outflows in simulated dwarf galaxies

Dwarf galaxies, the least massive galaxies in the Universe, are the smallest building blocks of the cosmic structure formation process. They are interesting cosmological probes because a number of apparent discrepancies arise when observations of dwarf galaxies are compared with the predictions of structure formation models. Examples include the large disparity between the number of observed dwarfs and the predicted number of dark matter haloes that could host such systems (the 'missing satellite' problem), and the lack of observed massive dwarf galaxies (the 'too-big-to-fail' problem). Some have suggested that these issues could be resolved by changing our models of dark matter, which provides the skeleton for cosmic structure formation. Others argue that the focus should be on improving our models of the luminous, 'baryonic' matter that we can observe directly.


One of the most important baryonic processes, reionization, stems from the radiation produced by the first galaxies in the Universe. This radiation shuts down star formation in the low-mass dwarf galaxies, rendering them undetectable. For the more massive dwarf galaxies it has been suggested that violent stellar explosions, supernovae (SNe), could shut down star formation; however, this remains controversial. Most simulations indicate that another ingredient is needed for theoretical models to explain the sparsity of massive dwarf galaxies in our Universe.

Figure 1: Edge-on projections of all the simulation runs at t = 300 Myr. The first row shows surface densities, the second row shows temperatures, the third row shows vertical velocities, and the fourth row shows metallicities. The runs with low star formation efficiency are depicted on the left hand side, and the high star formation efficiency runs on the right hand side. With only SN feedback, the outflows are slow, warm, and high-density. With additional AGN feedback, the outflows have an additional hot, low-density component moving at high velocities.

Recently, it has been proposed that active galactic nuclei (AGN) – actively-accreting massive black holes – might be this missing ingredient. Previously, theorists did not include AGN in their simulations of dwarf galaxies, as AGN had only been observed in more massive galaxies. However, systematic analysis of optical surveys has revealed a population of dwarf galaxies hosting AGN, since confirmed by X-ray follow-up observations. All of these AGN are extremely luminous relative to their black hole mass and are likely only the tip of the iceberg. The MaNGA survey has also provided first evidence of direct AGN feedback, identifying six dwarf galaxies that appear to have an AGN that is preventing on-going star formation. These observational results are very exciting as they suggest that AGN may be able to solve the too-big-to-fail problem.


It is therefore timely to study the physical properties of dwarf galaxies, in particular whether the presence of an AGN can affect their evolution. Using the moving mesh code arepo, we have investigated different models of AGN activity, ranging from simple energy-driven spherical winds to collimated, mass-loaded, bipolar outflows in high resolution simulations of isolated dwarf galaxies hosting an active black hole. Our simulations also include a novel implementation of star formation and mechanical SN feedback.

Figure 2: Comparison between the line-of-sight velocity maps for stars and ionized gas for a galaxy without observed AGN activity (left panel) and a galaxy with AGN activity (right panel). With AGN activity, the ionized gas is visibly offset from the stellar component.

Figure 1 gives an overview of our simulation suite, showing edge-on projections of surface density, temperature, vertical velocity and metallicity at t = 300 Myr. The control runs with neither SN nor AGN feedback ('NoFeedback' runs) do not produce any outflows. With added SN feedback, we obtain relatively cold and slow-moving outflows. When we also include AGN activity, the outflows are much faster as well as much more energetic, reaching temperatures of up to 10^9 K.

These results provide tantalizing evidence that the observed correlation between large kinematic offsets and AGN activity may be due to high-velocity AGN-driven outflows (see Figure 2 for an example of the different line-of-sight velocity maps observed with MaNGA). In our simulations, the same correlation arises because only the AGN-driven gas outflows are fast enough to alter the velocity patterns set by the rotational motion of the galaxy. The relatively slow-moving supernova-driven outflows, on the other hand, do not offset the rotational motion of the gas, which therefore stays aligned with the stars (see Figure 3, left panel). This results in a significant difference between the kinematic position angles (PAs) of stars and gas for dwarf galaxies hosting AGN (see Figure 3, right panel).

Figure 3: Left panel: Comparison between the 2D velocity maps of the stars and gas in the simulations at t = 150 Myr. The colour coding of the pixels indicates the surface density. The spatial coverage of the primary MaNGA sample of 1.5 R_eff is indicated by a white circle. With added AGN feedback, the central circular motion is significantly altered by the outflows so that the gas would be seen as kinematically offset. With SN feedback alone, the circular motion remains dominant and we do not observe this feature. Right panel: Comparison between the line-of-sight velocity maps for stars and ionized gas at t = 150 Myr. Only gas and stars within 1.5 R_eff are included (in analogy to MaNGA), and the spatial resolution is matched to the MaNGA resolution at the mean redshift of the primary sample. For the AGN runs, the ionized gas is visibly offset from the stellar component, just like in the observed MaNGA galaxies.
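The stellar-versus-gas misalignment can be illustrated by estimating each map's kinematic PA from the direction of its line-of-sight velocity gradient. The sketch below is a toy estimator on synthetic maps (the grid, velocity amplitudes and outflow term are all invented for illustration; the actual analysis fits full kinematic PAs in the MaNGA style):

```python
import math

def kinematic_pa(vmap):
    """Least-squares fit of v(x, y) ~ gx*x + gy*y; returns the direction
    of the velocity gradient in degrees. Real kinematic-PA fitting (e.g.
    for MaNGA data) is more involved; this is a toy estimator."""
    sxx = sxy = syy = svx = svy = 0.0
    for (x, y), v in vmap.items():
        sxx += x * x
        sxy += x * y
        syy += y * y
        svx += v * x
        svy += v * y
    det = sxx * syy - sxy * sxy
    gx = (syy * svx - sxy * svy) / det
    gy = (sxx * svy - sxy * svx) / det
    return math.degrees(math.atan2(gy, gx)) % 360.0

# Toy maps: the stellar disc rotates with its gradient along x; a fast
# outflow adds a y-gradient to the gas, mimicking the AGN-host case.
grid = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]
stars = {p: 10.0 * p[0] for p in grid}
gas = {p: 10.0 * p[0] + 25.0 * p[1] for p in grid}
pa_offset = abs(kinematic_pa(gas) - kinematic_pa(stars))
pa_offset = min(pa_offset, 360.0 - pa_offset)
```

A slow SN-driven outflow would add only a small y-term, leaving the PA offset near zero; the fast outflow here rotates the gas gradient well away from the stellar one.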

We also investigated the effects of these AGN-driven outflows on star formation. AGN activity has a small but systematic effect on the central star formation rates (SFRs) in all set-ups explored, while substantial effects on the global SFR are only obtained with strong SNe and a sustained high-luminosity AGN with an isotropic wind. Our findings from the isolated galaxy set-up therefore indicate that AGN are unlikely to directly affect global dwarf SFRs. However, in realistic cosmological environments inflows are known to be important, especially for high-redshift dwarfs. It is hence possible that AGN-boosted outflows may prevent some of this cosmic 'pristine' gas from reaching the dwarfs in the first place, providing a mechanism for indirect star formation regulation. This possibility will be addressed in a follow-up study using cosmological zoom-in simulations.


Horizon: Studying the origin and amplification of magnetic fields in galaxies

Magnetic fields permeate the Universe from cosmological to molecular cloud scales, but neither their origin nor the mechanism by which they grow in amplitude is pinned down. We have used DiRAC to make headway on this on two fronts. Firstly, in Martin-Alvarez et al. (2018, MNRAS, 479, 3348) we sought to understand whether hierarchical galaxy growth could amplify extremely weak (~10^-19 Gauss) primordial magnetic fields to micro-Gauss levels in present-day galaxies. For this purpose, we ran a suite of constrained-transport magnetohydrodynamical adaptive mesh refinement cosmological zoom simulations with different stellar feedback prescriptions. We found that cosmological accretion at high redshift excited a galactic turbulent dynamo, which amplified the magnetic field even in the absence of stellar feedback. Our convergence tests are the first in the literature to demonstrate the capture of the elusive turbulent dynamo in cosmological simulations of galaxies.


Figure: The magnetic energy density at redshift 4 in a (150 kpc)^3 piece of a cosmological volume. Using a new tracer algorithm (Katz et al. 2019), we have color-coded the magnetic energy density by origin, where green corresponds to primordial magnetic fields, red denotes fields injected by supernovae and blue indicates the interaction of the two. Primordial magnetic fields dominate the volume of the simulation, whereas supernovae-injected magnetic fields are associated with halos (Katz et al. 2019), where they may be amplified by a turbulent dynamo (Martin-Alvarez et al. 2018).

Secondly, in Katz et al. (2019, MNRAS, 484, 2620), we presented an algorithm to trace magnetic fields produced by different sources, ranging from primordial fields generated, for example, by inflation to magnetic fields injected by supernovae and black holes. The figure above displays the magnetic energy density, color-coded by origin (green corresponds to primordial magnetic fields, red denotes fields injected by supernovae, and blue indicates the interaction of the two), in one of our simulation outputs of a cosmological volume 150 kpc on a side at redshift 4. With this algorithm, we will be able to methodically explore the contributions of different sources to the magnetization of the Universe.
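The three-colour decomposition in the figure reflects simple bookkeeping on the magnetic energy: with B = B_prim + B_SN, |B|^2 splits into a primordial term, an SN term, and a cross ("interaction") term. A minimal sketch of that decomposition (the field values are illustrative; this is not the tracer algorithm itself, which follows the fields through the simulation):

```python
def energy_decomposition(b_prim, b_sn):
    """Split |B_prim + B_SN|^2 into primordial, SN and cross terms.
    (A physical energy density would carry a further 1/8pi factor.)"""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    e_prim = dot(b_prim, b_prim)
    e_sn = dot(b_sn, b_sn)
    e_cross = 2.0 * dot(b_prim, b_sn)  # the 'interaction' (blue) term
    b_tot = [p + s for p, s in zip(b_prim, b_sn)]
    # The three terms reconstruct the total energy exactly:
    assert abs(dot(b_tot, b_tot) - (e_prim + e_sn + e_cross)) < 1e-12
    return e_prim, e_sn, e_cross

# Illustrative cell: a weak primordial field plus an SN-injected field
ep, es, ec = energy_decomposition([1e-3, 0.0, 0.0], [2e-3, 1e-3, 0.0])
```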


Turbulence, Shocks and Dissipation in Space Plasmas

In situ observations from solar and heliospheric spacecraft missions measure power spectra of the solar wind plasma and electromagnetic field fluctuations which show a power-law behaviour over several decades in frequency. This is interpreted as the manifestation of a turbulent cascade of energy from large fluid scales to small kinetic scales, characteristic of the motion of the ions and the electrons. Direct numerical simulations of turbulent plasmas are not only important for interpreting the nature of the solar wind fluctuations at large scales, but also a fundamental tool to understand how energy is channelled to protons and electrons. In particular, the hybrid particle-in-cell model, which treats the ions as particles and the electrons as a charge-neutralizing massless fluid, has proved to be the most appropriate and efficient method to follow the turbulent cascade down to sub-ion scales, retaining ion kinetic effects [1].
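The observed spectra are commonly described by a broken power law: a Kolmogorov-like f^(-5/3) range at fluid scales, steepening below the ion break (slopes around -2.8 are often reported for magnetic spectra at sub-ion scales). A minimal sketch of such a two-slope model; the break frequency, slopes and normalisation are illustrative, not a fit to any dataset:

```python
def broken_power_law(f, f_break=0.3, slope_lo=-5.0 / 3.0, slope_hi=-2.8, p0=1.0):
    """Two-slope spectral model P(f), continuous at the break frequency.
    All parameter values here are illustrative placeholders."""
    if f <= f_break:
        return p0 * (f / f_break) ** slope_lo  # fluid-scale cascade
    return p0 * (f / f_break) ** slope_hi      # kinetic-scale cascade

# Below the break the spectrum rises with the shallow slope; above it,
# it falls away with the steep sub-ion slope.
shallow = broken_power_law(0.03) / broken_power_law(0.3)
steep = broken_power_law(3.0) / broken_power_law(0.3)
```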



Figure 1. Left panel: Pseudocolor plot of the power of the perpendicular (with respect to the mean field) magnetic fluctuations when turbulence is fully developed in a 2D 4096×4096 simulation. Middle panel: The same as in the left panel, but for a 3D 512×512×512 simulation. Right panel: Omnidirectional 1D power spectra of magnetic field B, electric field E, ion bulk velocity u, parallel magnetic field Bz, and ion density n, from the 3D simulation. Power laws with different slopes represent theoretical predictions and/or observational results, as reference. In the insets, a direct comparison of 2D and 3D simulations shows remarkable agreement.

We have used DiRAC resources to perform a collection of state-of-the-art two-dimensional (2D, Fig. 1 left panel) and three-dimensional (3D, Fig. 1 middle) hybrid simulations of plasma turbulence, investigating the processes responsible for coronal heating and solar wind heating and acceleration. We have analysed, among other things, the spectral properties (Fig. 1, right), the spectral anisotropy [2], the intermittency properties, and the nature of fluctuations at kinetic scales. We also investigated the interplay between plasma turbulence and the proton firehose instabilities in a slowly expanding plasma, performing the first 3D hybrid simulation employing the expanding-box model, to take into account the solar wind expansion [3].

Our numerical results are being used to interpret in situ observations in the Earth’s magnetosheath and in the solar wind from the Magnetospheric Multiscale (MMS) mission [4], as well as to predict upcoming results from Parker Solar Probe (PSP) and Solar Orbiter.

More recently, we have produced high-resolution high-cadence 2D and 3D data exploring the conditions encountered by PSP on its first perihelion. Analysis of the results is ongoing and it will provide a large amount of information on many different aspects of plasma turbulence, to be directly compared with present and upcoming spacecraft observations.

[1] Franci et al., Solar Wind Turbulent Cascade from MHD to Sub-ion Scales: Large-size 3D Hybrid Particle-in-cell Simulations, ApJ 853(1):26, 2018
[2] Landi et al., Spectral anisotropies and intermittency of plasma turbulence at ion kinetic scales, submitted to ApJL, 2019
[3] Hellinger et al., Turbulence vs. fire hose instabilities: 3-D hybrid expanding box simulations, submitted to ApJ, 2019
[4] Franci et al., Predicting and interpreting data of solar wind turbulence with numerical modelling: hybrid simulations vs. MMS observations, in preparation, 2019


Longest ever simulation of a planet in a protoplanetary disc

Ragusa et al. (2018, MNRAS, 474, 460) analysed the longest ever simulation of a planet in a protoplanetary disc, using the FARGO-3D code (Benítez-Llambay & Masset 2016) to follow a planet's evolution over a record 300,000 orbits. During this time the planet's eccentricity (blue in the figure below) underwent a complex evolutionary variation as it exchanged angular momentum with the disc. The simulations demonstrate that a relatively small (factor of three) change in the disc mass has a major effect on the final eccentricity of the planet; the complex evolutionary pattern can be understood as a superposition of precessing eccentric modes of the planet + disc system. These findings are relevant to explaining the significant eccentricity of the first hot Jupiter observed in a protoplanetary disc.
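The "superposition of precessing eccentric modes" produces a beating eccentricity: the planet's complex eccentricity is a sum of modes precessing at different rates, and their interference modulates |e| between the sum and the difference of the mode amplitudes. A minimal sketch (the mode amplitudes and precession rates below are illustrative, not the values fitted by Ragusa et al.):

```python
import cmath

def planet_eccentricity(t, modes=((0.10, 1.0), (0.06, 1.3))):
    """|e(t)| for a superposition of precessing eccentric modes given as
    (amplitude, precession rate) pairs; the values are illustrative."""
    return abs(sum(a * cmath.exp(1j * g * t) for a, g in modes))

# Sampling several beat periods: |e| oscillates between |A1 - A2| and
# A1 + A2, just as the planet's eccentricity waxes and wanes in Fig. 1.
samples = [planet_eccentricity(0.05 * k) for k in range(2000)]
e_min, e_max = min(samples), max(samples)
```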

Figure 1: Eccentricity e as a function of time for the light (left-hand panel) and massive (right-hand panel) cases. The blue curve shows the planet eccentricity, the green curve the disc eccentricity at R = 4.7 in the light case and at R = 5 (azimuthal averages) in the massive one, while the red curve is a global measurement of the disc eccentricity derived from the angular momentum deficit (AMD).

Rosotti & Juhasz (2018, MNRAS, 474, L32) proposed a novel way to probe the vertical temperature profile of protoplanetary discs by measuring the pitch angle of spiral structure in both the infrared and the submm. Their hydrodynamic simulations of a planet-induced spiral in a disc, using the PLUTO code (Mignone et al. 2007), were post-processed with the radiative transfer code RADMC-3D and demonstrated that spiral structure is significantly more tightly wound in the submm than in the infrared. This finding is directly linked to the fact that submm emission traces the cold mid-plane regions of the disc. It means that multi-wavelength observations of spiral structure in real discs can be used as a 'dynamical thermometer' of disc mid-plane regions.

Winter et al. (2018, MNRAS, 475, 552) presented long-timescale Smoothed Particle Hydrodynamics simulations, using the GANDALF code (co-developed in house: Hubber et al. 2018, MNRAS, 473, 1603), of a remarkable young stellar system whose widely spaced components are associated with extremely extended far-infrared emission. These simulations demonstrated that the morphology of the emission and the kinematics of the stellar components can be explained through a dynamical history involving a previous star-disc encounter. This successful reproduction of an observed system provides some of the first evidence that young stars indeed form dynamically complex few-body systems, as suggested by star formation simulations.


Galactic Archaeology in the Gaia Era

On 25th April 2018, ESA's Gaia mission made its second data release (Gaia DR2). Inspired by our N-body simulations of a Milky Way-like disk in this and our previous projects, Kawata et al. (2014, MNRAS, 443, 2757) and Kawata et al. (2018b, MNRAS, 473, 867), we studied the rotational and vertical velocity structure of the Galactic disk as a function of Galactocentric radius, across the radial range 5 < Rgal < 12 kpc, using the Gaia DR2 data. We found many diagonal ridge features (Fig. 1), as well as radial wave-like oscillations of the peak of the vertical distribution.



Figure 1: Normalized distribution of the rotation velocity for the selected Gaia DR2 stars with RVS radial velocity data (upper) and all our selected Gaia DR2 stars (lower) as a function of Galactocentric radius. The vertical dashed lines show the positions of the Scutum, Sagittarius, Local, and Perseus spiral arms from left to right, calculated from Reid et al. (2014, ApJ, 783, 130). The vertical solid line is the assumed solar radius. The white lines highlight the identified ridge features.

We noticed that the diagonal ridge features found in Gaia DR2 look qualitatively similar to what we see in our N-body/SPH simulations (an example is shown in Fig. 2). The simulation shows transient spiral arms, and we considered that these transient, winding arms act as perturbers in the disk that create the ridge structures. We followed up this idea in Hunt, Hong, Bovy, Kawata & Grand (2019, MNRAS, 481, 3794): using toy models of transient and winding spiral arms, we showed that the rotation velocity ridge features can be explained by such arms.

Figure 2: The rotation velocity vs. the Galactocentric radius for the star particles in the N-body/SPH simulation from Kawata et al. (2014). The particles are selected from a region of the galactic disk similar to that where the Gaia DR2 data in Fig. 1 are analysed.

2017 Highlights

Some of the Research highlights from our users in 2017:

2017 list of publications


 

New Models give insight into the heart of the Rosette Nebula

New research, led by the University of Leeds, offers an explanation for the discrepancy between the size and age of the Rosette Nebula's central cavity and that of its central stars.

The Rosette Nebula (Credit Nick Wright, Keele University)

The Rosette Nebula in the Milky Way Galaxy, roughly 5,000 light-years from Earth, is known for its rose-like shape and distinctive hole at its centre. The nebula is an interstellar cloud of dust, hydrogen, helium and other ionised gases with several massive stars found in a cluster at its heart.

Stellar winds and ionising radiation from these massive stars affect the shape of the giant molecular cloud. But the size and age of the cavity observed in the centre of the Rosette Nebula are too small when compared to the age of its central stars — something that has puzzled astronomers for decades.

Through computer simulations, run in part on DiRAC resources, astronomers at Leeds and Keele University have found that the Nebula likely formed in a thin, sheet-like molecular cloud rather than in a spherical or thick disc-like shape, as some photographs may suggest. A thin disc-like cloud structure, focusing the stellar winds away from the cloud's centre, would account for the comparatively small size of the central cavity.

Study lead author Dr Christopher Wareing, from the School of Physics and Astronomy, said: “The massive stars that make up the Rosette Nebula's central cluster are a few million years old and halfway through their lifecycle. For the length of time their stellar winds would have been flowing, you would expect a central cavity up to ten times bigger.”

A projection of the simulated molecular cloud and nebula along a specific line of sight (and) A slice through the simulation of the Rosette Nebula. (Credit: C. J. Wareing et al.)

“We simulated the stellar wind feedback and formation of the nebula in various molecular cloud models including a clumpy sphere, a thick filamentary disc and a thin disc, all created from the same low density initial atomic cloud.

“It was the thin disc that reproduced the physical appearance – cavity size, shape and magnetic field alignment — of the Nebula, at an age compatible with the central stars and their wind strengths.”
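The expectation of a much larger cavity follows from the classic wind-blown bubble solution (Weaver et al. 1977), in which the shell radius grows as R ∝ (L t^3 / ρ)^(1/5). A rough sketch using the standard textbook coefficient and round illustrative numbers (not the paper's simulation parameters):

```python
def bubble_radius_pc(l36, n, t_myr):
    """Weaver et al. (1977) wind-bubble radius, R ~ (L t^3 / rho)^(1/5):
    roughly 27 pc * l36^(1/5) * n^(-1/5) * t_myr^(3/5), where l36 is the
    wind luminosity in units of 1e36 erg/s and n the ambient hydrogen
    density in cm^-3."""
    return 27.0 * l36 ** 0.2 * n ** -0.2 * t_myr ** 0.6

# The shallow t^(3/5) growth: ten times the age gives only a ~4x larger
# cavity, so a too-small cavity implies a large age discrepancy.
growth = bubble_radius_pc(1.0, 1.0, 20.0) / bubble_radius_pc(1.0, 1.0, 2.0)
```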

More information on this result can be found here, the STFC Press Release can be found here, and Alt-metrics for the paper, indicating it has had a very high level of attention, can be found here.


 

Gravitational Waves 2017

2017 saw a series of impressive discoveries during the LIGO-Virgo second observing run. Early in January we made our third binary-black-hole observation (GW170104). Another black-hole signal arrived later in June (GW170608).

In August came two impressive observations. The first followed the inclusion of the French-Italian Virgo detector in the observing run: on August 14th we made the first binary-black-hole observation involving three detectors. A third detector made it possible to accurately triangulate the position of the source on the sky, a dramatic improvement over previous observations.

Only three days later, on August 17th, came the first observation of gravitational waves from two neutron stars spiralling together. This observation was astounding in several ways. The source was only 40 Mpc away, ten times closer than the breakthrough binary-black-hole merger observed in 2015; that proximity led to the strongest signal we have yet observed. The signal made it possible to provide the strongest support yet for Einstein's general relativity, an independent measurement of the Hubble constant, and the first strong-field constraints on the equation of state of neutron stars.
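The Hubble-constant measurement works because a binary inspiral is a "standard siren": the waveform itself yields the luminosity distance, and the host galaxy's recession velocity then gives H0 ≈ v/d. A back-of-envelope sketch with round numbers in the spirit of this event (the published analysis is a full Bayesian treatment including peculiar velocities and the distance-inclination degeneracy):

```python
def hubble_constant(v_rec_km_s, d_mpc):
    """Leading-order standard-siren estimate H0 = v / d in km/s/Mpc,
    ignoring peculiar-velocity and inclination corrections."""
    return v_rec_km_s / d_mpc

# Round illustrative numbers: a recession velocity of ~3000 km/s at the
# ~40 Mpc distance inferred from the gravitational waveform.
h0 = hubble_constant(3000.0, 40.0)
```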

Beyond that, the results of the merger – a gamma-ray burst and afterglow – were observed by many EM telescope teams. This marks the beginning of multi-messenger astronomy, and the implications for astronomy and astrophysics are still being understood.

Fig. 1: Time-frequency map of two neutron stars spiralling together.

The theoretical models of binary-black-hole signals developed with the aid of numerical simulations performed on DiRAC were an integral part of all of these GW measurements. In the weeks following the first binary-neutron-star observation, these models were extended to include neutron-star tidal terms. In preparation for the next observing run, due to begin in early 2019, they have also been extended with an approximate model of subdominant harmonics, which is now being more precisely tuned to numerical simulations.


 


Shining Light on the Structure of Hadrons

Understanding matter at its deepest level is complicated by the fact that the fundamental constituents known as quarks are hidden from us. Instead of being available for experimental scrutiny, as electrons are, quarks are always bound together into particles called hadrons. We have to infer the properties of quarks, and those of the strong interaction that binds them, from experimental studies of hadrons. A window into the hadron is provided by striking it with a photon, the quantum carrier of the electromagnetic interaction. The photon can penetrate the hadron to interact with the quarks inside, but the bound-state nature of the hadron is revealed by the fact that the struck quark must redistribute the photon's momentum among its fellow constituents for the hadron to change direction, as momentum conservation requires (see Fig. 1). The hadron's electromagnetic form factor, F(Q^2), parameterises this bound-state behaviour as a function of the squared 3-momentum transferred from the photon. New experiments at Jefferson Lab in the USA will shortly be taking data to determine the electromagnetic form factors of hadrons called pi and K mesons up to much larger values of Q^2 than has been possible before. The aim is to reach a regime in Q^2 where the strong interaction becomes relatively weak and F(Q^2) is expected to have simple behaviour from QCD, the theory of the strong interaction.

Fig 1. When a pi meson interacts with a photon (wavy blue line), momentum must be redistributed via gluon exchange (curly blue line).

However, it is not at all clear where this regime begins. Hadron electromagnetic form factors can also be calculated using the numerical techniques of lattice QCD. The HPQCD collaboration has recently been able to reach Q^2 = 7 GeV^2 in the determination of F(Q^2) for a 'pseudopion', a particle like the pi meson but made of strange quarks instead of up and down quarks.

Fig. 2. Lattice QCD results from arXiv:1701.04250 compared to a simple pole model at low Q^2 and leading-order perturbative QCD at high Q^2.

Using strange quarks improves numerical speed and statistical accuracy, while still allowing comparison with high-Q^2 expectations. We find Q^2 F(Q^2) to be flat in Q^2 but in clear disagreement with the simple high-Q^2 result from perturbative QCD (see Fig. 2).
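The flatness of Q^2 F(Q^2) is exactly what a simple pole model predicts: if F(Q^2) = 1 / (1 + Q^2/M^2), then Q^2 F(Q^2) rises toward the constant M^2 at large Q^2. A minimal sketch of that behaviour (the pole mass here is an illustrative placeholder, not HPQCD's fitted value, and leading-order perturbative QCD would predict a different asymptotic normalisation):

```python
def pole_form_factor(q2, m_pole=1.0):
    """Simple vector-meson pole model F(Q^2) = 1 / (1 + Q^2 / M^2);
    m_pole (GeV) is an illustrative placeholder."""
    return 1.0 / (1.0 + q2 / m_pole ** 2)

# Q^2 F(Q^2) rises toward M^2 and flattens at large Q^2 -- the shape
# of the behaviour seen in Fig. 2 (values in GeV^2).
q2f = [q2 * pole_form_factor(q2) for q2 in (1.0, 4.0, 7.0)]
```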

Calculations of F(Q^2) for the pi and K mesons are underway to make direct predictions for the JLab experiments. Darwin, and now CSD3, at Cambridge have proved ideal for this work since we can store intermediate quark propagators for re-use at multiple Q^2 values.


 

Stellar Structure and Nucleosynthesis

Movie showing the evolution of the velocity magnitude in a 1024^3 resolution run of a carbon-burning shell (video credit: Dr Raphael Hirschi). The movie shows the turbulence in the central convective zone and gravity waves in the stable regions above and below. In addition, it nicely shows the Kelvin-Helmholtz (shear) instability developing at the top boundary due to the fast horizontal motion of the convective flow as it is deflected and decelerated by the boundary. Results from Cristini et al. 2016, in prep. Very high resolution movie of the C-shell.


 

COSMOS Consortium: Flashes of light on dark matter


Figure 1: The “cosmic web”; image credit Matteo Viel.

Simulations of the dense cosmic forest in intergalactic space illuminated by distant quasars have revealed new insights into the properties of dark matter, recently published in Physical Review Letters. The study further supports the theory of Cold Dark Matter, which is composed of particles that move very slowly, while for the first time highlighting its incompatibility with another model, Fuzzy Dark Matter (FDM), in which the dark matter particles have larger velocities.

To study these dark matter properties, COSMOS Consortium investigators analysed the interaction of the “cosmic web” – a network of filaments made up of gas and dark matter present throughout the Universe – with the light coming from very distant quasars and galaxies. Photons interacting with the hydrogen of the cosmic filaments create many absorption lines known as the “Lyman-alpha forest”. This atomic interaction is used to study the properties of the cosmos, and of dark matter, at enormous distances. On the basis of the results obtained, the investigators were able to infer some of the characteristics of the particles that compose dark matter. In particular, the evidence showed for the first time that the particle mass required by the FDM model is not consistent with the Lyman-alpha forest observed by the Keck telescope (Hawaii, US) and the Very Large Telescope (European Southern Observatory, Chile). The data instead support the scenario envisaged by the Cold Dark Matter model.

GRChombo goes public!

In February 2018, GRChombo, a new COSMOS Consortium code for numerical general relativity simulations targeted at theoretical physics applications, was made open source. It is developed primarily by a consortium of physicists from Argonne National Laboratory, Cambridge University, Göttingen University, King's College London and Queen Mary University of London, in close collaboration with the COSMOS Intel Parallel Computing Centre at DAMTP, Cambridge. GRChombo is maintained by a collaboration of numerical relativists and cosmologists with a wide range of research interests, from early universe cosmology to astrophysics and mathematical general relativity. Further information can be found at www.grchombo.org, including details of how to obtain and use the code. The GRChombo team hopes that this will be a useful resource for other researchers wishing to investigate the effects of strong gravity in fundamental physics. The next release of GRChombo (version 1.1) will shortly incorporate in-situ or “on the fly” visualization capabilities for AMR grids, developed as part of the Software-Defined Visualization collaboration involving Intel, TACC and the COSMOS IPCC (see sdvis.org).


 

Virgo Consortium

A new model of black hole growth and AGN feedback: an analytic model that explains the phenomenology seen in the EAGLE simulations as well as the observed colour bimodality in the galaxy population. In galaxies less massive than 3×10^10 M⊙, young stars and supernovae drive a high-entropy outflow. Above a halo mass of 10^12 M⊙ the outflow ceases to be buoyant, and star formation is unable to prevent the build-up of gas in the central regions, triggering a strongly non-linear response from the black hole. Its accretion rate rises rapidly, disrupting the incoming supply of cool gas and starving the galaxy of fuel. The host galaxy makes a transition to the red sequence.

Large scale distribution of the hot, X-ray emitting gas (purple) and stars (yellow) in one of the C-EAGLE cluster regions. The filamentary structure surrounding the cluster halo can clearly be seen. The image is approximately 10 Mpc across. Credit: David Barnes and the Virgo Consortium.

A new theory for the nature of the galaxies that reionised the Universe, based on the EAGLE simulations, in which bursts of star formation can create channels through which ionising photons escape. It was shown that carbon-enhanced metal-poor (CEMP) stars are the siblings of the stars that reionised the Universe. Today, these may be found in the intracluster light in groups and clusters.

Construction and analysis of synthetic HI rotation curves of dwarf galaxies in the APOSTLE cosmological simulations of Local Group analogues. Using the same techniques employed by observers (e.g. tilted-ring modelling), a large diversity of rotation curves is obtained for each individual galaxy, depending on the orientation of the line of sight. These variations arise from non-circular motions in the gas. As a result, the presence of a core in the dark matter distribution is sometimes incorrectly inferred when there is actually a central cusp, as predicted in the simplest Λ-CDM models.

Completion of the C-EAGLE set of 30 galaxy cluster resimulations using the EAGLE code (including examples with self-interacting dark matter). Their X-ray properties, such as the spectroscopic temperature, soft-band luminosity and metal content, and their Sunyaev-Zel'dovich properties are in agreement with the observed relations.

The present-day cosmic large scale structure on a scale of 25 Mpc. Circles indicate dark matter haloes resimulated in E-MOSAICS; the images on the right are zooms showing the distribution of gas, stars and globular clusters in one such galaxy. The latter are coloured to indicate whether they formed in-situ within the galaxy or were brought in by accreted satellites. The images at the bottom show the evolution of the galaxy from the epoch of reionisation at z=10 to the present day.

A new implementation of radiation feedback on galactic scales whereby star formation and the effects of the stars' radiation fields on the evolution of the gas are calculated self-consistently.

The first simulations that self-consistently follow the formation and evolution of star clusters and their host galaxy in a full cosmological setting. These E-MOSAICS simulations graft the MOSAICS semi-analytic model of star cluster formation and evolution into the EAGLE simulations (see Figure 1).

Simulations of galaxy formation using the EAGLE code in models where the dark matter consists of (warm) sterile neutrinos. Large regions of the sterile neutrino parameter space can be ruled out because not enough satellite galaxies are produced.

An explanation for the mass discrepancy-radial acceleration relation (MDAR) of disc galaxies in the context of Λ-CDM and its verification in the EAGLE and APOSTLE hydrodynamics simulations, demonstrating that the MDAR does not require MOND.

Figure 1: Projections around one of our haloes at z~6.4, showing (top) gas column density, (middle) minimum temperature, and (bottom) hydrogen ionization fraction, with clear self-shielding visible.

Using artificial neural networks to constrain the halo baryon fraction during reionization. Radiative feedback from stars and galaxies has been proposed as a potential solution to many of the tensions with simplistic galaxy formation models based on Λ-CDM, such as the faint end of the UV luminosity function. The total energy budget of radiation could exceed that of galactic winds and supernovae combined, which has driven the development of sophisticated algorithms that evolve both the radiation field and the hydrodynamical response of gas simultaneously, in a cosmological context. In Sullivan, Iliev and Dixon (2018) we probed self-feedback on galactic scales using the adaptive mesh refinement, radiative transfer, hydrodynamics, and N-body code Ramses-RT. Unlike previous studies which assume a homogeneous UV background, we self-consistently evolved both the radiation field and gas to constrain the halo baryon fraction during cosmic reionization. We demonstrated that the characteristic halo mass with mean baryon fraction half the cosmic mean, Mc(z), shows very little variation as a function of mass-weighted ionization fraction. Furthermore, we found that the inclusion of metal cooling and the ability to resolve scales small enough for self-shielding to become efficient leads to a significant drop in Mc when compared to recent studies. Finally, we developed an Artificial Neural Network that is capable of predicting the baryon fraction of haloes based on recent tidal interactions, gas temperature, and mass-weighted ionization fraction. Such a model can be applied to any reionization history, and trivially incorporated into semi-analytical models of galaxy formation.
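The quantity Mc(z) can be made concrete with a small numerical sketch. Assuming, for illustration only, the commonly used Gnedin (2000)-style fitting form for the mean halo baryon fraction (not necessarily the parameterisation used in the paper), Mc is the mass at which the fraction falls to half the cosmic mean:

```python
import math

FB_COSMIC = 0.157  # cosmic baryon fraction (Planck-like value)

def f_b(M, Mc, beta=2.0):
    """Gnedin (2000)-style fitting form for the mean halo baryon fraction.
    Equals half the cosmic mean at M = Mc by construction."""
    return FB_COSMIC * (1.0 + (2 ** (beta / 3) - 1) * (Mc / M) ** beta) ** (-3 / beta)

def characteristic_mass(fb_func, lo=1e6, hi=1e12, tol=1e-6):
    """Bisect in log-mass for the mass where f_b is half the cosmic mean."""
    target = FB_COSMIC / 2
    while hi / lo > 1 + tol:
        mid = math.sqrt(lo * hi)
        if fb_func(mid) < target:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

Mc_true = 3e8  # solar masses, illustrative only
Mc_est = characteristic_mass(lambda M: f_b(M, Mc_true))
print(f"recovered Mc = {Mc_est:.3e} Msun")
```

In the paper the role of this analytic form is played by the trained ANN, which maps halo properties (tidal history, temperature, ionization fraction) to a baryon fraction directly.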

Figure 2: Halo baryon fraction, in units of the cosmic mean. The colour bars show (top to bottom) the strongest tidal force exerted over the last dynamical time, mass-weighted hydrogen ionization fraction, mass-weighted gas temperature (in units of the virial temperature) and the virial ratio (ratio of kinetic to potential energies). The dashed lines show the mean baryon fraction in bins of logarithmic halo mass, where the error bars correspond to the standard deviation. We use the existing halo catalogue data, as presented in the left-hand column, for our network predictions. The ANN is trained using only our fiducial simulation, and is then tested against our other models, showing excellent agreement.


 

Breaking resonances between migrating planets

One of the major breakthroughs in astronomy in the last decade has been the discovery that extra-solar planets are ubiquitous. We now have a variety of ways of detecting exoplanets, and our telescopes have found thousands of planetary systems. Perhaps the most surprising discovery was that most known extra-solar planetary systems look nothing like the Solar System: we see giant planets orbiting their stars in just a few days; planets on highly eccentric orbits; and even systems where multiple rocky or icy planets are packed into very closely-spaced orbits. The latter type of system, discovered by the Kepler mission, is particularly interesting, as these planets orbit too close to their host stars to have formed in their current locations. The conventional picture of planet formation suggests that such planets formed much further from their stars, and then “migrated” inwards due to gravitational interactions with their parent protoplanetary gas disc. Planets of different mass migrate at different rates, however, and in multi-planet systems this process invariably leads to neighbouring planets becoming trapped in so-called mean-motion resonances. These resonances are special locations in planetary systems, where adjacent planets’ orbital periods are integer multiples of one another (i.e., a 2:1 resonance occurs when the outer planet has exactly twice the period of the inner planet). When in such a resonance the planets feel one another’s gravity much more strongly than they otherwise would. Planets which become “trapped” in these locations are therefore expected to remain in resonance indefinitely, but this is at odds with what we see in real systems: most exoplanets in multiple systems are not in resonance with their neighbours. This stark discrepancy between theory and observation has led astronomers to question the very basics of our understanding of how planetary systems form.
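The 2:1 resonance condition translates directly into orbital separations through Kepler's third law, P² ∝ a³. A quick sketch (illustrative periods only):

```python
# Kepler's third law (P^2 proportional to a^3): an outer planet with twice
# the period of its neighbour sits at a semi-major axis ratio of 2^(2/3).
inner_period_days = 10.0                       # illustrative inner planet
outer_period_days = 2.0 * inner_period_days    # exact 2:1 resonance
axis_ratio = (outer_period_days / inner_period_days) ** (2.0 / 3.0)
print(f"2:1 resonance -> semi-major axis ratio a_out/a_in = {axis_ratio:.3f}")
# About 1.587: resonant neighbours sit well within a factor of 2 in radius,
# which is why resonant chains are so tightly packed.
```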

Figure 1: Numerical simulation of a pair of “super-Earth” planets migrating through a protoplanetary disc, performed on the DiRAC-2 Data Analytic cluster. The simulation followed the planets’ migration into, and eventually out of, the 2:1 mean-motion resonance, showing that such resonances can be readily broken. (Figure adapted from Hands & Alexander 2018.)

In the last few years astronomers have suggested a wide variety of ways around this problem, but as yet no solution has gained acceptance. The Theoretical Astrophysics Group in Leicester has recently used high-resolution numerical simulations to study how planets in multiple systems migrate through their parent protoplanetary discs. The computational power provided by DiRAC allowed us to follow the evolution of multi-planet systems for many thousands of orbits, capturing the planet-disc and planet-planet interactions in exquisite detail (see Figure 1). These simulations show that under certain conditions (conditions which are prevalent in many planet-forming discs) the gravity between the disc and the planet can overcome the gravity between the planets, breaking the system out of the resonance. This mechanism appears to operate efficiently, with planets escaping resonance after as little as a thousand years. This potentially allows multi-planet systems to migrate into their observed tightly-packed orbits without becoming “trapped” in resonances, and offers an elegant answer to the question of how to assemble the tightly-packed planetary systems observed by Kepler.


 

Extreme QCD: Quantifying the QCD Phase Diagram

Our collaboration uses the DiRAC supercomputers to simulate the interaction of quarks, the fundamental particles which make up protons, neutrons and other hadrons. The force which holds quarks together is Quantum ChromoDynamics, “QCD”. We are particularly interested in the behaviour of QCD as the temperature increases to billions and even trillions of degrees Celsius. These conditions existed in the first moments after the Big Bang, and are recreated (on a much smaller scale!) in heavy-ion collision experiments at CERN (near Geneva) and the Brookhaven laboratory (in New York State).

The intriguing thing about QCD at these temperatures is that it undergoes a substantial change in nature. At low temperatures QCD is an extremely strong, attractive force, so it is effectively impossible to prise quarks apart; at temperatures above the “confining” temperature, Tc, it is much weaker, the quarks are virtually free, and the hadrons they once formed “melt”.

We study this effect by calculating the masses of protons and other hadrons and their “parity partners”, which are like their mirror-image siblings. At low temperatures, hadrons have lower masses than their partners. Above Tc, because QCD acquires more symmetry, hadrons and their partners may become equal in mass.

Figure 1: Plot of the “R”-value as function of temperature.

Our results are summarized in the plot, where we show the “R” value which measures how close the masses of the partners are: R=1 corresponds to very different masses, and R=0 means that they are identical. For a variety of hadrons including the proton (N), delta (Δ) and omega (Ω), as the temperature increases above Tc we see that R is close to zero, implying that the parity partners have near-equal masses. This confirms long-held, but previously unconfirmed, suspicions regarding the nature of QCD. Drilling deeper into this data, it is clear from the figure that the Ω hadron partners are the least close in mass. This can be understood because the constituent quarks in the Ω are “strange” quarks, which are significantly heavier than the virtually massless “up” and “down” quarks inside the N and Δ. This extra mass means that the symmetry mechanism mentioned above does not work as well.
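The published R is defined from lattice correlation functions, but its limiting behaviour can be illustrated with a simple mass-asymmetry proxy (a hypothetical stand-in, not the collaboration's definition), using the vacuum nucleon N(939) and its negative-parity partner N*(1535) as an example:

```python
def r_proxy(m_plus, m_minus):
    """Mass-asymmetry proxy for the degeneracy of parity partners: it tends
    towards 1 when the masses are very different and is exactly 0 when they
    are equal.  (The published R is built from lattice correlators; this
    proxy only reproduces the same limiting behaviour.)"""
    return (m_minus - m_plus) / (m_minus + m_plus)

# Illustrative zero-temperature masses in GeV: N(939) and N*(1535).
cold = r_proxy(0.939, 1.535)
degenerate = r_proxy(1.2, 1.2)   # what fully restored symmetry would give
print(f"R proxy (cold) = {cold:.2f}, R proxy (degenerate) = {degenerate:.2f}")
```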


 

Hadron structure

Hadrons fall into two classes: baryons, such as the familiar proton, and mesons, for example the pion. Deep Inelastic Scattering was the first direct evidence that baryons really were made up of much smaller constituents, “partons”, i.e. quarks and gluons. Investigating the structure of hadrons has become a vital component of many experiments at (for example) JLAB and the LHC.

Figure 1: First results of the method leading to a reconstruction of a parton distribution function (left). Ratio of the electric and magnetic form factors as function of momentum transfer (right).

In [1] we have proposed an alternative method of measuring the parton distribution in hadrons. Parton distribution functions tell us how the momentum of a nucleon is shared out between its partons. The traditional lattice method of measuring parton distributions proceeds through the operator product expansion. In that approach lattice operators are constructed which correspond to moments of the distribution – each moment needs a different operator, with its individual renormalisation constant. Our new approach directly involves simulating scattering on hadrons – in essence we are repeating a deep inelastic scattering experiment on the lattice. First results of the method leading to a reconstruction of a parton distribution function (left panel) are shown above.

Other investigations are designed to determine the distribution of electromagnetic currents in hadrons due to the presence of quarks. These can be parameterised by two functions (electric and magnetic form factors). For most of the second half of the 20th century, it was believed that they both depended on momentum in the same way, so that their ratio was constant. However recent experiments have shown that at higher momentum transfer the ratio decreases and perhaps passes through zero. To investigate this theoretically is difficult (due to the larger momentum transfer). We have developed a method, [2], that allows this range to be considerably extended, approaching the region where the sign might change, as shown in the right panel.

Bibliography: [1] A. J. Chambers et al., Phys. Rev. Lett. 118 (2017) 242001. [2] A. J. Chambers et al., Phys. Rev. D96 (2017) 114509.


 

Lattice QCD tests Standard Model at high precision

Figure 1: Fermilab g-2 Experiment (Photo: Wikipedia/Glukicov)

We have recently studied the properties of the electron’s heavier sibling, the muon. Its anomalous magnetic moment, aμ, quantifies how vacuum fluctuations contribute when it interacts with an external magnetic field. With an accuracy of the order of 1 ppm, aμ is one of the most precisely determined quantities in experimental as well as theoretical particle physics. Since its value is sensitive to contributions from yet unknown new physics, the persistent 3−4σ tension between experiment and theory is generating a lot of interest within the particle physics community. Is this really a first sign of new and yet unknown physics? Two new experiments at Fermilab (USA, see Figure 1) and J-PARC (Japan), respectively, are trying to address this question. Both efforts are expected to reduce the uncertainty on the experimental value by a factor of around four, down to 140 ppb. This puts much pressure on the theory community to match this precision – only the combination of theory and experiment will be able to answer this important question.

Figure 2: Summary of results for the HVP [4] – RBC/UKQCD 2018 are the results addressed in the text – note the tension with the “no new physics” data point. Green points are other lattice determinations; magenta points are phenomenological determinations.

On the theory side the error budget is dominated by hadronic uncertainties, in particular uncertainties on predictions for vacuum polarisation effects due to the strong interaction. DiRAC has enabled the first-ever computation of these effects starting from first principles, i.e. from the Standard Model Lagrangian by allowing simulations of Lattice Quantum Chromodynamics including effects of Quantum Electrodynamics. In a series of papers [1-4] researchers from the Universities of Edinburgh and Southampton, together with their collaborators in the US, have developed the required theoretical understanding and the numerical and algorithmic techniques needed to complete their ambitious and state-of-the-art program. In Lattice Quantum Chromodynamics one performs the calculation by constructing a discrete four-dimensional space-time grid (the lattice) on which one numerically solves the QCD equations of motion. Such lattice QCD simulations are the only known method to address the above questions without having to rely on ad hoc assumptions. This type of computation crucially relies on access to the world’s fastest parallel supercomputers like DiRAC’s Extreme Scaling system.

There is still some way to go until a full theoretical computation will match experimental precision. The DiRAC researchers have, however, developed a method that allows them to get very close to the required precision by combining complementary experimental data with their numerical simulations. It will be exciting to compare to the new experimental results expected by the end of 2018!

Bibliography:[1] Calculation of the hadronic vacuum polarization disconnected contribution to the muon anomalous magnetic moment, Blum et al., Phys.Rev.Lett. 116 (2016) no.23, 232002 [2] Lattice calculation of the leading strange quark-connected contribution to the muon g–2, RBC/UKQCD Collaboration, Blum et al., JHEP 1604 (2016) 063 [3] Isospin breaking corrections to meson masses and the hadronic vacuum polarization: a comparative study, Boyle et al., JHEP 1709 (2017) 153 [4] Calculation of the hadronic vacuum polarization contribution to the muon anomalous magnetic moment, Blum et al., arXiv:1801.07224


 

Black Hole highlight
Modelling black hole spins and jets in the new era of gravitational wave astronomy

Active galactic nuclei (AGN), with powerful nuclear emission that can outshine the entire host galaxy, represent some of the most extreme physical environments in the Universe. At the heart of these systems reside rapidly growing supermassive black holes (SMBH). They act as efficient gravity engines releasing vast amounts of energy through accretion, over most of cosmic time. It is believed that these energy outbursts may not only change the properties of gas in the vicinity of the AGN, but also transform the whole host galaxy, expelling material all the way into intergalactic space.

Figure 1: Column density rendering of a self-gravitating gaseous disc surrounding an equal-mass 10^6 solar-mass SMBH binary. Gas streams excited by the time-varying gravitational potential of the binary develop from the cavity edge to the SMBHs. These streams provide fresh material for accretion and affect the binary dynamics through gravitational torques, extracting angular momentum and reducing the SMBH separation. Simulations have been performed on DiRAC supercomputers at the Universities of Cambridge, Leicester and Durham.

There are, nonetheless, many outstanding questions about the connection between SMBHs and their host galaxies, and numerical simulations have a major role to play in advancing our knowledge and providing testable theories that future observations may prove or rule out. For example, it is well established that accretion discs play a fundamental role in modulating mass accretion onto SMBHs and driving their spin evolution, with profound implications for relativistic jets and for gravitational waves from coalescing SMBH binaries. Their contribution is, however, customarily neglected in large-scale galaxy formation simulations due to its inherent complexity.

Researchers at the Institute of Astronomy (IoA) and at the Kavli Institute for Cosmology (KICC) at the University of Cambridge have taken a major step forward in solving this issue by developing a novel numerical model within the moving-mesh code AREPO, which self-consistently describes mass feeding through an accretion disc, governing the evolution of both the mass and the spin of a SMBH. In a recently submitted paper (Fiacconi, Sijacki, Pringle, MNRAS submitted, arXiv:1712.00023), they showed that SMBHs in spiral galaxies, like our own Galaxy, are expected to be highly spinning, while SMBHs in more massive elliptical galaxies likely have lower spins. IoA/KICC researchers are now exploring the pairing of SMBH spins in binaries (see Fig. 1), which will represent the main cosmological source of gravitational waves for the LISA space observatory (launch 2034).

SMBH spins are thought to be closely related to the jet phenomena often observed to inflate giant lobes of relativistic plasma in galaxy clusters. Understanding the propagation and interaction of jets with their surroundings by means of hydrodynamical simulations is challenging because of the large dynamic range involved, spanning over 8 orders of magnitude in spatial scales. Building on an earlier implementation of a super-Lagrangian refinement scheme that allows unprecedented spatial resolution close to SMBHs (Curtis & Sijacki, 2015, MNRAS, 454, 3445), IoA/KICC researchers have developed a new jet model for AREPO capable of injecting realistic, highly beamed jets (see Fig. 2; Bourne & Sijacki, 2017, MNRAS, 472, 4707). They found that the inflation of lobes can heat the intra-cluster medium (ICM) through a number of processes, although jets seem to be unable to drive significant turbulence within the ICM, in agreement with the gas kinematics observed by the Hitomi satellite in the centre of the Perseus cluster.

Figure 2: Simulated AGN-driven jet propagating through the ICM. Each panel shows a different gas property in a slice through the jet. Also shown in the middle panel is the Voronoi cell structure, highlighting the wide dynamic range afforded by the super-Lagrangian refinement scheme: smallest cell size is ~10pc while the size of the computational domain is 20Mpc. Simulations have been performed on DiRAC supercomputers at the Universities of Cambridge, Leicester and Durham.

The next immediate aim is to combine the new models of accretion and jet feedback, as jet power and direction are expected to be directly related to the BH spin. With the upcoming Hitomi replacement, X-ray Astronomy Recovery Mission (XARM, launch 2021), and future Advanced Telescope for High Energy Astrophysics (Athena, launch 2028) on the horizon, IoA/KICC researchers’ simulations on DiRAC supercomputers will provide testable predictions and help interpret these new observations.


 

Exeter: The formation of massive stars

Figure 1: Density cut through the simulation. The colour scale shows density (scaled between 10^−19 and 10^−14 g/cc). The arrows show the direction of the velocity field, and their colour the speed (between 0 and 100 km/s).

Stars form as molecular clouds gravitationally collapse and fragment. The formation of the most massive stars (generally understood to be those with masses greater than twenty times that of the Sun) is complicated by the presence of the intense protostellar radiation field, which produces outward forces that may hinder accretion and even drive outflows. Incorporating the effects of the radiation field on the gas is a complex problem, involving a combination of microphysical effects (dust heating, scattering, ionisation) that require a polychromatic treatment of the radiation field.

Figure 2: The line profile (upper left), integrated intensity map (upper right in Jy km/s/pixel), moment 1 map (lower left) and position velocity diagram (lower right in mJy/pixel), for the CH3CN J = 13 → 12, K = 3 line.

At Exeter we have developed a novel radiation hydrodynamic (RHD) code that treats the problem using a Monte Carlo method, splitting the radiation field into many millions of photon packets that propagate through the computational domain, undergoing absorptions, scatterings and re-emissions. We have run this code on DiRAC’s Complexity system to simulate the growth of a massive star. We find that the star forms via stochastic disc accretion and produces fast, radiation-driven bipolar cavities. The evolution of the envelope infall rate and the accretion rate on to the protostar are broadly consistent with observational constraints. After 35 kyr the star has a mass of 25 solar masses and is surrounded by a 7 solar-mass Keplerian disc of 1500 au radius (see Figure 1). Once again these results are consistent with those from recent high-resolution studies of discs around forming massive stars.
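The quoted disc parameters can be sanity-checked with Kepler's laws, a back-of-the-envelope sketch in solar units that neglects the disc's own mass:

```python
import math

# Keplerian speed and orbital period at the 1500 au edge of a disc around a
# 25-solar-mass protostar, using Earth's orbit (1 Msun, 1 au, 29.78 km/s)
# as the scaling point.
M_star = 25.0       # stellar mass in solar masses
r_au = 1500.0       # orbital radius in au
v_earth = 29.78     # km/s

v_kepler = v_earth * math.sqrt(M_star / r_au)   # v ~ sqrt(G M / r)
period_yr = math.sqrt(r_au ** 3 / M_star)       # Kepler III: P^2 = a^3 / M
print(f"v_K ~ {v_kepler:.1f} km/s, P ~ {period_yr:.0f} yr at the disc edge")
```

So gas at the disc edge moves at only a few km/s, consistent with the velocity scale shown in Figure 1.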

We are able to construct so-called synthetic observations from our RHD models. For example, molecular line simulations of a methyl cyanide (CH3CN) transition compare well with observations in terms of surface brightness and line width. These results indicate that it should be possible to reliably extract the protostellar mass from such observations (see Figure 2).


 

Radiative Transfer Simulations for Type Ia Supernovae

Figure 1: Comparison between synthetic nebular spectra of W7 calculated with ARTIS (coloured by emitting ion) and the model of Fransson & Jerkstrand (2015; black line).

Between tens and hundreds of days after explosion, the material ejected from Type Ia supernovae becomes optically thin at most wavelengths. At this time, the observed spectra are dominated by emission from forbidden radiative transitions of singly- and doubly-ionised Fe, Co, and Ni. With almost all of the 56Ni having decayed prior to the nebular phase, most of the energy injection is from the decay of 56Co. The decays of 56Co release gamma rays and positrons, which Compton and Coulomb scatter off thermal electrons, resulting in a population of high-energy (~MeV) non-thermal electrons. These non-thermal electrons are especially important for the ionisation balance, as the rate of non-thermal ionisation is generally higher than photoionisation for the most important ions at this time.
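The dominance of 56Co in the nebular phase follows from the Bateman equations for the decay chain, using the standard half-lives (~6.1 d for 56Ni, ~77.2 d for 56Co). A small sketch (not part of ARTIS):

```python
import math

# Bateman solution for the 56Ni -> 56Co -> 56Fe chain, comparing the decay
# rates of the two species deep in the nebular phase.
t_half_ni, t_half_co = 6.075, 77.24          # half-lives in days
lam_ni = math.log(2) / t_half_ni
lam_co = math.log(2) / t_half_co

def decay_rates(t, n0=1.0):
    """Decay rates (per initial 56Ni nucleus, per day) of Ni and Co at time t."""
    n_ni = n0 * math.exp(-lam_ni * t)
    n_co = n0 * lam_ni / (lam_co - lam_ni) * (math.exp(-lam_ni * t) - math.exp(-lam_co * t))
    return lam_ni * n_ni, lam_co * n_co

ni_rate, co_rate = decay_rates(200.0)
print(f"at 200 d: Ni rate = {ni_rate:.2e}, Co rate = {co_rate:.2e} per day")
# The 56Co channel dominates by many orders of magnitude at this epoch.
```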

Figure 2: Comparison between normalised ARTIS synthetic spectra for W7 and sub-Chandrasekhar models (normal and high metallicity), and the observed nebular spectrum of SN2011fe from Mazzali et al. (2015).

The ARTIS code (Sim 2007, Kromer & Sim 2009) is a multi-dimensional radiative transfer code that uses the Monte Carlo method with indivisible energy packets (Lucy 2002) to compute spectra for modern supernova explosion simulations. To extend the range of validity to late times, we have (2016/17) developed a new non-LTE population/ionisation solver that includes non-thermal ionisation, excitation, and heating processes, and a comprehensive new atomic database with forbidden transitions and modern photoionisation cross sections. This allows us, for the first time, to calculate spectra out to one year post-explosion (early studies with ARTIS were typically limited to less than one month). Extensive testing has verified that our code can reproduce previous one-dimensional work (see Figure 1).

Taking advantage of DiRAC HPC resources, we have calculated nebular spectra for a range of explosion scenarios, including both Chandrasekhar- and sub-Chandrasekhar-mass explosions (Nomoto et al. 1984; Iwamoto et al. 1999; Sim et al. 2010). Results substantiate the potential of the [Ni II] 7378 feature to distinguish between explosion scenarios (Shingles et al.).


 

Understanding our Milky Way

The most important new result in 2017 was our (Debattista et al. 2017) demonstration that all the trends observed in the Milky Way’s bulge are produced by the evolution of the bar. Trends such as the X-shape of the metal-rich bulge stars, which is absent in the metal-poor stars, the vertical metallicity gradient, the old age of bulge stars, the weak bar of the oldest stars compared with the strong bar in slightly younger stars, and the varying kinematics of stars with metallicity, have all, in the past, been interpreted as being due to the bulge having formed partly in situ and partly accreted. Our simulations show that all these trends result purely from secular evolution of the bar itself (i.e. the bulge is part of the bar), with no hints of an accreted component other than the stellar halo, at about 1% of the Milky Way’s total stellar mass. Our results have resolved 20 years of conflicting interpretations of the origin of the Milky Way’s bulge and represent an important step in understanding it. We explained the physics of this evolution via a new concept, kinematic fractionation: the morphological separation of stellar populations on the basis of their kinematics at bar formation. The simulations predict strongly X-shaped metallicity maps in external edge-on boxy/peanut-shaped bulges, a prediction which has since been confirmed using MUSE observations (Gonzalez et al. 2017), establishing a novel link between the Milky Way and galaxies in general.

Figure 1: Metallicity map.


 

Protoplanetary disks

Figure 1: The CO emissivity profile of the extensive disc around the young star IM Lupi, demonstrating that the observed profile is best fit by a photoevaporative flow driven by a low ambient ultraviolet radiation field.

We have completed a study (Rosotti et al 2017) involving the longest-term integration of a planet + protoplanetary disc system that has ever been conducted (totalling 3 × 10^5 planetary orbits), with a view to understanding whether discs can drive significant planetary eccentricity. This is a fundamental issue for interpreting data on exoplanetary eccentricity. We have shown that there is a complicated exchange of eccentricity between the disc and the planet on these timescales which can be understood in terms of two secular modes that are subject to long-term damping and pumping. While there is still much to be understood about the processes responsible for mode damping and pumping, our study has shown that eccentricity growth cannot be determined via the sort of short-timescale (< 1000 orbit) simulations that have been conducted previously. These GPU-accelerated calculations pave the way for a more comprehensive exploration of the long-term eccentricity driving of planets in discs.

In Haworth et al 2017 we presented the first calculations demonstrating that discs may lose significant mass from their outer regions as a result of photoevaporation by ambient far-ultraviolet radiation, even in regions where the ambient radiation levels are very low. This study used a novel code, TORUS-3DPDR, which solves for radiative transfer in dust, hydrodynamics and photochemistry within the outflowing wind, and demonstrates that the extended CO emission observed over many hundreds of au from the disc IM Lupi is well explained by a photoevaporative wind.


 

Excited Charmonia and Charmonium Resonances from Lattice QCD

Figure 1: Examples of the results: left is the spectrum in the isospin-1 hidden-charm channel with J^P = 1^+ (J is spin, P is parity); right is the isospin-0 doubly-charmed channel with J^P = 1^+. Boxes are the computed masses and one-sigma statistical uncertainties; horizontal lines show the meson-meson energy levels expected if there were no meson-meson interactions, with an adjacent number indicating the degeneracy if it is larger than one (lattice QCD calculations are performed in a finite volume and so the spectrum of meson-meson energy levels is discrete). The number of energy levels in the computed spectrum is equal to the number of expected non-interacting meson-meson levels and they all lie close to the non-interacting levels. This suggests that there are only weak meson-meson interactions and no strong indications of a bound state or narrow resonance in these channels.

Experiments have observed a number of puzzling charmonium-like states in recent years. Of particular interest are states with ‘exotic flavour’, i.e. those with flavour quantum numbers which cannot arise from solely a quark-antiquark pair and so signal physics beyond a conventional meson. One proposed explanation for such states is that they are tetraquarks, compact states containing two quarks and two antiquarks. In Cheung et al [JHEP 1711 (2017) 033, arXiv:1709.01417] we developed methods to investigate the existence of tetraquarks using first-principles lattice QCD computations. The spectrum of states was calculated in a selection of exotic-flavour channels (isospin-1 hidden-charm and doubly-charmed channels), including a number where tetraquarks are expected in phenomenological models. There were no strong indications for any bound states or narrow resonances in the channels studied and so no sign of tetraquark candidates.
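The non-interacting meson-meson levels referred to above arise from momentum quantisation in the periodic lattice volume. A minimal sketch, using illustrative D and D* masses and box size rather than the values of the actual calculation:

```python
import math

# In a periodic box of side L each meson momentum is quantised,
# p^2 = (2*pi/L)^2 * n^2 for integer vectors n, so a back-to-back two-meson
# system at rest has the discrete energies
#   E = sqrt(m1^2 + p^2) + sqrt(m2^2 + p^2).
def free_levels(m1, m2, L, nsq_max=4):
    """Non-interacting two-meson energies for n^2 = 0..nsq_max (natural units)."""
    levels = []
    for nsq in range(nsq_max + 1):
        p2 = (2 * math.pi / L) ** 2 * nsq
        levels.append(math.sqrt(m1 ** 2 + p2) + math.sqrt(m2 ** 2 + p2))
    return levels

# e.g. a D meson (1.87 GeV) and a D* (2.01 GeV) in a box of L ~ 14.7 GeV^-1 (~2.9 fm)
for nsq, E in enumerate(free_levels(1.87, 2.01, 14.7)):
    print(f"n^2 = {nsq}: E = {E:.3f} GeV")
```

Shifts of the computed spectrum away from these discrete levels are what would signal meson-meson interactions, bound states or resonances.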


ExoMol

Thousands of exoplanets have been discovered in recent years; most of them are gas giants, and hundreds appear to be rocky. Many of these exoplanets have very short orbital periods and hence hot atmospheres. The high temperatures are expected to turn these atmospheres into high-pressure steam, giving them very different spectroscopic signatures from those of cooler objects, which will influence the interpretation of atmospheric observations. At present, interpretation is severely impacted by the lack of corresponding spectroscopic data. The massive number of absorbers in the atmospheres of exoplanets also affects the cooling, and therefore the evolution, of young hot objects; comprehensive molecular data are also crucial to model these processes.

Figure 1: A giant planet that orbits close to its sun (Illustration: NASA/ESA/G. Bacon (STScI)).

Our group, ExoMol, specialises in and leads the field of producing theoretical spectral data (so-called line lists) for molecules of atmospheric, astrophysical, and industrial importance, applicable to high-temperature applications. The ExoMol database already contains over 300 billion transitions for more than 50 molecular species. To accomplish this data-intensive task, we use some of the UK’s most advanced supercomputers, provided by the Distributed Research utilising Advanced Computing (DiRAC) project and run by the University of Cambridge. Our calculations require tens of millions of CPU hours, hundreds of thousands of GPU hours, and up to 6 TB of RAM, processing power only accessible to us through the DiRAC project. We are in a unique position to provide line lists for more complex molecules (such as large hydrocarbons), comprising up to 10^12 lines.

Methane is a major absorber in hot Jupiter exoplanets, brown dwarfs and cool carbon stars. There is huge demand for a reliable hot methane line list, and several groups are working towards this objective. As part of our project we have generated a new 34-billion-transition methane line list (known as 34to10), which extends our original 10to10 line list to higher temperatures (up to 2000 K). Even larger line lists are now being completed (for silane, ethylene, acetylene, propene, methyl chloride, etc.).

Billions of transitions make direct, line-by-line application in radiative transfer calculations impractical. To mitigate this big-data issue we have developed a new, hybrid line-list format based on the idea of a temperature-dependent absorption continuum. The new hybrid scheme is now being applied to larger hydrocarbons, with line lists containing hundreds of billions of transitions.
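Why line-by-line becomes impractical is easy to see: every evaluation of the cross-section sums one broadened profile per line. A minimal Doppler-only (Gaussian) sketch with randomly generated line data (real ExoMol lists use full line-shape models and up to ~10^12 lines):

```python
import math
import random

# Each entry is (line centre in cm^-1, line strength); the data are random,
# for illustration only.
random.seed(1)
lines = [(random.uniform(1000.0, 1100.0), random.uniform(0.1, 1.0))
         for _ in range(200)]
alpha_d = 0.05  # illustrative Doppler half-width parameter in cm^-1

def cross_section(nu):
    """Sum of normalised Gaussian profiles over every line in the list.
    Cost grows linearly with the number of lines, which is what makes a
    1e12-line list impractical to evaluate this way at every frequency
    and temperature."""
    total = 0.0
    for nu0, s in lines:
        x = (nu - nu0) / alpha_d
        total += s / (alpha_d * math.sqrt(math.pi)) * math.exp(-x * x)
    return total

print(f"sigma at first line centre = {cross_section(lines[0][0]):.3f} (arb. units)")
```

A hybrid format replaces the vast majority of these weak lines with a precomputed temperature-dependent continuum, keeping only the strong lines for explicit summation.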


 

Galactic Archaeology in the Gaia Era

Impacts of a flaring star-forming disc and stellar radial mixing on the vertical metallicity gradient. We ran an N-body simulation of the evolution of a barred disk galaxy similar in size to the Milky Way, using our original Tree N-body code, GCD+. We first assumed that the thin disk has a constant scale height, zd, independent of radius (the “constant zd model”). We then assigned the metallicity of the thin disk particles to follow the radial metallicity gradient [Fe/H] = 0.5 − 0.08 × R with a dispersion of 0.05 dex and no vertical metallicity gradient at t = 0. The left two panels of Fig. 1 show the initial (t = 0) and final (t = 8 Gyr) vertical metallicity distribution for the thin disk component of this model. Interestingly, after 8 Gyr of evolution the vertical metallicity gradient becomes positive, which is inconsistent with the negative vertical metallicity gradients observed in the mono-age populations of the thin disk (e.g. Ciucă, Kawata et al. 2018, MNRAS, 475, 1203).

Figure 1: Vertical metallicity distributions of the thin-disk component for the constant scale-height (zd) model (left two panels) and the flaring model (right two panels), in the radial range 6 < R < 10 kpc at t = 0 and 8 Gyr. The white dots and error bars at t = 8 Gyr show the mean and dispersion of [Fe/H] at different heights, |z|, and the solid line shows the best-fit straight line. The dotted line shows the vertical metallicity gradient assumed at R = 8 kpc at t = 0.

We showed that a flaring thin disk is one possible remedy for this issue. We constructed a ‘flaring model’ with a second thin-disk component whose vertical scale-height increases with radius. This flaring component mimics a mono-age population born in a flaring star-forming region. We assigned its particles metallicities following the same radial gradient as in our constant zd model. The right two panels of Fig. 1 show the initial (t = 0) and final (t = 8 Gyr) vertical metallicity distribution for the flaring thin-disk component. In this case, metal-poor stars initially in the outer disk become dominant at large vertical heights at all radii, which drives a negative vertical metallicity gradient. Therefore, if the mono-age populations of the Milky Way’s thin-disk stars formed within a flaring star-forming disk, each mono-age population can have a negative vertical metallicity gradient. Older populations then have steeper negative vertical metallicity gradients, because they have had more time to experience radial mixing. This can explain the steeper negative vertical metallicity gradients observed in the older populations of the thin disk with the combined data of Gaia DR1 and RAVE DR5 (Ciucă, Kawata et al. 2018).


 

Linking the nuclear interaction to the structure of the heavy elements

Figure 1: Successful comparison between the experimental charge distribution of 36S and our simulations, along with the predicted charge profile of 34Si.

Ab initio calculation of the potential bubble nucleus 34Si. The possibility that an unconventional depletion (referred to as a ‘bubble’) occurs in the centre of the charge density distribution of certain nuclei, due to a purely quantum mechanical effect, has attracted theoretical and experimental attention in recent years. Based on a mean-field rationale, a correlation between the occurrence of such a semi-bubble and an anomalously weak splitting between low angular-momentum spin-orbit partners has further been conjectured. Energy density functional and valence-space shell-model calculations have been performed to identify and characterize the best candidates, among which 34Si appears as a particularly interesting case. While the experimental determination of the charge density distribution of the unstable 34Si is currently out of reach, we studied its potential bubble structure on the basis of the state-of-the-art ab initio self-consistent Green’s function many-body method, using HPC resources at CEA Saclay (France) and the DiRAC Complexity cluster (UK). Demanding that the experimental charge density distribution and root-mean-square radius of 36S be well reproduced, along with the 34Si and 36S binding energies, leaves only one specific model of the Hamiltonian as a serious candidate for this prediction. On that basis, the occurrence of a bubble structure in the charge distribution of 34Si, whose fingerprint should be visible in an electron-scattering experiment, is convincingly established by these state-of-the-art ab initio calculations.


 

Understanding the origin of strong galactic outflows

Understanding the origin of strong galactic outflows is one of the key issues in galaxy formation theory. However, owing to our incomplete picture of stellar feedback processes and a lack of numerical resolution, much remains uncertain about how gas is blown away from star-forming regions in galaxies. Using high-resolution radiation-hydrodynamic simulations of a gas-rich isolated disk, we investigated the impact of radiation feedback and supernova explosions on star formation histories and gas outflow rates. In particular, we introduced radiation pressure from resonantly scattered Lyman α photons for the first time, and found that it efficiently destroys giant molecular clouds in the early stages of bursty star formation episodes in a metal-poor dwarf galaxy. As a result, supernovae explode in low-density environments, carrying away gas at a rate comparable to the star formation rate. We also demonstrated that this early feedback self-regulates the gas outflow rate by moderating the bursty star formation history, as opposed to some recent claims that multi-scattering photons can drive extremely massive outflows. Because Lyman α photons scatter very efficiently in a dust-poor medium, this feedback may also have played a crucial role in suppressing the formation of extremely metal-poor globular clusters in the early Universe.

Figure 1: Radiation-hydrodynamic simulations of a metal-poor dwarf galaxy. The left panels display the projected distribution of gas (top) and stellar density (bottom). The right panels show that the majority of Lyman α radiation is generated (top) and scattered (bottom) inside star-forming regions. Lyman α feedback suppresses the formation of a massive stellar bulge by blowing gas out from the galaxy centre. In order to cover a wide dynamic range, from several hundred kpc to a few pc, the simulations were carried out on the high-performance computing facility DiRAC, employing ~10 million adaptively refined grids.

2013 Highlights

Black Hole Highlight:  The environment of bright QSOs at z ~ 6

cosmos12.png

Figure 1. The distribution of massive black holes in different patches of the 900 million year old Universe according to the recent simulations carried out at the IoA/KICC. Black holes are both more massive and more numerous as the overdensity increases.

Observations of nearby galaxies and high-redshift quasars suggest that black holes are present in the majority of galaxies. The SDSS, UKIDSS and VISTA surveys have revealed numerous high-redshift QSOs harbouring black holes with masses in excess of a billion solar masses at a time when the Universe was less than one billion years old (about 10% of its current age). The fact that such massive black holes had managed to grow by this time provides a significant challenge for theoretical models of the formation and growth of supermassive black holes.

Researchers at the IoA/KICC have performed a suite of state-of-the-art numerical simulations which self-consistently follow the growth of black holes from very early times to the present day. By simulating a large volume of the Universe, they investigated the growth of massive black holes in a variety of cosmological environments. The images shown in Figure 1 illustrate the distribution of gas in different patches of the 900-million-year-old Universe. Black holes are far more numerous and massive in the most overdense regions of the Universe. The biggest black circles in the bottom row correspond to black holes with masses close to one billion solar masses. Only very massive black holes located in highly overdense regions were found to give rise to peak luminosities in agreement with observational estimates for the brightest high-redshift quasars, as shown in Figure 2. This finding indicates that these quasars should reside in some of the rarest dark matter halos of the time. Members of the IoA/KICC have also used these simulations to study so-called AGN feedback, the interaction between massive black holes and the gas in their surroundings. They found that the energy released by accretion onto the most massive black holes gives rise to powerful outflows that push gas at speeds exceeding a thousand km/s. The comparison of simulated quasar outflows with observations will provide an essential tool for pinning down the physical mechanisms underlying the interaction of massive black holes and their host galaxies.


COSMOS Highlight:  Planck Satellite Science – Searches for Non-Gaussianity

The ESA Planck satellite, launched in May 2009, provides an unprecedented high-resolution survey of the temperature of the cosmic microwave background (CMB) radiation. COSMOS and HPCS@DiRAC resources were vital to the science exploitation efforts in several key Planck papers. This effort has led to new estimates of cosmological parameters – shown in Figure 1. Planck has also crossed important qualitative thresholds, making gravitational lensing and the study of non-Gaussian statistics truly quantitative subject areas for the first time.

Using the unique capabilities of the COSMOS system, consortium members have used Planck data to undertake the most stringent tests of the inflationary paradigm to date, by studying the prediction that primordial fluctuations should have Gaussian statistics (most results are in the ground-breaking Planck non-Gaussianity paper (dp002.03)). The three-point correlator (“triangles on the sky”), or bispectrum, was evaluated to high precision for the first time (Figure 1); it is a 3D tetrahedral object depending on the three triangle multipoles l1, l2, l3. Searches were undertaken for the standard local, equilateral and orthogonal bispectrum shapes, described by the ‘nonlinearity’ parameter fNL.
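For orientation, the ‘local’ shape searched for here is conventionally defined through the bispectrum of the primordial potential Φ (this is the standard textbook convention, quoted for reference rather than from the Planck paper itself):

```latex
B_\Phi(k_1,k_2,k_3) = 2 f_{\mathrm{NL}}^{\mathrm{local}}
  \left[ P_\Phi(k_1)P_\Phi(k_2) + P_\Phi(k_2)P_\Phi(k_3)
       + P_\Phi(k_3)P_\Phi(k_1) \right]
```

where $P_\Phi$ is the primordial power spectrum. Purely Gaussian fluctuations correspond to $f_{\mathrm{NL}} = 0$, so a robust detection of non-zero $f_{\mathrm{NL}}$ would rule out the simplest single-field slow-roll inflation models.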

These limits, up to four times stronger than WMAP’s, significantly constrain broad classes of alternative inflationary models. Despite the stringent constraints on scale-invariant models, the Planck bispectrum reconstructions exhibit large non-Gaussian signals (Figure 1), inspiring further investigation of other non-Gaussian models using the separable modal methods developed on COSMOS. The two most promising classes of models appear to be those with excited initial states (non-Bunch-Davies vacua) and those exhibiting periodic features, with apparently high significance. Intensive non-Gaussian analysis of the Planck data is ongoing for a broad range of primordial models, as well as for second-order late-time effects such as ISW-lensing.

cosmos11.png

Figure 1. CMB bispectrum reconstruction obtained from the Planck data employing Fourier modes (left) and higher resolution polynomial modes (right); several isosurfaces are shown both positive (red) and negative (blue). The predicted ISW-lensing signal can be seen along the tetrahedral edges (for the first time) and apparent ‘oscillatory’ features in the interior.


ECOGAL Highlight:  Galactic scale models of star formation

Galactic-scale models of star formation focus on resolving how large-scale gas flows in spiral galaxies form the dense gas clouds in which star formation occurs, on scales 10,000 times smaller. The rate at which stars form in galaxies appears to be linked to global as well as local properties. Using DiRAC, we have performed the largest-scale numerical simulations that can resolve the dense regions where stars form, and hence directly study the physics that drives star formation. We are using these simulations to understand how star formation is initiated and what determines the resulting properties, including why it has such a low efficiency (~1%).

ecogal1.png

Figure 1. The dense gas formed from a spiral shock is viewed perpendicular to the plane of the galaxy.

There also exist ‘special’ regions such as the centre of our Galaxy, where star formation is more exotic and primarily forms only very massive stars. Using realistic models for the galaxy’s gravity, we are studying the dynamics of gas clouds and how they can produce the observed structures and star formation events, including the young stars that orbit the super-massive black hole.

In order to construct a full theory of star formation, we need to also include additional physics of magnetic fields and the radiative and kinetic feedback processes from young stars. Constructing these self-consistent models, akin to modelling a full galactic ecology, requires the ability to model 100 to 1000 million individual elements in the galaxy and can only be performed with the HPC resources provided through DiRAC.

ecogal2.png

Figure 2. A gas cloud moving on an eccentric orbit in the Galactic Centre is tidally sheared to form a nearly complete stream.


EXETER Highlight:  The formation of a star and its magnetized outflows in radiation magneto-hydrodynamic simulations

As a gravitationally unstable molecular cloud core collapses to form a star, it passes through several distinct phases. Initially it collapses almost isothermally, as energy is easily radiated away. Eventually, however, the density rises to the point where radio and infrared radiation cannot freely escape from the gas, and the temperature begins to increase. This substantially increases the pressure, resulting in the formation of a pressure-supported gaseous object with a typical size of ~10 AU, known as the first core. This object grows in mass as it accretes surrounding gas and, as it does so, its central temperature continues to rise. When the central temperature reaches ~2000 K, the molecular hydrogen from which the core is primarily composed dissociates into atomic hydrogen. This is an endothermic process, and it triggers a second collapse inside the first core, which continues until the hydrogen is completely atomic, whereupon a second, or stellar, core is formed with a size similar to that of our Sun. This object then accretes to its final mass to become a star. If reasonably strong magnetic fields are present initially, outflows from both the first core and the stellar core can be launched at speeds of several km/s. These outflows carry some of the mass, angular momentum, and energy away from the forming star.

exeter11.png

Figure 1. Visualizations of the magnetic field direction in the outflows from the first cores (top) and stellar cores (bottom) in three calculations with different initial magnetic field strengths (magnetic field strength increases from left to right). Published in Bate, Tricco & Price (2013).

The DiRAC facility Complexity has been used to perform the first fluid dynamical simulations that include both magnetic fields and radiation transport to follow this whole process. Such calculations are essential for predicting what may be observed in high-resolution observations to be made with the Atacama Large (Sub-)Millimetre Array (ALMA) over the next few years, and for interpreting those observations. In the Figure, we show visualizations of the magnetic field structure in the outflows. Animations of the calculations are at http://www.astro.ex.ac.uk/people/mbate/Animations/stellarcore.html and the full dataset from the calculations is on the Open Research Exeter archive at http://hdl.handle.net/10871/13883.


Exomol

exomol11.png

Figure 1. Absorption of methane at T = 1500 K – new theoretical spectrum (10to10) compared to the experiment (HITRAN12).

The aim of the ExoMol project is to compute comprehensive spectroscopic line lists for hot molecules thought likely to occur in the atmospheres of exoplanets, brown dwarfs and cool stars. The list of such molecules is quite long, but only those with four or more atoms require the use of DiRAC resources. Our calculations have been split between two computers located at Cambridge: Darwin, which was used to diagonalise small matrices and to compute transition intensities, and COSMOS, which was essential for diagonalising larger matrices. We encountered some performance issues with COSMOS when performing large matrix diagonalisations. This problem was referred to SGI, who have very recently supplied us with a new diagonaliser for COSMOS. Our initial tests with this new diagonaliser are very encouraging and suggest that our performance issues should largely be resolved. However, we have yet to use the new diagonaliser in production runs, and the results discussed below did not employ it.

Work using DiRAC during 2013 focussed on four main molecules: methane (CH4), phosphine (PH3), formaldehyde (H2CO) and sulphur trioxide (SO3). Methane is a major absorber in hot Jupiter exoplanets, brown dwarfs and cool carbon stars. There is huge demand for a reliable hot methane line list and several groups are working towards this objective, so this project was given priority. Our main result is a new methane line list, called 10to10, which contains just under 10 billion transitions. This line list is complete for wavelengths longer than 1 μm and temperatures up to 1500 K. It is by some distance the most comprehensive line list available for methane (see Fig. 1). It is currently being actively used by about a dozen groups worldwide to model methane in a variety of astronomical objects (and by one group for studies of the Earth’s atmosphere). Fig. 2 shows the spectrum of the brown dwarf 2MASS J0559-1404 compared to our simulations.

Our tests of this line list for photometric studies in the K, H and J bands show that previously available line lists (a) agree well with 10to10 at 300 K in the K and H bands but significantly underestimate J-band absorption, owing to a lack of experimental data in this region; and (b) seriously underestimate absorption by methane in all bands at temperatures above 1000 K. We have also completed initial room-temperature line lists for PH3 and SO3, and a full line list for hot H2CO.


HORIZON Highlight

Dark matter is one of the strangest puzzles in physics: the universe seems to be composed of far more stuff than we can see directly. The evidence is overwhelming, yet indirect. Over the past few decades, astronomers have been able to measure the tug of gravitation in a huge range of environments – from galaxies, through clusters of hundreds of galaxies, to the early universe as seen in the cosmic microwave background – and we keep coming to the same conclusion: the tug is surprisingly strong. To explain it, one needs around six times more matter than is seen directly. The missing five sixths is known as dark matter.

But attempts to detect particles of this matter have given no clear result. That doesn’t mean dark matter doesn’t exist, but it does mean we can’t be certain what it’s made from. If we can’t manufacture or find particles of it here on Earth, the only way to make progress is to keep studying its behaviour in the sky.

horizon11.png

Figure 1.

As our ability to infer the presence of dark matter in the night sky has improved, we can now measure how the invisible mass is distributed through individual galaxies. In the meantime, increasingly powerful computers have allowed the expected distribution to be predicted (based on assumptions about the particle’s nature). It has proved exceptionally difficult to get these two methods to agree – perhaps pointing to an exotic dark matter particle which behaves in unexpected ways; or, alternatively, that the entire dark matter idea is on the wrong track.

Using the DiRAC-2 Complexity cluster, the Horizon collaboration has been studying this problem from a different perspective. Perhaps the difficulty in making theoretical ideas agree with the real Universe stems from an incomplete physical picture of what goes on inside galaxies. We know that vast amounts of energy are available: young stars pump out light and occasionally explode as supernovae. If some of this energy is transferred to the dark matter, its distribution can be radically reshaped.

We established almost two years ago that there were viable physical mechanisms for transporting stellar energy through to the dark matter distribution (dp016.6). Since then, we have been studying with higher resolution simulations how sensitive this mechanism is to fine details of how the gas behaves (e.g. dp016.7) – always the hardest thing to get right in galaxy simulations, because there are so many factors (including resolution, the cooling rates and the coupling of stellar populations to the surrounding gas). Our latest work, (Pontzen et al 2013, MNRAS submitted), shows that even if one gets many of these details intentionally wrong, the dark matter reshaping remains possible.

This is an encouraging sign that despite remaining uncertainty, we can soon determine whether data from the real universe support – or perhaps refute – the simplest dark matter models.


Leicester Highlight: Tearing up the Disk: How Black Holes Accrete

Almost all galaxies have supermassive black holes at their centres. These holes grow because gas from the galaxy falls into them, a process which also makes the holes spin. In general the gas spirals slowly in towards the hole in a disk, initially lying in a plane at an angle to the hole’s spin equator. Near the hole, however, powerful forces drag the gas into this equatorial plane. This means that the gas flows through a warped disk, whose plane changes as it moves inward. Until recently it was thought that this process occurred fairly smoothly, with a gentle warp gradually straightening the disk plane. However, work by Ogilvie in 1999 suggested that in some cases the warp could be much more abrupt, because the forces holding the disk together actually weaken in a warp. The disk would then break into a pair of disks: the outer one in the original, misaligned plane, and the inner one aligned with the black hole spin.

leicester11.png

Figure 1. Simulation of disk tearing. Gas spirals towards a supermassive black hole in a plane which is highly inclined to the black hole spin. Close to the hole the gas is forced to orbit in the equatorial plane, and this breaks the disk. The inner disk tumbles in space and eventually rotates partially in the opposite direction to the outer disk. Where the two disks touch, their gas collides. This removes the rotation that supports the gas against the black hole’s gravity, so it falls to a new orbit much closer to the hole. But there it again encounters gas moving in the opposite direction, so the process repeats. This produces rapid gas infall onto the black hole, making it grow rapidly and producing an extremely luminous source of radiation at the centre of the host galaxy.

Only detailed work with a powerful supercomputer can test this possibility. Our work with DiRAC-2 shows for the first time that it actually occurs in many realistic cases. The spin forces near the black hole break the central regions of tilted disks around spinning black holes into a set of distinct planes with only tenuous flows connecting them. These component disks then precess – their planes tumble in space – independently of each other. In most cases the continued precession of these disks eventually sets up partially counter-rotating gas flows, so that gas in one disk is moving almost opposite to gas in a nearby disk. As these opposed flows interact, they drive rapid infall towards the black hole. The process can repeat itself as disks form and interact even closer to the black hole, producing a cascade of tumbling and precessing disks closer and closer to it. This is important because rapid gas infall makes the black hole grow rapidly. This in turn makes the infalling gas extremely hot and bright, accounting for the most luminous objects in the entire Universe.


MEISKIN Highlight:  Simulations of the Circumgalactic Medium

meiskin1.png

Figure 1. Environment of galaxy haloes.

meiskin2.png

Figure 2. Halo mass distribution.

We are using cosmological numerical simulations to predict the expected physical properties of the Circumgalactic Medium (CGM) in the environment of massive star-forming galaxies, in the context of a ΛCDM cosmology. Our primary goal is to establish the gaseous flow pattern around the galaxies and the spatial extent to which winds impact the CGM, through comparison with observations. Since we wish to compare with observations, as a first step in this pilot project we need to identify the sites of galaxies in the simulation. We do this by identifying their dark matter haloes. This is not straightforward; it is akin to identifying distinct clouds in the sky. An illustration of the complex regions in which galaxy haloes reside is shown in Figure 1. Different halo-identification methods produce different halo catalogues, and different simulation numerical methods produce different haloes. We compare the haloes found using two different algorithms, applied to results from two simulation methods, to seek a range of halo masses for which the methods agree. Figure 2 shows agreement between these methods for haloes more massive than ten billion solar masses, corresponding to galaxy-sized haloes. The flow fields around the haloes are currently being explored.
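To illustrate why halo identification is delicate, here is a minimal sketch of one standard algorithm, friends-of-friends (FoF): particles closer than a chosen linking length are chained into the same group, so the resulting catalogue depends directly on that single parameter. This toy uses an arbitrary linking length and is not the production halo finder used in the project.

```python
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(pos, linking_length):
    """Minimal friends-of-friends (FoF) group finder.

    pos            : (N, 3) array of particle positions
    linking_length : particles closer than this are linked (same units as pos)
    Returns an array of group labels, one per particle.
    """
    parent = np.arange(len(pos))  # union-find forest, one tree per group

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # All pairs closer than the linking length, found via a k-d tree
    tree = cKDTree(pos)
    for i, j in tree.query_pairs(linking_length):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj  # merge the two groups
    return np.array([find(i) for i in range(len(pos))])

# Toy example: two compact clumps plus one isolated particle
rng = np.random.default_rng(1)
pos = np.vstack([rng.normal(0.0, 0.05, (50, 3)),
                 rng.normal(5.0, 0.05, (50, 3)),
                 [[20.0, 20.0, 20.0]]])
labels = friends_of_friends(pos, linking_length=0.5)
print(len(np.unique(labels)))
```

Halving or doubling the linking length changes which particles end up grouped together, which is one reason different halo finders disagree at low masses.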


PLANET

We have examined the evolution of astrophysical disc models in which the temperature decreases as a function of radius, leading to an angular velocity profile that varies with both radius and height. This work demonstrates for the first time that growth of the vertical shear instability in discs leads to a sustained turbulent flow whose associated Reynolds stress drives outward angular momentum transport. The results may have application to the outer regions of proto-planetary discs, influencing mass accretion and the formation of planets.

planet.png

Figure 1. Contours showing the growth of vertical velocity perturbations in a disk due to the development of the vertical shear instability.

We have examined the migration of low-mass planets embedded in magnetized discs with a layered structure consisting of turbulent regions near the surface and a non-turbulent “dead zone” near the disc midplane. A planet migrates because it experiences a net torque with two contributions: a Lindblad torque that drives inward migration, and a corotation torque that slows or reverses it. Our results show for the first time that the corotation torque in a dead zone is ineffective, the implication being that low-mass protoplanets will migrate rapidly as they grow to become super-Earths or Neptune-like planets. A paper describing these results is in preparation.

planet2.png

Figure 2. The left panel shows a 10 Earth-mass planet embedded in the dead zone of a disc. The right panel shows the log of the accretion stress as a function of radius and height. Values near the midplane correspond to α ~ 0.0001, and near the surface to α ~ 0.1.


Virgo
Constraining the nature of dark matter using the COCO simulations

The standard cosmological model, the “Lambda cold dark matter” (LCDM) model, is backed up by an impressive array of data covering a huge range of scales: from the entire observable universe, probed by measurements of temperature anisotropies in the microwave background radiation, to the scales of galaxy clusters and individual bright galaxies, sampled by large galaxy surveys. On smaller scales there is no strong evidence to support the standard model, yet it is on such scales that the nature of the dark matter is most clearly manifest. In the standard model the dark matter consists of cold particles, such as the lightest supersymmetric particles. There are, however, models of particle physics that predict lighter particles, such as sterile neutrinos, that would behave as warm (WDM), rather than cold (CDM), dark matter. If the dark matter is warm, free streaming in the early universe would have erased primordial fluctuations below mass scales corresponding to dwarf galaxies. The abundance and properties of dwarf galaxies could then encode information about the nature of the dark matter.
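The free-streaming suppression is often modelled with a fitting formula of the kind introduced by Viel et al. (2005) for thermal-relic WDM. A sketch of that parametrisation follows; the coefficients are illustrative values quoted from the literature rather than the ones used in the COCO project, so treat them as an assumption:

```python
import numpy as np

def wdm_transfer(k, m_wdm_kev, omega_wdm=0.25, h=0.7, mu=1.12):
    """Free-streaming suppression of the linear power spectrum for a
    thermal-relic WDM particle, in the Viel et al. (2005) form:

        P_WDM(k) = T(k)^2 * P_CDM(k),
        T(k)     = [1 + (alpha * k)^(2*mu)]^(-5/mu)

    k is in h/Mpc; m_wdm_kev is the particle mass in keV.
    The alpha coefficient below is an illustrative literature fit.
    """
    alpha = (0.049 * m_wdm_kev ** -1.11
             * (omega_wdm / 0.25) ** 0.11
             * (h / 0.7) ** 1.22)  # Mpc/h
    return (1.0 + (alpha * k) ** (2.0 * mu)) ** (-5.0 / mu)

# Suppression at a few wavenumbers, for two candidate particle masses
k = np.logspace(-1, 2, 4)  # h/Mpc
for m in (1.5, 3.0):       # thermal-relic masses in keV
    print(m, np.round(wdm_transfer(k, m), 3))
```

The lighter the particle, the larger the free-streaming scale and the stronger the small-scale suppression, which is exactly why the dwarf-galaxy (subhalo) counts in the two COCO volumes differ so dramatically.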

14vigro

Figure 1. The image above is a slice through the simulation volume, with the intergalactic gas colour-coded from blue to green to red with increasing temperature. The inset images show details in a small region of the simulation centred on a spiral galaxy seen face-on.

The figure shows two simulations from a Virgo Consortium project called “Copernicus Complexio” (COCO). These show the structure that grows in a region of diameter approximately 150 million light years in model universes in which the dark matter is cold (left) or warm (right). The WDM model is chosen to have as large a free-streaming length as is allowed by observations of gas clouds seen in the early universe (the so-called “Lyman-alpha forest” in distant quasar sight-lines). There are about a hundred haloes in each volume with masses similar to that of the dark halo around the Milky Way galaxy.

Each of these is resolved with about 10 million particles, making it possible for the first time to obtain a good statistical sample of well-resolved subhaloes. On the scales apparent in this figure there is very little difference between the two models; on small scales, however, there are large differences. In CDM tens of thousands of subhaloes are visible; in WDM only a few tens are. In principle this difference should be detectable in observational studies.

The project made extensive use of the DiRAC-2 data centric facility because the simulations, with 13 billion particles each, require a machine that has both an exceptionally large memory per core and a large total memory.


CLOVER:  Charming Physics

Strongly interacting particles (hadrons) come in two classes: mesons made of quarks and antiquarks and baryons consisting of three quarks. Quarks come in six varieties or flavours: up, down, strange, charm (u, d, s, c) and much heavier bottom and top quarks (not considered here).

Symmetries between the different quark flavours mean that particles group together in families called multiplets. The proton and neutron are part of a multiplet with 20 members, shown in the left figure. Particles with no charm quark are in the bottom layer, followed by a layer of particles with one charm quark and then with two charm quarks. At present there is very little experimental knowledge about the doubly-charmed particles in the top layer.

14clover

Figure 1. The octet multiplet (left) and a compilation of recent lattice determinations of baryon mass splittings (right).

We want to understand how particle masses are related to their position in the multiplet, and to the masses of the quarks they contain. In computer simulations we are not limited to the quark masses nature has given us – we can see what would happen if the quarks had quite different masses. As an example, in the second figure we show how the masses of the particles in the lower slice of the multiplet change as we move between a situation with all quark masses equal (the crossing point of the fan) and the physical point (at the left edge of the graph).

While the main force between the quarks and antiquarks comes from QCD, there is also a contribution from the electromagnetic force (quantum electrodynamics, QED), which is usually left out of lattice calculations. We are also performing calculations with both forces included, to see what the effects of QED are. We can see clearly that mesons in which the quark and antiquark have the same sign of electrical charge become heavier than mesons in which the quark and antiquark have opposite charges, as you might expect from the fact that opposite charges attract and like charges repel.

Simulating two forces needs more computing power than just looking at one force, so DiRAC II is important in making all aspects of this project possible.


HOT QCD:  Extreme QCD: Towards Quantitative Understanding of the QCD Phase Diagram

hotqcd11.png

Figure 1. The temperature dependence of the electrical conductivity of matter across the transition temperature.

Matter is usually composed of protons and neutrons, which consist of quarks bound tightly together by the Strong Interaction of particle physics. However, at incredibly high temperatures of a few trillion degrees Celsius, quarks become free and a new and poorly understood “Quark-Gluon Plasma” (QGP) phase is created. While the QGP is presumed to have existed soon after the Big Bang, it has also been produced in experiments where heavy nuclei (such as gold) are collided in particle accelerators at virtually the speed of light. This has been done at the Large Hadron Collider at CERN and at the Brookhaven National Laboratory on Long Island, USA.

Because each nucleus is incredibly small (100 billion of them side-by-side would span a distance of 1mm) the region of QGP created is correspondingly small. Due to this tiny size, it is impossible to place detectors inside this region, and, in any case, they wouldn’t work because they’d instantly melt into the plasma phase! The plasma “fireball” also expands and cools incredibly rapidly, so it quickly returns to the normal state of matter where quarks are tightly bound.

So to understand the QGP, physicists have to rely on observations of particles ejected from the fireball region out into the “normal” phase of matter and into their detectors and then “reverse engineer” the physics of the QGP. To make the connection from the detected particles back to the fireball, it is therefore essential to understand the QGP’s “transport” properties, i.e. how it expands and flows as a bulk material.

One such property is the electrical conductivity, which is the quantity this project has calculated. This requires supercomputers, such as those provided by the DiRAC Consortium, in order to simulate the many degrees of freedom of the Strong Interaction. Our results, shown in the figure, represent the first calculation of the temperature dependence of the conductivity of matter across the transition temperature, which in this figure corresponds to around 180 MeV.


HPQCD:  Light is right – working with up and down quarks at physical masses


Figure 1. A meson annihilates to a lepton and antineutrino (via a virtual W boson).

The world of quarks and gluons, currently the most fundamental building blocks of matter known, is hidden from us. Quarks are never seen as free particles; instead we see their bound states, known as hadrons, in experiments such as those at CERN’s Large Hadron Collider. Accurate calculations with the theory that describes quark and gluon interactions, Quantum Chromodynamics (QCD), are critical to connect the world of quarks to that of hadrons in a quantitative way.


Figure 2. Our results for decay constants of K and pi mesons as a function of the up/down quark mass. The result shown at 0.036 on the x-axis corresponds to the real-world masses.

Calculations are made tractable through a numerical technique known as lattice QCD, but they are computationally very challenging. The numerical cost increases as the quark mass falls, so one issue has been that of handling up and down quarks, which have very small masses (a few percent of that of the proton) in the real world. In the past, calculations have been done at several heavier masses, followed by an extrapolation to the physical point.

With the computing resources of DiRAC phase II (the Data Analytic Cluster in Cambridge), we are now able to perform calculations with up and down quarks at their physically light masses, which means that more accurate results can be obtained. Our formalism is numerically very efficient and has particularly small discretisation errors.

Key results that we have obtained this way during 2013 are those of the decay constants of the π, K, B and Bs mesons. The decay constant parameterizes the probability of the quark and antiquark in the meson being at the same point and annihilating, for example to a W boson of the weak force, as seen in Fig. 1.
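The decay constant enters the measurable annihilation rate directly. For orientation (this is the standard textbook relation for a charged pion, not a new result of this project), the axial current matrix element and the leptonic decay width are

```latex
\langle 0 \,|\, \bar d\, \gamma_\mu \gamma_5\, u \,|\, \pi^+(p) \rangle = i f_\pi p_\mu ,
\qquad
\Gamma(\pi^+ \to \ell^+ \nu_\ell) =
\frac{G_F^2}{8\pi}\, f_\pi^2\, m_\ell^2\, m_\pi
\left( 1 - \frac{m_\ell^2}{m_\pi^2} \right)^{2} |V_{ud}|^2 .
```

A lattice value of the decay constant combined with the measured width therefore pins down the CKM element, which is how the tests described below are carried out.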

Our results for the decay constants (Fig. 2) are the world’s most accurate. They have enabled us to test the CKM matrix of couplings between quarks and the W boson with unprecedented accuracy. We have also improved the prediction from known physics of the rate of the very rare process Bs → μ+μ−, recently seen by the LHCb experiment. The sensitivity of this process to new physics makes a future comparison of the experimental and theoretical rates potentially very exciting.


UKQCD BSM:  Beyond the Standard Model – is there anything more than the Higgs?

The Standard Model of particle interactions describes the interactions of all the constituents of matter to an impressive degree of accuracy. One of the successes of this model is the unification of the electromagnetic and the weak interactions into a new sector called the Electroweak sector. In this model, the Electroweak sector is characterised by a breaking of the SU(2) × U(1) gauge group, which explains why the photon is massless while the W and Z bosons (the mediators of the weak force) have a mass. The electroweak breaking is due to the widely known and notoriously elusive Higgs sector, which describes the interactions of a new particle, the Higgs boson. In addition to giving mass to the mediators of the weak force, the Higgs boson provides mass for ordinary fermionic matter (leptons and quarks). However, this elegant model is believed to be valid only up to a certain energy scale, above which new physics is bound to manifest itself.


Figure 1. The standard model of particle physics (left) and technicolor particles (right).

Many models have been proposed over the years to explain the theory at the scales of new physics. Technicolor is the framework according to which Electroweak symmetry breaking is due to the breaking of chiral symmetry in a new strong interaction. The model proposes a different answer to the origin of all particle masses, by means of a new mechanism to generate mass for the leptons. These ideas are inspired by a similar mechanism that is already at work in the theory of the strong interactions, Quantum Chromodynamics (QCD). A fundamental requirement for any beyond-the-Standard-Model theory is that the framework does not spoil any lower-energy prediction, i.e. that it is compatible with current observations. This is a severe constraint, which in Technicolor is implemented by the mechanism of walking, i.e. the slow running of the gauge coupling in an intermediate range of energies. This happens for near-conformal gauge theories. The question then becomes: is there a near-conformal gauge theory that can account for the observed Electroweak symmetry breaking?

The resources offered by the DiRAC consortium allowed us to perform Monte Carlo simulations for a theory that has been conjectured to be a candidate for the realisation of the Technicolor framework. The model, called Minimal Walking Technicolor, is an SU(2) gauge theory coupled with two adjoint Dirac fermion flavours. We proved that (near-)conformality can be rephrased as a mass spectrum with constant mass ratios between the particles as the constituent fermion mass goes to zero, and we observed this feature numerically.


UKQCD DWF


Figure 1. This figure displays the new result for the neutral kaon mixing amplitude BK in green. This new result was calculated in the continuum limit and directly at physical quark masses. The line and the curve represent two fits to previous data points which were used to extrapolate to the physical quark masses. With our new algorithm and greater machine power we have been able both to eliminate the systematic extrapolation error and to reduce the statistical error on this important result.

We have carried out simulations of lattice QCD including up, down and strange quark loops using a chiral fermion discretisation which is unique in fully preserving the meaning of left-handed spin orientation. The W boson only interacts with left-handed fermions, so preserving this symmetry is deeply important to the calculation of the complex weak matrix elements required to support experiments such as those at the Large Hadron Collider.

One important example of these is our determination of the two-pion decay amplitudes of the kaon. This is the process in which the asymmetry between matter and antimatter was discovered. Our calculations, which won the 2012 International Ken Wilson Lattice Award, involve mixing between many operators in the effective weak Hamiltonian. The calculation is only tractable with chiral fermions, but gives rise to a wholly new constraint on the difference between matter and antimatter in the standard model. Such constraints enable experiments to search for the effects of undiscovered physics beyond the standard model. Our calculation was updated in journal papers, one of which observed in detail a numerical cancellation that explains a long-standing puzzle about the likelihood of decay into different pion charges, known as the ΔI=1/2 rule – a case of numerical simulation leading to deeper understanding.

This progress over the last year has only been possible thanks to the powerful DiRAC BlueGene/Q supercomputer in Edinburgh, which our scientists helped IBM to develop. It allowed us to simulate chiral fermions at the quark masses seen in nature, with two spacings of the space-time grid (including all effects up to energy scales of 1.75 and 2.3 GeV). This has allowed us to predict continuum physics with the complete elimination of systematic errors arising from mass extrapolation. The calculation was shared between UKQCD and our US-Japan international collaborators in the Riken-Brookhaven-Columbia collaboration, and also made use of BlueGene/Q systems at the Argonne and Brookhaven National Laboratories in the US.

Our calculations on neutral kaon mixing are also relevant to CP violation. We presented both the precise continuum results for the standard model process at physical quark masses and for processes that only arise beyond the standard model.


Nuclear Physics


Figure 1. 40-Ti one-neutron removal spectral strength distribution associated with J-pi = 1/2+.

It has proven extremely difficult to describe nuclei directly from a set of nuclear forces, apart from the very lightest elements. In the last decade there has been great progress, supported by computational advances, in various methods to tackle more complex nuclei. Many of these are specialised to systems with even numbers of neutrons and protons, or to closed-shell nuclei, where many calculations simplify. Alternative phenomenological approaches, such as the nuclear shell model, can typically only work within a single major shell.

The complications in the problem arise from the structure of the nuclear force, which is non-central and spin-dependent, as well as from the superfluidity of atomic nuclei. One thus needs to work in large model spaces, take care of the superfluid Cooper pairs, and handle the complex operator structure of the nuclear force. One natural formalism that does all of this is based on the Gor’kov (Green’s function) approach to superfluid systems. Barbieri et al. have recently introduced an approach based on the Gor’kov pattern that works for open-shell nuclei, away from shell closures. A crucial issue for ab initio approaches concerns the ability to perform numerical calculations in increasingly large model spaces, with the aims of thoroughly checking the convergence and of constantly extending the reach to heavier systems.

A long-standing problem with self-consistent calculations of one-body propagators in finite systems concerns the rapid increase of the number of poles generated at each iterative step. The fast growth is expected, as the Lehmann representation of one-body Green’s functions develops a continuous cut along the real energy axis in connection with unbound states. This cut is discretised by a growing number of discrete energy states as the size of the model space is increased. In practical calculations, one needs to limit the number of discretised poles. We use Lanczos algorithms to project the energy denominators onto smaller Krylov spaces.
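The Krylov-space compression can be illustrated with a toy model: take a propagator with thousands of discrete poles, run a few dozen Lanczos steps, and check that the low energy moments of the spectral strength are reproduced by the much smaller tridiagonal matrix. This is only a schematic sketch in Python – the pole positions and weights below are random stand-ins, not nuclear data.

```python
import numpy as np

def lanczos(A, v0, m):
    """m-step Lanczos tridiagonalisation of a symmetric matrix A,
    started from vector v0, with full reorthogonalisation."""
    n = len(v0)
    V = np.zeros((m, n))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    V[0] = v
    w = A @ v
    alpha[0] = v @ w
    w = w - alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        v = w / beta[j - 1]
        V[j] = v
        w = A @ v - beta[j - 1] * V[j - 1]
        alpha[j] = v @ w
        w = w - alpha[j] * v
        w = w - V[: j + 1].T @ (V[: j + 1] @ w)  # reorthogonalise
    return alpha, beta

# Toy propagator: 2000 poles e_i with strengths wgt_i (random stand-ins)
rng = np.random.default_rng(1)
e = np.sort(rng.uniform(-50.0, 50.0, 2000))
wgt = rng.uniform(0.0, 1.0, 2000)
wgt /= wgt.sum()                 # normalised spectral strengths

A = np.diag(e)                   # "Hamiltonian" whose poles are the e_i
v0 = np.sqrt(wgt)                # start vector encodes the strengths

alpha, beta = lanczos(A, v0, 30)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

def moment(k):
    """k-th energy moment of the strength: exact vs the 30-pole surrogate."""
    exact = np.sum(wgt * e**k)
    ev, U = np.linalg.eigh(T)
    approx = np.sum(U[0] ** 2 * ev**k)
    return exact, approx

for k in (1, 2, 3):
    print(k, *moment(k))
```

The 30-pole tridiagonal matrix reproduces the low moments of the 2000-pole propagator essentially exactly, which is why a modest Krylov space suffices to tame the pole proliferation.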

As a result of such calculations we can obtain spectral strengths in nuclei that are not easily accessible by other theoretical means. As an example, Figure 1 shows the spectral strength in 40Ti, with a model-space cut-off ranging up to 13 major shells.

 

2014 Highlights

2014 was another productive year for our PPAN theory community researchers and some of the Science Challenges they tackled are highlighted below.

2014 list of publications


 

UKMHD Consortium:
The origins of magnetism in the quiet Sun


Figure 1. A volume rendering of the magnetic field distribution in a simulation of a region of quiet Sun.

In the outer layers of the solar interior, vigorous convective motions interact with complex magnetic field structures. Regions of quiet Sun (which, by definition, contain no large-scale magnetic features such as sunspots) are characterized by a spatially intermittent distribution of small-scale magnetic fields. Explaining the origin and evolution of quiet Sun magnetic fields is one of the most important challenges in modern solar physics.

It is believed that a significant fraction of quiet Sun magnetism is generated in the surface layers by a convectively-driven dynamo. To simulate this dynamo process, it is necessary to solve the governing equations of magnetohydrodynamics (MHD) for a compressible, electrically-conducting gas in three spatial dimensions and time. This is a computationally challenging problem that can only be tackled by using massively parallel simulations on the DiRAC High Performance Computing facilities.
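For reference, the governing equations in their ideal form are the standard compressible MHD system (the simulations also retain viscous, resistive and thermal-diffusion terms, omitted here for brevity):

```latex
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0,
\qquad
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \frac{1}{\mu_0}\,(\nabla\times\mathbf{B})\times\mathbf{B} + \rho\,\mathbf{g},
\qquad
\frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{u}\times\mathbf{B}),
\qquad
\nabla\cdot\mathbf{B} = 0.
```

The induction equation (third relation) is the one responsible for dynamo action: convective motions u stretch and fold the field B, amplifying a weak seed field.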


Figure 2. The temperature distribution (left) and vertical magnetic field distribution (right) from a simulation of a region of quiet Sun.

Using simulations of this type, we have confirmed that a dynamo mechanism could indeed account for the observations. In the absence of magnetic fields, the simulated convective flows are solar-like in many respects, with a “granular” pattern of broad warm upflows, surrounded by a network of cooler, narrow downflows. Furthermore, as suggested by the observations, these granules are organized on a larger (“mesogranular”) scale. When a seed magnetic field is introduced into this system, the convective motions drive a dynamo with a complex, intermittent field distribution that is comparable to that observed in the quiet Sun, with the most persistent magnetic structures accumulating preferentially at the mesogranular boundaries.


 

Kinetic Plasma Turbulence Highlight

Plasma turbulence is believed to be responsible for the fact that the solar wind – the supersonic expansion of the Sun’s corona – retains a high temperature throughout the heliosphere, even when it is expected to cool due to expansion. The solar wind may be heated by releasing magnetic energy to accelerate particles at thin magnetic discontinuities, or current sheets. In some models these current sheets are formed at the same time as the solar wind leaves the Sun, or they may form spontaneously as part of the turbulence seen in the solar wind.


Figure 1. Isosurfaces of current sheet density showing rippling due to ion drift-kink instability (right), and (left) virtual spacecraft data for magnetic field for the crossing of the current sheet shown in the top panel. (From Gingell et al., 2014, arXiv: 1411.4422.)

Using DiRAC we have carried out self-consistent, particle kinetic plasma simulations of the evolution of thin current sheets in a three-dimensional geometry. We find that there are two important instabilities which control energy release: the ion tearing instability, which releases energy via magnetic reconnection, and the drift-kink instability which, as its name implies, produces a rippled current sheet. The drift-kink instability can reduce the rate of energy release, and lead to thicker current sheets which are less liable to reconnect via the tearing instability. The competition between these two instabilities can only be studied with three-dimensional simulations, and these require large memory and processor counts which would not be available without DiRAC resources.

Our results indicate that the importance of thin current sheets to solar wind heating might be less than predicted by two-dimensional simulations which only capture the ion tearing instability. Energy release at current sheets could be important if they are actively produced at a high enough rate by large scale turbulence. We have also created virtual spacecraft data which can be used to test our simulations against actual data from spacecraft.


 

Leicester:
Merging Supermassive Black Holes


Figure 1. Numerical simulation of a gas cloud accreting on to a SMBH binary, performed on the DiRAC-2 Complexity cluster. The simulation followed the evolution of a turbulent, self-gravitating cloud of gas as it falls towards the black holes and forms an accretion disk around the binary.

Almost all galaxies have supermassive black holes (SMBHs) in their centres. These cosmic giants are millions to billions of times more massive than the Sun, and play a key role in shaping the formation and evolution of galaxies. Galaxies grow through collisions and mergers of smaller galaxies, and when pairs of galaxies merge their SMBHs slowly sink towards the centre of the merged galaxy. As they get closer together the “friction” dragging them inwards (actually gravitational encounters with stars) becomes progressively less efficient, and this process ultimately stalls when the SMBHs reach separations of around a parsec. However, no SMBH binaries are seen at these separations (despite many years of searching), suggesting that the process(es) that drive SMBH mergers are much more efficient than theory suggests. This long-standing discrepancy between theory and observations is known as the “last parsec problem”.

The most popular solution to this problem is to appeal to a gaseous accretion disc around the SMBH binary. These discs are commonly observed around single SMBHs (where they appear as quasars), and they allow SMBHs to grow by funnelling material down towards the event horizon. However, it is well known that if the binary and disc are aligned (i.e., they orbit in the same plane and rotate in the same direction), then the disc cannot shrink the binary’s orbit quickly enough. The Theoretical Astrophysics Group in Leicester has recently pioneered the idea that misaligned accretion discs may be the key mechanism driving SMBH mergers. The interaction between an accretion disc and a misaligned SMBH binary is very complicated, and can only be studied using large computer simulations. The power of DiRAC-2 allowed us to build detailed simulations of how accretion discs form and evolve around SMBH binaries (see Figure 1, adapted from Dunhill et al. 2014). These are the first models to study the disc formation process self-consistently (previous studies have simply assumed that a disc is present), and to look in detail at how the gravity of the disc shapes the evolution of the binary. As with previous studies, we find that prograde discs (i.e., discs which rotate in the same direction as the binary) do not shrink the binary’s orbit efficiently. However, when discs form in retrograde configurations (i.e., with the gas orbiting in the opposite direction to the binary) their interaction with the binary is much more efficient. Our simulations demonstrated that repeated retrograde accretion events can drive mergers between SMBH binaries on short time-scales, and this process offers an elegant potential solution to the last parsec problem.


 

UKQCD Beyond Standard Model

Novel strong interactions might provide an explanation for the mechanism of electroweak symmetry breaking. In order for a strong dynamics to explain the electroweak breaking, the theory must be near the onset of the conformal window and possess an anomalous dimension of the chiral condensate of order one. Furthermore, after the experimental observation of a light Higgs and no unexpected particles, a crucial requirement for a Beyond the Standard Model (BSM) theory is the existence of a light scalar in the spectrum.

The a priori interesting theories can be divided into two classes: infrared conformal theories, which are inside the conformal window and are scale-invariant in the limit of infinite distance, and walking theories, showing confinement at large distance, but behaving as infrared conformal theories for a wide range of intermediate scales. Theories in both categories can be used to construct BSM models without violating the existing constraints from precision measurements.


Figure 1. Selected spectral states of the theory in lattice units.

Our investigations of the SU(2) with one adjoint fermion model show that this theory displays crucial dynamical features attributed to phenomenologically viable models of electroweak symmetry breaking (see figure 1 of the spectrum from arXiv:1311.4155).

Indeed simulations performed for values of the couplings in the region connected with the continuum indicate that the system is close to the edge of the conformal window, with an anomalous dimension close to one and a scalar particle much lighter than the other particles in the spectrum.

At the present stage this is the only system showing such intriguing features. Moreover, our calculation provides the first evidence that concrete models with the non-perturbative features mandated by experimental constraints do exist.


 

HPQCD:  Searching for new physics in the magnetic moment of the muon


Figure 1. The interaction between a muon and a photon is complicated by the production of virtual particles.

A vacuum is never completely empty space but teems with particles that are created fleetingly by quantum fluctuations in energy and then disappear. The heavier the particle, the less time its brief existence can last. In that time, however, the particle can interact with the much lighter particles in our everyday world and thereby reveal its existence through tiny discrepancies between the properties of these particles and those expected in the Standard Model. The magnetic moment of the muon shows such a discrepancy, a tantalizing 25 ± 9 parts in 10^10. To pin this down more accurately a new experiment, E989 at Fermilab near Chicago, will reduce the experimental uncertainty by a factor of 4. An improved theoretical uncertainty from the Standard Model to match this needs lattice QCD calculations. This year HPQCD has developed a new method using DiRAC that will enable the accuracy needed to be achieved.


Figure 2. Our results for the strange quark HVP contribution to aμ as a function of lattice spacing.

The muon is a heavier cousin of the electron with the same electric charge and spin, so that it carries a magnetic moment. The ‘gyromagnetic ratio’, g, measures the ratio of magnetic moment to spin (in units of e/2m). Naively, g=2 but this ignores the interactions of the muon with the cloud of virtual particles discussed above. The anomalous magnetic moment, aμ = (g – 2)/2, can be measured very accurately from the difference between the circulation and spin precession frequencies as muons fly round a ring transverse to a magnetic field.
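The sizes involved can be checked with a few lines of arithmetic. The field strength and g value below are rounded illustrative stand-ins, not experimental data; only the relation aμ = (g − 2)/2 and the anomalous precession frequency formula are taken from the text.

```python
import math

# Physical constants (CODATA values, rounded)
e_charge = 1.602176634e-19   # elementary charge, C
m_mu     = 1.883531627e-28   # muon mass, kg

# Illustrative inputs (assumptions, not measured values):
B = 1.45                     # storage-ring magnetic field, tesla
g = 2.00233184               # a rounded illustrative gyromagnetic ratio

a_mu = (g - 2) / 2           # anomalous magnetic moment, ~1.166e-3
print(a_mu)

# The anomaly is what the ring actually measures: the spin-precession
# frequency exceeds the circulation (cyclotron) frequency by
# f_a = (e/m) * a_mu * B / (2*pi)
f_a = a_mu * e_charge * B / m_mu / (2 * math.pi)
print(f_a)                   # a few hundred kHz for a tesla-scale field
```

The anomaly is tiny (about one part in a thousand of g), yet it shifts the precession by an easily countable few hundred kilohertz, which is why the frequency-difference method is so precise.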

The biggest effect in the Standard Model result for aμ comes from Quantum Electrodynamics, where very high order calculations can be done. The largest theoretical uncertainty comes from Quantum Chromodynamics, through the process of Fig. 1 in which a muon radiates a photon that briefly becomes a quark-antiquark pair. This ‘hadronic vacuum polarisation’ (HVP) contribution needs to be calculated to better than 0.5% in lattice QCD to reduce the theoretical uncertainty. Our new method (arXiv:1403.1778, Phys. Rev. D89:114501) makes this possible, having achieved 1% for the first time for the strange quark HVP contribution. Our method has low systematic errors, and DiRAC enabled high statistics on realistic gluon field configurations. The up/down quark calculation is now underway. Errors will be reduced by a further factor of 3 as we increase statistics by a factor of 10 in 2015.


 

ExoMol

To find out what remote planets orbiting other stars are made of, astronomers analyse the way in which their atmospheres absorb starlight of different colours and compare it to a model, or ‘spectrum’, to identify different molecules.

In collaboration with the University of New South Wales we have developed a new spectrum for ‘hot’ methane which can be used to detect the molecule at temperatures above that of Earth, up to 1,500K/1220°C – something which was not possible before. Current models of methane are incomplete, leading to a severe underestimation of methane levels on planets. We anticipate our new model will have a big impact on the future study of planets and ‘cool’ stars external to our solar system, potentially helping scientists identify signs of extraterrestrial life.

Our study, published in PNAS, describes how we used some of the UK’s most advanced supercomputers, provided by the Distributed Research utilising Advanced Computing (DiRAC) project and run by the University of Cambridge, to calculate nearly 10 billion spectroscopic lines, each with a distinct colour at which methane can absorb light. The new list of lines is 2000 times bigger than any previous study, which means it can give more accurate information across a broader range of temperatures than was previously possible. The comprehensive spectrum we have created has only been possible with the astonishing power of modern supercomputers, which are needed for the billions of lines required for the modelling. We limited the temperature threshold to 1,500K to fit the capacity available, so more research could be done to expand the model to higher temperatures still. Our calculations required about 3 million CPU (central processing unit) hours alone; processing power only accessible to us through the DiRAC project.

The new model has been tested and verified by successfully reproducing in detail the way in which the methane in failed stars, called brown dwarfs, absorbs light.


 

CLOVER: Baryon Mass Splittings

Light baryons, such as the familiar proton and neutron, make up most of the matter in the visible universe. These baryons are built from three ‘flavours’ of quark: up, down and strange, or u, d and s. Particles group together in families or multiplets; the octet multiplet is shown on the left-hand side of the figure.

The mass splitting between the proton and neutron is a very delicately balanced quantity, partly caused by the mass difference between the u and d quarks, and partly by QED effects. Small changes in this mass difference would have profound effects on the way the universe looks today. The initial balance between hydrogen and helium, established in the first half-hour after the Big Bang, depends strongly on the neutron half-life, and so on the p-n mass splitting. Later, the production of carbon and oxygen in stars also depends strongly on the proton-neutron splitting.


Figure 1. The octet multiplet (left) and a compilation of recent lattice determinations of baryon mass splittings (right).

The strong force binding the quarks together is described by quantum chromodynamics, or QCD, which has to be numerically simulated (via lattice QCD). A compilation of recent lattice determinations of baryon mass splittings is given in the right panel of the figure. In particular there is also a mass splitting between the particles at the centre of the octet, the Sigma and the Lambda. This is a more complicated case, as the states mix. We have now extended our previous results to include this case and determined the mixing angle and mass splitting. While the effect of mixing on the masses is very small (second order in the mixing angle), it can be much larger for decay rates.
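The statement that the mass shifts are second order in the angle can be seen from a 2×2 toy mass matrix for the Sigma–Lambda system (a textbook sketch, with δ a generic mixing term):

```latex
M = \begin{pmatrix} m_\Sigma & \delta \\ \delta & m_\Lambda \end{pmatrix},
\qquad
\tan 2\theta = \frac{2\delta}{m_\Sigma - m_\Lambda},
\qquad
m_\pm = \frac{m_\Sigma + m_\Lambda}{2}
      \pm \frac{1}{2}\sqrt{(m_\Sigma - m_\Lambda)^2 + 4\delta^2}.
```

For small mixing, θ ≈ δ/(mΣ − mΛ), and expanding the square root shifts each mass by only ±δ²/(mΣ − mΛ) = ±(mΣ − mΛ)θ², i.e. second order in θ. The physical states, however, are cos θ |Σ⟩ + sin θ |Λ⟩ and its orthogonal partner, so a decay amplitude forbidden for the pure state picks up an admixture proportional to sin θ ≈ θ, first order in the angle – which is why decay rates are the more sensitive probe of the mixing.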

While the main force between the quarks and antiquarks comes from QCD there is also a contribution from the electromagnetic force, QED, which is usually left out of lattice calculations. We are also doing calculations with both forces included, to see how important the effects of QED are.


 

Black Holes

Firm observational evidence indicates that supermassive black holes are present at the centre of the majority of galaxies already from very early cosmic times all the way to the present day Universe. Feedback from these black holes in the form of large scale outflows is believed to be one of the key ingredients shaping the evolution and properties of galaxies in our Universe.

Recent observations (Cicone et al. 2014) have detected the emission from spatially extended cold gas around a bright quasar that existed when the Universe was less than 10% of its current age. This cold gas is moving at a very high velocity of the order of 1000km/s and has been detected up to 90,000 light years away from the rapidly accreting black hole that is powering the observed quasar. While this high velocity gas has been interpreted as the signature of a powerful black hole outflow, this observation is in tension with simple theoretical expectations which suggest that while rapidly moving, gas should be very hot instead of cold.

Figure 1. A galaxy hosting a black hole. The hot and rapidly moving outflow originating from the black hole is shown with a black contour, cold gas pockets containing up to a billion Solar masses and moving together with the hot outflow are depicted with grey pixels, and inflowing cosmic web filaments are illustrated in orange hues.

Using the moving-mesh code AREPO, researchers at IoA/KICC, Cambridge have performed some of the most realistic cosmological simulations that follow black hole growth and feedback in the early Universe. In the simulations, a galaxy hosting a bright quasar undergoes extremely high levels of star formation as it is located at the intersection of cosmic web filaments which deliver copious amounts of gas to it. This eventually drives a “starburst phase” where a large number of stars explode as supernovae and their joint effect triggers a galaxy-scale wind. Thus, as the even more powerful black hole-driven outflow leaves the central region of the galaxy and propagates outwards, it sweeps over gas that has been polluted by metals, initially produced within stars and subsequently released by supernova blast waves. These “pockets” of metal-enriched gas cause part of the hot quasar-driven outflow to cool to low temperatures, as illustrated in Figure 1, where the black hole and its host galaxy are situated at the centre of the image.

Thanks to the simulations performed at the DiRAC-2 facility in Cambridge, researchers at IoA/KICC have found that self-consistent modelling of both supernova and black hole outflows leads to the formation of cold gas pockets moving at 1000 km/s and spatially extended over 90,000 light years (30 kpc), in remarkable agreement with the puzzling observations of Cicone et al. 2014.


 

Virgo


Figure 1. The image above is a slice through the simulation volume, with the intergalactic gas colour coded from blue to green to red with increasing temperature. The inset images show details in a small region of the simulation centered on a spiral galaxy seen face-on.

The EAGLE simulation project is a Virgo flagship programme aimed at creating high-fidelity hydrodynamic simulations of the galaxy population. It has been a major investment of intellectual effort, manpower and computing resources, with game-changing results, opening a window on the role and physics of baryons in the Universe.

EAGLE uses the latest developments in SPH techniques, with state-of-the-art modelling of sub-grid processes such as star formation, feedback and black hole accretion. Unlike previous projects, our feedback scheme does not involve “de-coupling” feedback particles from hydrodynamic forces, or artificially suppressing cooling. The largest simulation, with a volume of 10^6 Mpc^3, contains 1000s of well-resolved galaxies similar to the Milky Way, 100s of galaxy groups and a handful of galaxy clusters. This huge statistical sample allows us to understand the variety of galaxy formation histories and the impact of galaxy environment, and to study rare objects such as highly star-forming sub-mm galaxies.

Obviously, a simulation of this size is of no use unless it is able to accurately reproduce the properties of the local galaxy population. EAGLE has been spectacularly successful: for example, the model correctly predicts the evolution of the galaxy mass function and the rising trend of galaxy specific star formation rates (Schaye et al. 2014 at z=0; Furlong et al. 2014 with predictions at higher z). Numerous papers are in advanced stages of preparation comparing additional galaxy properties, such as HI mass functions (Bahe et al.), galaxy colour distributions and luminosity functions (Trayford et al.), OVI absorbers (Rahmati et al.) and AGN luminosity functions (Rosas-Guevara et al.). The fidelity of the simulation also makes it a powerful tool for understanding the biases in observational surveys.

Having shown that the simulation’s fidelity is excellent, we can now exploit it as a tool for understanding the physical mechanisms by which galaxies form and evolve (see Figure 1).


 

Extreme QCD: Towards Quantitative Understanding of the QCD Phase Diagram

There are four fundamental forces that describe all known interactions in the universe: gravity; electromagnetism; the weak interaction; and the strong interaction, which is the topic of our research. The strong force binds quarks into hadrons such as protons and neutrons, which in turn form the nuclei of atoms, making up more than 99% of all the known matter in the universe.

Normally, the strong interaction binds quarks so tightly that they never escape from within hadrons. However, at temperatures greater than the deconfining temperature, TC (several trillion degrees Celsius!), it undergoes a substantial change in character, becoming considerably weaker, and quarks become “free”. This new phase of matter is called the “quark-gluon plasma” (QGP) and is presumed to have occurred in the first few microseconds after the Big Bang. Despite the QGP being crucial to the development of the Universe (it was born in this state!) it is poorly understood.

Physicists re-create a mini-version of the QGP by colliding large nuclei (such as gold or lead) in particle accelerators at virtually the speed of light. These experiments have been performed in the Large Hadron Collider at CERN and at the Brookhaven National Laboratory in the USA. The region of QGP created in these experiments is incredibly small – just the size of the colliding nuclei. The QGP “fireball” formed expands and cools very rapidly, quickly returning to normal matter where quarks are tightly bound inside hadrons.

Most hadrons melt in the QGP, meaning that the quarks inside them become free, like satellites escaping the Earth’s gravity. However, it was proposed 30 years ago that some heavy “mesons” may remain bound up to quite high temperatures. Our collaboration has calculated the spectral functions of these bottomonium mesons (composed of a bottom quark and its antiquark) and we show the results above. Each panel has the results from two nearby temperatures, with the coldest pair in the top-left and the hottest in the bottom-right. The strong peak at around 9.5 GeV shows a bound state which weakens as the temperature increases, but remains in existence up to around 2TC. This confirms the decades-old hypothesis that these mesons do not melt as soon as the QGP is formed, and that measuring their features is an effective way of determining the system’s temperature, i.e. they can be used as a “thermometer” of the QGP.


 

COSMOS:

COSMOS consortium researchers have been exploiting the DiRAC HPC Facilities to make progress towards ambitious milestones in five key inter-related areas: (i) the extreme universe, (ii) the cosmic microwave sky, (iii) dark energy, (iv) galaxy formation and (v) black holes.

jj14cosmos.png

Figure 1. CMB temperature (left) and E-mode polarization (right) bispectrum reconstruction obtained from the Planck Full Mission data using the MODAL estimator on COSMOS. Isosurfaces are shown both positive (red) and negative (blue).

Planck Satellite Science with Polarization:

The COSMOS Consortium flagship project to analyse and interpret data from ESA’s Planck Satellite has gone forward, building on the success of the 2013 First Data Release to analyse the Planck Full Mission data with CMB polarization, in results announced in December 2014. DiRAC resources were used to derive accurate estimates for cosmological parameters, revealing interesting deviations from earlier WMAP results and checking the consistency of the polarization data with temperature. Planck data were used to impose world-leading limits on cosmic non-Gaussianity that substantially constrain inflationary Universe paradigms. The three-point correlator (bispectrum) was evaluated to high precision for the first time in both temperature and polarization (and mixed bispectra), with a wide-ranging model survey undertaken, which yielded tight constraints on standard local, equilateral and orthogonal non-Gaussianity, as well as a measurement of the lensing bispectrum.

Extreme Universe – Blackholes:

Consortium members continue with world-leading black hole simulations, developing new methods to study black-hole collisions – events so violent their output eclipses the entire electromagnetic universe. Our simulations on Cosmos demonstrate that up to 50% of the total energy can be converted into gravitational waves in such collisions, supporting present assumptions in the analysis of collision search experiments at the LHC. We have also been preparing gravitational wave templates for the Advanced LIGO experiment which will begin operating in 2015. In addition, we have developed and tested a numerical relativity code GRChombo, the first full adaptive mesh refinement code which can be used to simulate black hole collisions and gravitational wave generation in early universe cosmology.

Galaxy Surveys, Dark Energy and Dark Matter:

Consortium resources have been deployed to analyse galaxy surveys through N-body simulations to better understand the properties of the universe and its perturbations. Much of the focus has been on preparing for The Dark Energy Survey (DES), as well as ambitious future surveys with DESI, Euclid and SKA. Power spectrum measurements from Baryon Oscillation Spectroscopic Survey (BOSS) data and mock catalogues corresponding to the Data Releases 10-12 samples were performed using COSMOS.


 

Domain Wall Highlight

We have pursued the first ever lattice simulations of Quantum Chromodynamics (QCD), the theory of the strong force, with continuum-limit results from chiral lattice quarks at their physical masses. Only left-handed quarks couple to the electroweak forces; this preference for left-handed over right-handed quarks shows that the weak forces surprisingly break reflection (or parity) symmetry. Our chiral approach reproduces the handedness symmetry of continuum QCD and is particularly well suited to determining the rates of weak decay processes involving left-handed quark operators. Our results confirm the strong force as the confining theory of hadrons (particles made from quarks). The mass spectrum and pion decay constant agree with experimental determinations at the fraction-of-a-percent level, and allow determination of the Vus quark-flavour-mixing parameter of the Standard Model. These and the following achievements are crucial ingredients in interpreting the results from the large experimental facilities (e.g. at CERN).

jj14domain

Figure 1. The figure above shows (left) the fit to simulation data for the decay constant from arXiv:1411.7017 and (right) the action density for one of the gauge fields generated on the DiRAC facility.

Quark Masses:

We have determined the up/down and strange quark masses – fundamental constants of nature – to a fraction of a percent.

Neutral Kaon oscillations:

In the Standard Model (SM) of particle physics, weak interactions allow quarks to change from one type or flavour to another in a highly constrained pattern. Quark flavour-changing processes with no accompanying change in electric charge are very rare and present a small background against which to search for new physics. The SM allows electrically-neutral kaons to oscillate into their own antiparticles in a way which breaks the combined symmetry of particle-antiparticle exchange (charge) and mirror reflection (parity), known as CP symmetry. CP violation is essential to enable the dominance of matter over antimatter in the Universe and was first discovered in the neutral kaon system. We have calculated the neutral kaon oscillation amplitude, BK, to 0.3% accuracy.

Algorithms:

Vigorous activity by UKQCD has leveraged STFC investment in DiRAC with around a 200x speedup from code and algorithmic improvements:

  • UKQCD scientists worked for five years with IBM research to help design the BlueGene/Q chip (being responsible for 8% of the chip area);
  • UKQCD have developed new multi-grid algorithms that adaptively accelerate the calculation of quark propagators;
  • A volume averaging technique developed by RBC-UKQCD, known as AMA, has lowered our statistical errors to around the 0.2% level.
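The bias-corrected averaging idea behind AMA can be sketched in a few lines. This is a toy illustration, not the lattice implementation: `approx` stands in for a cheap, relaxed-precision measurement (e.g. a loosely converged quark propagator) and `exact` for the expensive full-precision one.

```python
def ama_estimate(exact, approx, sources, n_exact):
    """Schematic all-mode-averaging estimator: average a cheap approximate
    measurement over every source point, then remove its bias using only a
    few expensive exact measurements."""
    cheap = sum(approx(s) for s in sources) / len(sources)
    correction_set = sources[:n_exact]
    correction = sum(exact(s) - approx(s) for s in correction_set) / n_exact
    return cheap + correction

# Toy check: a constant-bias approximation is corrected exactly.
exact = lambda s: s * s
approx = lambda s: s * s + 0.1          # hypothetical cheap, biased measurement
value = ama_estimate(exact, approx, [1.0, 2.0, 3.0, 4.0], n_exact=2)
```

Because the correction term has small variance whenever `approx` tracks `exact` closely, almost all of the statistical power comes from the cheap averages, which is what makes sub-percent errors affordable.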

 

ECOGAL

Large Scale Triggering of Star Formation and Molecular Clouds:

This project has received the bulk of our effort over the past 12 months. We have been running 127-million-particle SPH simulations of a spiral shock to study the formation of molecular clouds and establish the first self-consistent initial conditions for simulations of star formation. These simulations follow the formation of molecular clouds along a 4 kpc length of a spiral arm with a total mass of 20 million solar masses and can resolve the formation of self-gravitating structures down to 10 solar masses. We have been studying the importance of thermal physics in triggering star formation, and have carried out several tests to ensure that the dynamics of the shocked regions are properly captured. A companion study using grid-based simulations has been completed which investigates the dynamics of single clouds entering a spiral arm. This study showed that the gravitational potential, plus shock-induced hydrodynamical instabilities, can drive turbulence into the cloud. This reinforces the conclusions from our Smoothed Particle Hydrodynamics simulations.

We are now developing full galactic models of the ISM including live potentials. This work will form a significant part of our simulation work in 2015.

The Galactic Centre:

We have concluded studies on star formation in the Galactic centre and have expanded our scope to study the dynamics of the Central Molecular Zone (CMZ) including the “Molinari” ring. These simulations show how a single cloud can reproduce the overall features. A detailed comparison with observations is ongoing. We have also used larger scale simulations of the inner 500 pc of the galaxy to study how non-axisymmetric structures interact with infalling gas, in order to probe the relationship between the CMZ and the inner star formation activity.

2015 Highlights

At DiRAC we are proud to have made some major contributions to theoretical modelling and HPC-based research in the STFC Frontier Science agenda. Below you can find information on the Science Highlights from some of the projects that DiRAC is involved in…

2015 list of publications


 

HPQCD_banner

Stress Testing the Standard Model

A vacuum is never completely empty space but teems with particles that are created fleetingly by quantum fluctuations in energy and then disappear. The heavier the particle, the less time its brief existence can last. In that time, however, the particle can interact with the much lighter particles in our everyday world and thereby reveal its existence through tiny discrepancies in the properties of these particles from those expected in the Standard Model. The magnetic moment of the muon shows such a discrepancy, a tantalizing 25±9 parts in 10¹⁰. To pin this down more accurately a new experiment, E989 at Fermilab near Chicago, will reduce the experimental uncertainty by a factor of 4. A matching improvement in the theoretical uncertainty from the Standard Model needs lattice QCD calculations. This year HPQCD has developed a new method using DiRAC that will enable the accuracy needed to be achieved.

HPQCD15

Figure 1. Determination of meson decay constants, compared to experimental results, where available. HPQCD, arXiv: 1503.05762, 1408.5768.

The muon is a heavier cousin of the electron with the same electric charge and spin, so that it carries a magnetic moment. The ‘gyromagnetic ratio’, g, measures the ratio of magnetic moment to spin (in units of e/2m). Naively, g=2 but this ignores the interactions of the muon with the cloud of virtual particles discussed above. The anomalous magnetic moment, aμ = (g – 2)/2, can be measured very accurately from the difference between the circulation and spin precession frequencies as muons fly round a ring transverse to a magnetic field.
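As a back-of-the-envelope illustration (not part of the HPQCD calculation), the definitions above can be evaluated directly. The relation for the anomalous spin-precession frequency, ωa = aμ eB/m, is standard; the 1.45 T field matches the muon storage-ring experiments, and the constants are CODATA values.

```python
E = 1.602176634e-19    # elementary charge (C)
M_MU = 1.883531627e-28  # muon mass (kg)

def anomaly(g: float) -> float:
    """Anomalous magnetic moment a_mu = (g - 2) / 2."""
    return (g - 2.0) / 2.0

def anomalous_precession(a_mu: float, b_field: float) -> float:
    """Angular frequency (rad/s) of the spin precession relative to the
    momentum direction: omega_a = a_mu * e * B / m."""
    return a_mu * E * b_field / M_MU

a_mu = anomaly(2.00233184)                # g slightly above 2
omega_a = anomalous_precession(a_mu, 1.45)  # ~1.4e6 rad/s in a 1.45 T ring
```

The tiny difference between g and 2 is thus amplified into a slow, directly measurable precession of the spin relative to the momentum as the muons circulate.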

The biggest effect in the Standard Model result for aμ comes from Quantum Electrodynamics where very high order calculations can be done. The largest theoretical uncertainty is from Quantum Chromodynamics and the process of Fig. 1 in which a muon radiates a photon that briefly becomes a quark-antiquark pair. This ‘hadronic vacuum polarisation’ (HVP) contribution needs to be calculated to better than 0.5% in lattice QCD to reduce the theoretical uncertainty. Our new method (arXiv:1403.1778, Phys. Rev. D89:114501) makes this possible, having achieved 1% for the first time for the strange quark HVP contribution. Our method has low systematic errors and DiRAC enabled high statistics on realistic gluon field configurations. The up/down quark calculation is now underway. Errors will be reduced by a further factor of 3 as we increase statistics by a factor of 10 in 2015.


 

Kinetic Plasma Turbulence:  Determining the shape of Type Ia supernova explosions

Type Ia Supernovae (SNe Ia) are among the most energetic phenomena in the Universe and are important for many different astrophysical research fields (e.g. theories of binary stellar evolution, chemical enrichment of galaxies, and cosmology). These spectacular events are believed to be associated with thermonuclear explosions of white dwarf stars but answers to the questions of when, why and how the explosions are triggered remain unclear.

Kinetic15

Figure 1. Flux and polarization spectra (black) predicted 7 days before maximum for the merger model of Pakmor et al. (2012) viewed from a specific orientation. For comparison, grey lines show observed spectra of SN 2004dt at the same epoch.

In the past few decades spectropolarimetry has played a leading role in the search for a comprehensive picture of SNe Ia (Wang & Wheeler 2008). The particular value of this technique is its ability to probe the geometry of the explosion: linear polarization, attributed to electron scattering, has been detected in SNe Ia, suggesting that asymmetries in the progenitor system and/or explosion mechanism are present. However, interpreting polarimetric data is challenging and careful modelling is required to make the connection between observed spectropolarimetry and specific ejecta geometries. Our current research aims to address this challenge by making polarization predictions from modern multi-dimensional explosion models.

We recently developed a new Monte Carlo technique for use in our radiative transfer code (Kromer & Sim 2009) that allows us to calculate polarization signatures for hydrodynamical explosion models (Bulla et al. 2015). Compared to simpler techniques, this method leads to a substantial reduction in the Monte Carlo noise and thus in the required computing time (a factor of ~ 50). The combination of this new technique and access to the super-computing facilities provided by DiRAC has now made it feasible to compute polarization spectra for fully three-dimensional explosion models using simulations that run for only a few days.
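The payoff of variance reduction follows from the basic 1/√N scaling of Monte Carlo noise: at a fixed target error, cutting the variance contributed per packet cuts the number of packets, and hence the runtime, by the same factor. A generic sketch of that scaling (not the Bulla et al. technique itself):

```python
import random
import statistics

def mc_error(sample, n, trials=200):
    """Empirical standard deviation of an n-sample Monte Carlo mean,
    measured by repeating the whole n-sample estimate many times."""
    means = [sum(sample() for _ in range(n)) / n for _ in range(trials)]
    return statistics.stdev(means)

random.seed(1)                             # reproducible illustration
err_small = mc_error(random.random, 100)   # ~0.029 for Uniform(0, 1) samples
err_large = mc_error(random.random, 400)   # ~0.014: 4x the samples, ~half the noise
```

Quadrupling the sample count only halves the noise, which is why reducing the variance at the source is so much more effective than simply running longer.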

In a recent paper (Bulla et al. 2016), we presented polarization calculations for the violent merger of two carbon-oxygen white dwarfs. The violent merger model has received considerable attention in the past few years, but our study is the first to investigate its spectropolarimetric signatures in detail. We demonstrated that the particular merger model of Pakmor et al. (2012) provides a remarkably good match with some highly-polarized SNe Ia (see Fig. 1), but is too asymmetric to reproduce the low levels of polarization commonly seen in many other SNe Ia. We are now extending our studies to alternative explosion scenarios: this will allow us to determine how spectropolarimetry can be most effectively used to distinguish and test explosion theories.

Bibliography: Bulla et al. 2015, MNRAS, 450, 967; Bulla et al. 2016, MNRAS, 455, 1060; Kromer & Sim 2009, MNRAS, 398, 1809; Pakmor et al. 2012, ApJL, 747, L10; Wang & Wheeler 2008, ARA&A, 46, 433.


 

HORIZON UK-Consortium:  Galaxy alignments from cosmological hydrodynamics simulations

Galaxies come in a variety of shapes. Nevertheless, in a simplified view of galaxy formation, one can classify them either as spirals or ellipticals. However, the orientation and ellipticity of these spirals and ellipticals is not random; it depends on their formation history and their environment. In particular, tidal forces from the large-scale structure of the Universe (see figure below) play a role in determining correlations between ellipticity, orientation and the underlying (dark) matter density field. Such correlations are referred to as ‘intrinsic alignments’.

On the other hand, future imaging surveys from the Euclid satellite and the Large Synoptic Survey Telescope aim to measure per-cent-level perturbations of galaxy ellipticities caused by a different effect: the deviation of photons from their original path due to the gravity of these same large-scale structures. This ‘gravitational lensing’ effect constitutes a powerful probe of the nature of the mysterious dark energy that drives the accelerated expansion of our Universe.

horizon15.png

Figure 1. Left panel: 28 Mpc slice through the Horizon-AGN simulation cube (140 Mpc on a side), at redshift z=0.5. Right panel: 10x zoom on a region located in the bottom left part of the cube. The different colours indicate gas density (in green), gas metallicity (in blue) and stars (in red). Galaxies are identified as clusters of stars. Their shapes are measured to investigate intrinsic ellipticity correlations that contaminate the weak gravitational lensing signal used by astronomers to constrain the nature of dark energy.

The quest to improve our understanding of intrinsic alignments, so that they do not contaminate dark energy observables, is therefore paramount. Thanks to the DiRAC facility, our group (Horizon-UK) led an analysis of unprecedented statistical scope and detail of galaxy shape and spin alignments, mining the huge dataset from the state-of-the-art hydrodynamic cosmological simulation Horizon-AGN. These pioneering results are reported in Chisari et al., 2015, MNRAS, 454, 2736.


 

Leicester: First pre-infall mass estimate of a Milky Way satellite galaxy

We have used the DiRAC-2 Complexity cluster to obtain a mass estimate of the Carina dwarf spheroidal (dSph) satellite of the Milky Way at the time when it first fell into the gravitational field of our Galaxy (about 6 Gyr ago). This is the first primordial mass estimate that has been obtained directly from observed data without recourse to matching with cosmological simulations.

leicester15.png

Figure 1. Comparison of our mass estimate for Carina (black and red data points with error bars) with predictions for the relationship between stellar mass (y-axis) and dynamical mass (x-axis) obtained via abundance matching of observed galaxies and simulated halos. (Figure reproduced from Ural et al., 2015.)

The dSphs which surround the Milky Way are vital laboratories in which to test theories of galaxy formation. Not only are they the lowest-mass stellar systems known to contain dark matter, but they also populate the faint end of the galaxy luminosity function. Over the past 15 years, several issues have been identified which appear to place cosmological galaxy formation simulations in tension with observations: (1) simulations which contain only dark matter over-predict the numbers of objects around a Milky Way-like galaxy by at least an order of magnitude (the “missing satellites” problem; Moore et al., 1998); (2) simulated haloes exhibit central density cusps in their dark matter profiles, while observations are often claimed to be more consistent with uniform-density central cores (the “cusp-core” problem); (3) simulations predict the existence of dark haloes whose masses should have allowed them to form stars but which are apparently missing from the census of observed galaxies (the “too big to fail” problem; Boylan-Kolchin et al., 2012). The resolution of all three issues requires knowledge of the masses of the dwarf satellites both at the present epoch and at the time of their formation.

Despite the availability of large observed data sets containing measured velocities for 1000s of stars in several of the brighter Milky Way satellites, previous analyses of their mass distributions have made many simplifying assumptions. In particular, most models assume that the dwarf galaxy is in dynamical equilibrium and isolated from external perturbations. For small Milky Way satellite galaxies, both these assumptions are of questionable validity. N-body simulations provide a natural way to relax these assumptions and explore the impact of the Milky Way on the evolution of a dwarf galaxy. We have combined a Markov-Chain-Monte-Carlo algorithm with an N-body simulation code to explore the parameter space of models of the Carina dwarf galaxy using 20,000 medium-resolution simulations.
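The coupling of MCMC to simulations can be sketched as follows. This is a schematic Metropolis-Hastings loop, not the Leicester group's code: `run_simulation` stands in for a full N-body run, and the toy model in the usage example is purely illustrative.

```python
import math
import random

def log_likelihood(params, data, run_simulation):
    """Gaussian log-likelihood comparing (mock) simulation output to data."""
    model = run_simulation(params)
    return -0.5 * sum((m - d) ** 2 for m, d in zip(model, data))

def metropolis(data, run_simulation, start, step, n_steps, seed=0):
    """Metropolis-Hastings sampler whose likelihood is evaluated by
    running a simulation at each proposed parameter point."""
    rng = random.Random(seed)
    chain, current = [], list(start)
    cur_ll = log_likelihood(current, data, run_simulation)
    for _ in range(n_steps):
        proposal = [p + rng.gauss(0.0, step) for p in current]
        prop_ll = log_likelihood(proposal, data, run_simulation)
        if rng.random() < math.exp(min(0.0, prop_ll - cur_ll)):  # accept/reject
            current, cur_ll = proposal, prop_ll
        chain.append(list(current))
    return chain

# Toy usage: recover parameter a = 3 from two mock 'observables'.
toy_sim = lambda p: [p[0], 2.0 * p[0]]      # hypothetical stand-in model
chain = metropolis(data=[3.0, 6.0], run_simulation=toy_sim,
                   start=[0.0], step=0.5, n_steps=2000)
posterior_mean = sum(c[0] for c in chain[1000:]) / len(chain[1000:])
```

In the real application each likelihood evaluation is a medium-resolution N-body simulation, which is why some 20,000 runs and many parallel chains on Complexity were required.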

We identified the Carina dSph as an interesting target for this initial study due to the large amount of available kinematic data and the fact that there were already hints that its outer regions are being disturbed by the Milky Way (e.g. Muñoz et al., 2006). Our algorithm was able to constrain the present-day mass of Carina to be (7±3)×10⁷ solar masses. More interestingly, we were also able to constrain Carina’s mass at the time it first fell into the Milky Way to be 3.9 (−2.4, +3.9)×10⁸ solar masses. Our mass estimate is significantly lower than the halo mass generally associated with dSphs of comparable luminosity to Carina. In Ural et al. (2015), we suggested that this may be evidence that galaxy formation becomes stochastic in low-mass haloes.

The calculations for this project consumed approximately 1M core hours on the Complexity cluster of DiRAC-2, as a very significant number of simulations were required to verify the efficacy of the method using artificial data sets before it could be applied to the observed data with confidence. By using the Complexity cluster we were able to run multiple MCMC chains in parallel which greatly reduced the overall time required to complete the project. The work is reported in Ural, Wilkinson, Read & Walker, 2015 (Nature Communications, 6, 7599).


 

Black Holes

Quasars are extremely bright objects that reside in the centres of galaxies. They are believed to consist of a supermassive black hole (some as large as a billion times the mass of our own Sun) that is surrounded by gas. These black holes grow in size by swallowing their reservoir of gas and, in doing so, release an enormous amount of energy into their host galaxies. This energy affects the growth of each galaxy, changing where and when stars form and playing an important role in the different appearances of galaxies that we observe in the Universe today.

BLACKHOLES2015

Figure 1.

Such quasars are very rare in the present day Universe, but were much more active in the past. Because of their brightness, some of the most distant objects we can see are some of the most massive quasars, existing barely 1 billion years after the Big Bang. The presence of these objects raises many questions: how were these objects able to grow to such a large mass so quickly? What do the galaxies that host such objects look like? How does the growth of such a massive black hole affect the surrounding environment?

Using the DiRAC supercomputers at the Universities of Cambridge, Leicester and Durham, researchers at the Institute of Astronomy and Kavli Institute for Cosmology, Cambridge simulated the growth of a galaxy containing a quasar similar to those we observe. These results are summarised in a recent Letter to MNRAS by Curtis and Sijacki, accepted on December 10, 2015. Vital to the work was the use of a new computational technique to significantly increase the resolution of simulations around the central black hole, which Curtis and Sijacki developed over the last year (2015, MNRAS, 454, 3445) thanks to unique computational resources provided by DiRAC. This novel technique enabled the capture of the flow of gas around black holes with unprecedented accuracy in terms of both mass and spatial resolution.

The work by Curtis and Sijacki provides new insights into what we might hope to observe in the near future around these distant quasars with the James Webb space telescope and the Atacama Large Millimeter/submillimeter Array. Long thin structures, called filaments, transport gas from the outskirts of the galaxy all the way to the central region. These filaments grow a disc of gas that orbits around the black hole. The gas in this disc is relatively cold, which allows it to become dense enough to form stars at a dramatic rate. A fraction of the gas escapes the disc and falls onto the black hole, releasing large quantities of energy. This drives a powerful flow of super-heated gas out of the central galaxy, oriented in such a way that it does not destroy the surrounding disc, which is the key novel result of the model.

In the coming years there will be many observations of quasars in the early Universe. The ways in which they compare with the predictions of Curtis and Sijacki will provide clues to the physical processes that we do not fully understand, allowing deeper insight into the role black holes play in shaping our Universe.


 

VIRGO: Modern cosmological test of gravity

A hot topic in cosmology is to test gravity on scales well beyond our Solar system. 2015 marks the centenary of Einstein’s publication of his General theory of Relativity (GR), which is still our standard gravity theory. Although GR has been tested rigorously in the Solar system, its application in cosmology is still largely an extrapolation. Previous cosmological observations did not have the precision to test it accurately, but the situation is changing quickly in light of the future galaxy and cluster surveys such as Euclid, LSST, DESI and eROSITA, all of which aim to test GR as a primary science objective. Over the next decade, these observations will determine key cosmological parameters with percent-level precision, and bring the cosmological tests of GR to a similar precision for the first time. They will also address another critical question in modern cosmology – whether the mysterious accelerated expansion of our Universe is due to some unknown exotic matter (dark energy) or a different behaviour of gravity on cosmological scales.

virgo15.png

Figure 1. The velocity field in a slice of the simulation box of the standard (left) and non-standard (right) gravity model. Colours are used to illustrate the amplitudes of the velocity field. Filamentary structures, towards which matter fall, are clearly visible, as well as the model differences around these filaments. It is by differences such as shown here that different gravity models can be tested against observational data.

To fully exploit these billion-dollar surveys and to achieve their scientific potential for testing gravity, it is crucial to be able to accurately predict the observable signatures of non-standard gravity theories. This is a numerically intensive problem, which depends critically on powerful supercomputing resources such as DiRAC-2. Indeed, DiRAC has enabled us to make significant progress over the past few years [1-11]. These efforts have led to the development of state-of-the-art algorithms and simulation codes, such as the first ever parallelised N-body code for such theories, ECOSMOG [1-3], as well as a recent code [8] implementing a novel algorithm to trace light rays across cosmological simulations on the fly. These codes have been used widely in studies in this field, and ruled out popular non-standard models such as Galileon gravity [7]. More recently, tests on DiRAC led to an improvement in the performance of ECOSMOG by more than a factor of 10 for some theories [9], making it possible to study such theories with large, high-resolution simulations in the future.

The numerical studies also have implications for other fields. For example, the Galileon model features nonlinear differential equations similar to the famous Monge-Ampere equation in the optimal transportation problem, and its study has led to a new stable algorithm for solving this type of equation. These studies have fully equipped us [8-10] with efficient tools to make accurate predictions for a number of cosmological observables, including weak gravitational lensing, redshift-space distortions and clusters of galaxies, for which we expect precise observational data in the next decade. They have prepared us for accurate cosmological tests of gravity and will help to shed light on the origin of the accelerated expansion of our Universe. More exciting progress is expected over the next few years.

Bibliography: [1] Li B., Zhao G., Teyssier R., Koyama K., JCAP, 01, 051 (2012); [2] Li B., Zhao G., Koyama K., JCAP, 05, 023 (2013); [3] Li B., et al., JCAP, 11, 012 (2013); [4] Li B., et al., MNRAS, 428, 743 (2013); [5] Jennings E., et al., MNRAS, 425, 2128 (2012); Hellwing W. A., et al., Phys. Rev. Lett., 112, 221102 (2014); [6] Shi D., et al., MNRAS, 452, 3179 (2015); Cai et al., MNRAS, 451, 1036 (2015); [7] Barreira A., et al., JCAP, 08, 028 (2015); JCAP, 08, 059 (2014); [8] Barreira A., et al., JCAP, submitted; [9] Barreira A., Bose S., Li B., JCAP, submitted; [10] He J., Li B., Phys. Rev. Lett., submitted


 

Exeter

Accretion is an important process relevant to various fields in astrophysics, from star formation to the study of compact binaries and Type Ia supernovae. Continuous theoretical and numerical efforts have been devoted to understanding the interaction between the inner edge of an accretion disk and the stellar surface. Complementary to these works, there are also many studies devoted to the effect of accretion on the structure and evolution of objects, using either simple semi-analytic approaches or simulations based on one-dimensional (1D) stellar evolution codes. But no multi-dimensional numerical study has ever been devoted to the effect of accretion on the structure of accreting objects. Because of this lack of sophisticated modelling, great uncertainties still remain regarding the fraction of accretion luminosity imparted to an accreting object or radiated away, how the accreted material is redistributed over the stellar surface, and the depth it reaches. Early accretion history may impact the properties of stellar objects (luminosity, radius and effective temperature) in star-forming regions and young clusters even after several Myr, when the accretion process has ceased. These effects could provide an explanation for several observed features in star-forming regions and young clusters, with important consequences for our general understanding of star formation.


Figure 1. Distribution of tracer particles accreted onto a young Sun for the cold (e1) and hot (e9) accretion cases respectively. The colour scale shows radial velocity in units of cm/s and the grey points show the positions of the tracer particles. The cold accretion case shows rapid mixing of the accreted particles within the convective envelope. The hot accretion case shows the formation of a hot surface layer where particles accumulate and do not sink.

The DiRAC Resource, Complexity, has been used to perform the first hydrodynamic simulations describing the multi-dimensional structure of accreting, convective low-mass stars. The simulations were computed with our fully compressible, time-implicit code MUSIC (Multidimensional Stellar Implicit Code), recently enhanced with a Jacobian-free Newton–Krylov time-integration method that significantly improves its performance. These developments and the benchmarking of the code were carried out on Complexity. These new accretion calculations have enabled us to understand the effect of so-called “hot” accretion, in which the accreted material carries a larger entropy than the bulk of the stellar material.
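
The Jacobian-free Newton–Krylov idea mentioned above can be illustrated with a toy implicit timestep. This is a hedged sketch using SciPy's newton_krylov solver on a simple 1D backward-Euler diffusion update, not the MUSIC code itself; the grid size, timestep and diffusivity are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy backward-Euler diffusion step: find u_new satisfying
# (u_new - u_old)/dt = D * d2(u_new)/dx2, solved matrix-free.
n, dt, D = 64, 1e-3, 1.0
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u_old = np.exp(-100.0 * (x - 0.5) ** 2)  # initial Gaussian profile

def residual(u):
    # Second derivative with fixed (Dirichlet) boundary values.
    d2u = np.zeros_like(u)
    d2u[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return u - u_old - dt * D * d2u

# newton_krylov needs only residual evaluations: the Jacobian is never
# assembled, and its action on vectors is approximated by finite
# differences inside the Krylov (GMRES-type) linear solves.
u_new = newton_krylov(residual, u_old, f_tol=1e-9)
```

The appeal for stellar hydrodynamics is that implicit steps of this kind remain stable at timesteps far beyond the explicit acoustic limit, while the Jacobian, which would be prohibitively large for a multi-dimensional stellar grid, never has to be stored.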

The multi-D simulations show an accumulation of hot material at the surface that does not sink and instead produces a hot surface layer, which tends to suppress convection in the stellar envelope. This has enabled us to derive an improved treatment of accretion in 1D stellar evolution models, based on an accretion boundary condition. Such a treatment is more physical than the standard assumption that the accreted energy is redistributed throughout the stellar interior. These new results will be published in Geroux, Baraffe, Viallet et al. (2016, A&A, submitted). The impact of this more physical accretion boundary condition on the evolution of young low-mass stars and brown dwarfs, and on observations of young clusters, will be explored in follow-up studies. These pioneering simulations performed on the DiRAC facility show the potential of time-implicit multi-D simulations to improve our understanding of stellar physics and evolution.

2016 Highlights

2016 was another productive year for our PPAN theory community researchers and some of the Science Challenges they tackled are highlighted below. 

2016 list of publications


 

Gravitational waves:  DiRAC simulations play a key role in gravitational-wave discovery


Figure 1. Wave signal detected by LIGO (top), waveforms predicted by general relativity (middle) and the residual from matching these (bottom).

On 11 February 2016, the LIGO collaboration announced the first direct detection of gravitational waves and the first observation of a binary black hole system. Accurate theoretical models of the signal were needed to find it and, more importantly, to decode the signal to work out what the source was. These models rely on large numbers of numerical solutions of Einstein’s equations for the last orbits and merger of two black holes, for a variety of binary configurations. The DiRAC Data Centric system, COSMA5, was used by researchers at Cardiff University to perform these simulations. With these results, along with international collaborators, they constructed the generic-binary model that was used to measure the masses of the two black holes that were detected, the mass of the final black hole, and to glean some basic information about how fast the black holes were spinning. Their model was crucial in measuring the properties of the gravitational-wave signal, and the DiRAC Data Centric system COSMA5 was crucial in producing that model. More information on the detection of gravitational waves can be found at the LIGO collaboration website.

In the figure to the right, the top plot shows the signal of gravitational waves detected by the LIGO observatory located in Hanford, USA, whilst the middle plot shows the waveforms predicted by general relativity. The x-axis plots time and the y-axis plots the strain, which is the fractional amount by which distances are distorted by the passing gravitational wave. The bottom plot shows that the LIGO data matches the predictions very closely. (Adapted from Fig. 1 in Physical Review Letters 116, 061102 (2016))
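
The comparison described here, subtracting a best-fit template from the data to leave a residual, can be sketched in a few lines of NumPy. This is a hedged toy example on a synthetic chirp-like signal plus noise, not LIGO data or the actual analysis pipeline:

```python
import numpy as np

# Quantify how well a template waveform matches a noisy signal.
# The 'signal' here is a synthetic chirp plus Gaussian noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4096)
template = np.sin(2 * np.pi * (30.0 + 120.0 * t) * t) * np.exp(-((t - 0.8) / 0.3) ** 2)
signal = template + 0.3 * rng.standard_normal(t.size)

# Best-fit amplitude of the template by least squares, then the residual.
amp = np.dot(signal, template) / np.dot(template, template)
residual = signal - amp * template

# Normalised overlap ('match'): 1.0 would be a perfect fit.
match = np.dot(signal, template) / np.sqrt(np.dot(signal, signal) * np.dot(template, template))
```

In the real analysis the overlap is computed in the frequency domain, weighted by the detector noise spectrum, and maximised over template parameters; the simple time-domain inner product above only conveys the idea.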


 


Lattice QCD and the Search for New Physics


Figure 1. The interaction between a muon and a photon is complicated by the production of virtual particles such as quarks and gluons.

If you could see deep into the subatomic world with slow-motion glasses, you would see empty space teeming with particles, appearing and disappearing in a tiny fraction of a second. If you were very lucky you might glimpse a particle so heavy and rare as to have escaped detection so far at CERN’s Large Hadron Collider. Such particles would be harbingers of new physics going beyond our existing Standard Model, but are proving very hard to find.

One way to demonstrate their existence indirectly is through the impact they have, as part of the teeming vacuum, on the properties of particles we can study directly. A new experiment will start shortly at Fermilab near Chicago to measure the magnetic moment of the muon (a heavier cousin of the electron) to an astonishing accuracy of 1 part in 10¹⁰. If we can calculate the effect of all the particles in the Standard Model on this magnetic moment, then a discrepancy with experiment would indicate the existence of new particles. An improved theoretical uncertainty from the Standard Model needs numerical calculations for the theory of the strong interaction, QCD. This is because the largest source of uncertainty comes from quarks, which are strongly interacting (see Figure 1); this contribution is known as the hadronic vacuum polarization (HVP). HPQCD has developed a new lattice QCD method using DiRAC to determine the HVP with improved accuracy.


Figure 2. A summary of results for the hadronic vacuum polarization contribution to the magnetic moment of the muon. Our new result (purple square), shows significant improvement on earlier lattice QCD results (pink square). Results in the lower half are derived from experimental data from electron-positron collisions. The green circle shows the value inferred from the current experimental result for the magnetic moment. The discrepancy may be a first sign of new physics.

We used our method to determine the strange and charm quark contributions to the HVP (arXiv:1403.1778) for the first time from lattice QCD, and then the very small bottom quark contribution (arXiv:1408.5768). In the past year we have determined the up and down quark contribution (arXiv:1601.03071) and estimated the effect of disconnected diagrams (lower picture in Figure 1) (arXiv:1512.03270). Our final result, summing all of these contributions, has a total uncertainty of 2% (see Figure 2). The error is dominated by possible systematic effects from ignoring the electric charge of the quarks and from taking the masses of the up and down quarks to be the same. Work is now underway to include both of these effects in the calculation to reduce the uncertainty further, ahead of the new experimental results in 2018.


 

Post-periapsis Pancakes: Sustenance for Self-gravity in Tidal Disruption Events

A Tidal Disruption Event (TDE), which occurs when a star is destroyed by the gravitational field of a supermassive black hole, produces a stream of debris, the evolution of which ultimately determines the observational properties of the event. Typically, investigations of this process resort to predicting future accretion of material assuming the debris orbits are Keplerian or simulating restricted parameter regimes where the computational cost is a small fraction of that for more realistic parameters. In a series of papers that took advantage of Complexity@DiRAC, we simulated the long-term evolution of the debris stream for realistic TDE parameters, revealing new effects that can significantly affect the observable properties of these events (Coughlin & Nixon 2015; Coughlin, Nixon et al. 2016a,b). The figure to the right is taken from Coughlin, Nixon et al. (2016a: MNRAS 455, 4, 3612). In this work, we showed that a post-periapsis caustic – a location where the locus of gas parcels comprising the stream would collapse into a two-dimensional surface if they evolved solely in the gravitational field of the hole – occurs when the pericentre distance of the star is of the order of the tidal radius of the hole. We showed that this ‘pancake’ induces significant density perturbations in the debris stream, and these fluctuations can be sufficient to gravitationally destabilize the stream, resulting in its fragmentation into bound clumps.


Figure 1. Column density rendering of the stream of debris produced by a Tidal Disruption Event, showing both the full stream (left) and a zoom-in (right). In this case the stream is strongly self-gravitating, and fragments into bound clumps (Coughlin, Nixon et al., 2016).

As these clumps fall back towards the black hole their accretion can result in strong bursts of radiation, while the low-density regions between clumps can yield relatively little. This makes the luminosity of the event highly time variable, and may be an explanation for the rapid time variability of systems such as Swift J1644. However, the detailed behaviour of the clumps is not yet well understood, as it depends on their internal thermodynamics and how quickly they can cool and condense. If they can cool fast enough, they may be able to survive a second pericentre passage and potentially form planets orbiting the black hole, otherwise they will be re-disrupted at pericentre and join the accretion flow. These processes will be the subject of future DiRAC simulations. Over the next few years new observing missions will provide a substantial increase in the number of events for which we have data. Determining how the luminosity of a TDE varies with time is critical to interpreting these events and discerning the progenitor properties, such as black hole mass and spin.


 

Exomol

To find out what remote planets orbiting other stars are made of, astronomers analyse the way in which their atmospheres absorb starlight of different colours and compare it to a model, or ‘spectrum’, to identify different molecules. One such object is Kepler-69c, which is 70 percent larger than Earth. Its 242-day orbit around a Sun-like star resembles that of our neighbouring planet Venus; however, the composition of Kepler-69c is still unknown.


Figure 1. Artists impression of a super-Venus Kepler-69c. Image credit: NASA Ames/JPL-Caltech/T.Pyle

In collaboration with NASA Ames we have developed molecular data to model (hot) super-Venuses, a yet-to-be-detected class of object with sulphur-dominated chemistry driven by volcanic activity. According to atmospheric chemical models, the main constituents of the atmospheres of rocky super-Earths and super-Venuses include sulphur dioxide (SO2) and sulphur trioxide (SO3). These spectroscopically important molecules are generally not included in terrestrial exoplanet atmospheric models owing to the lack of laboratory data. We anticipate that our new data will have a big impact on future studies of rocky super-Earths and super-Venuses. The recently discovered TRAPPIST-1 system of exoplanets is one existing example where our data will play an important role in future atmospheric retrievals.

Our SO3 data set, named UYT2, is one of the biggest line lists in the ExoMol database. It contains 21 billion vibration-rotation transitions and can be used to model transit IR spectra of volcanic atmospheres for temperatures up to 800 K. To accomplish this data-intensive task, we used some of the UK’s most advanced supercomputers, provided by the Distributed Research utilising Advanced Computing (DiRAC) project and run by the University of Cambridge. Our calculations required millions of CPU hours, hundreds of thousands of GPU hours, and up to 5 TB of RAM, processing power only accessible to us through the DiRAC project.
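
To turn such a line list into a spectrum at a given temperature, each transition's absorption intensity is weighted by the thermal population of its lower level. The sketch below uses the standard HITRAN-style intensity expression; the line parameters and partition-function values are made-up illustrations, not entries from the UYT2 list:

```python
import numpy as np

C2 = 1.4387769     # second radiation constant hc/k, in cm*K
C = 2.99792458e10  # speed of light in cm/s

def line_intensity(nu, e_lower, g_upper, a_einstein, q_t, temperature):
    """Absorption line intensity (cm/molecule) from line-list quantities:
    transition wavenumber nu (cm^-1), lower-state energy e_lower (cm^-1),
    upper-state degeneracy, Einstein A coefficient (s^-1), and the
    partition function Q(T) at the given temperature (K)."""
    boltz = np.exp(-C2 * e_lower / temperature)      # lower-level population
    stim = 1.0 - np.exp(-C2 * nu / temperature)      # stimulated emission
    return g_upper * a_einstein / (8.0 * np.pi * C * nu**2) * boltz * stim / q_t

# Illustrative (invented) 'hot band' line whose lower level sits
# 2000 cm^-1 above the ground state, evaluated at two temperatures.
cold = line_intensity(1400.0, 2000.0, 15, 1.0, 500.0, 300.0)
hot = line_intensity(1400.0, 2000.0, 15, 1.0, 3000.0, 800.0)
```

The invented numbers illustrate why hot line lists are needed: a line starting from a level 2000 cm⁻¹ above the ground state is dramatically stronger at 800 K than at 300 K, even after the growth of the partition function is accounted for.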


 

COSMOS Consortium

COSMOS consortium researchers exploiting DiRAC HPC Facilities have made progress towards ambitious milestones in five key inter-related areas: (i) extreme gravity, (ii) inflation and the early universe, (iii) the cosmic microwave sky, (iv) dark energy and (v) galaxy archaeology. In the limited space available, we highlight recent breakthroughs in general relativistic simulations of black holes and inflation.


Figure 1: GRchombo simulation of gravitational waves from a black hole merger (using LIGO GW150914 parameter values). COSMOS IPCC visualization movie using Intel OSPRay ray-tracing through AMR dataset (submitted to SC16 Scientific Visualization Showcase).

On 11 February 2016, the LIGO Scientific Collaboration announced the ground-breaking discovery of a gravitational wave signal (GW150914), representing the first incontrovertible observation of a black hole binary system as well as the most energetic event ever observed in the Universe. Given recent advances such as our innovative GRChombo AMR code, COSMOS Consortium researchers (several also LSC members) are well placed to exploit these new scientific opportunities in gravitational wave astronomy. We have continued our long-standing industrial collaboration through the COSMOS Intel Parallel Computing Centre (IPCC) to improve our research codes in cosmology and general relativity; we have particularly focussed on GRChombo and improving performance on multi- and many-core (Xeon Phi) systems, following in the wake of our 2015 HPCwire Award. Intel has also responded to the demands of ever-increasing dataset sizes by developing a powerful software-defined visualisation framework, including the OpenSWR software rasteriser and the OSPRay parallel ray tracer and volume renderer, a program in which COSMOS IPCC is directly involved. These tools have allowed us to create unprecedented real-time visualisations of our science results using several-TB datasets. In 2016 we demonstrated the adaptive mesh capabilities of ParaView powered by OSPRay and OpenSWR by creating videos of GRChombo black hole mergers for two of the largest HPC conferences, International Supercomputing 2016 and Supercomputing 2016, as publicised in insideHPC.

Inflation is now the paradigmatic theory of the early universe, providing a dynamical mechanism to explain many features of our current cosmology. In particular, it explains why the universe is homogeneous over cosmological scales. Nevertheless, can inflation begin in the first place if the initial conditions are not homogeneous? Answering this question requires a numerical approach, as solving the full GR equations is not possible for generic inhomogeneous initial conditions. We attacked this problem with the state-of-the-art numerical relativity code GRChombo and the results are reported in arXiv:1608.04408. We found that high-scale inflation is generically robust: it will inflate despite large inhomogeneities. More interestingly, we found that low-scale inflation, preferred by many string theory phenomenological models, is not robust, failing to inflate in the presence of large-scale but subdominant inhomogeneities.


Figure 2. Spinning black hole in 5D.

One of the biggest unresolved problems in Einstein’s theory of gravity is the weak cosmic censorship conjecture, which posits that spacetime singularities are always hidden inside a black hole horizon. Since nothing can escape a black hole, the breakdown of Einstein’s theory at singularities would be ‘censored’ from the outside world. In our numerical work with GRChombo, we find that the conjecture appears to be false if space has four or more dimensions. For example, the late stages in the evolution of a rapidly spinning black hole in six spacetime dimensions are shown (left); this is a disk-like structure with a great deal of fractal microstructure: the thinner rings are connected by ever thinner membranes that ultimately lead to a ‘naked singularity’. Given these results in extra dimensions, it is clear that, if cosmic censorship is true, then it must be a property specific to the three-dimensional space that we live in; without this property, Einstein’s theory of general relativity could lose its predictive power.


 

Understanding the Properties and Stability of Matter


Figure 1. Snapshot of the QCD plus QED vacuum.

The fundamental constituents of the strong force are quarks and gluons, which themselves bind together to form the familiar building blocks of nuclear physics, protons and neutrons. The two most common forms of quarks are the up quark and the down quark. The electric charge of the up quark is +2/3, whereas the down quark carries charge -1/3. A proton is composed of two up quarks and one down quark, adding to a net charge of +1, whereas the neutron has two down and one up quark, producing a charge-neutral object. Lattice QCD simulations have now reached a precision where isospin breaking effects become important. These have two sources: the mass difference of the up and down quarks, and the electromagnetic interactions. Both effects are of the same order of magnitude, so a direct calculation from QCD and QED is necessary.

Most important for our understanding of the salient features of QCD, such as quark confinement and spontaneous chiral symmetry breaking, is an understanding of the properties of the vacuum. Simulations of QCD and QED allow us to study the effect of dynamical quarks on the vacuum. A snapshot of the vacuum fields is shown in Figure 1, in the form of a three-dimensional slice of space-time. The topological charge density of the QCD fields is rendered together with the magnetic field of QED. The positive topological charge lump at the lower left of the image is accompanied by a large magnetic field strength, presenting an opportunity to observe the chiral magnetic effect, which separates right- and left-handed quarks. It has been predicted theoretically, but never observed.


Figure 2. The allowed region for the quark mass ratio.

Isospin breaking effects are crucial for our existence. The Universe would not exist in its present form if the neutron–proton mass difference were only slightly different. If it were larger than the binding energy of the deuteron, no fusion would take place. If it were a little smaller, all hydrogen would have been burned to helium. Knowing the masses of the neutron and proton and how they depend on the masses and charges of the individual quarks, we can express the allowed region in terms of the u and d quark masses and the electromagnetic coupling, as shown in Figure 2. It turns out that both αEM and the ratio of light quark masses must be finely tuned.

Bibliography:
[1] R. Horsley, Y. Nakamura, H. Perlt, D. Pleiter, P.E.L. Rakow, G. Schierholz, A. Schiller, R. Stokes, H. Stüben, R.D. Young and J.M. Zanotti, J. Phys. G43 (2016) 10LT02.
[2] R. Horsley, Y. Nakamura, H. Perlt, D. Pleiter, P.E.L. Rakow, G. Schierholz, A. Schiller, R. Stokes, H. Stüben, R.D. Young and J.M. Zanotti, JHEP 1604 (2016) 093.


 

Virgo
Black Holes as the Nemesis of Galaxy Formation


Figure 1. An image of a galaxy undergoing an AGN outburst in the Eagle simulation. The image shows gas density, coloured by temperature, and captures the moment of galaxy transformation. A small merger triggers the growth of a black hole, generating a huge outburst of energy that heats the surrounding gas and disperses the cold gas inflows.

Galaxies fall into two clearly distinct types: active, blue-sequence galaxies that are rapidly forming young stars, and passive red-sequence galaxies in which star formation has almost completely ceased. These sequences are closely related to the visual galaxy classification system, based on the prominence of spiral arms first suggested by Edwin Hubble.

Numerical simulations by the EAGLE collaboration show that these sequences, and thus the Hubble sequence of galaxies, are created by a competition between star formation-driven outflows and gas accretion onto the supermassive black hole at a galaxy’s centre.

In galaxies containing fewer than 30 billion solar-like stars, young stars exploding as supernovae are able to drive buoyant outflows that prevent high density gas building up around the black hole. As the galaxies increase in stellar mass, they convert a larger fraction of the inflowing gas into stars and grow along the blue sequence.

More massive galaxies, however, are surrounded by a hot corona and this has important ramifications. The supernova-driven outflow is no longer buoyant and star formation is unable to prevent the build up of gas in the galaxy, and around the black hole in particular. This triggers a strongly non-linear response from the black hole. Its accretion rate rises rapidly, heating the galaxy’s corona and disrupting the incoming supply of cool gas. The galaxy is starved of the fuel for star formation and makes a transition to the red sequence, and further growth predominantly occurs through galaxy mergers. Interestingly, our own Milky Way galaxy lies at the boundary between these two galaxy sequences suggesting that it will undergo a transition to a red and `dead’ early-type galaxy in the near future.

This model has deep implications for understanding the Universe. Far from being exotic predictions of General Relativity, the growth of supermassive black holes sets a fundamental limit to the mass of star forming galaxies.


Figure 2. Formation timescales of galaxies as a function of stellar mass in the Eagle simulation (points) and observational data (Ilbert 2015).

The figure on the left shows formation timescales of galaxies as a function of stellar mass in the Eagle simulation (points) and observational data (Ilbert 2015). The separation of galaxies into blue (rapidly star forming) and red (passive) sequences is clearly seen. Most low mass galaxies follow the star forming blue galaxy sequence, doubling their stellar mass every 3 billion years, but more massive galaxies have much longer star formation growth timescales (a horizontal dotted line shows the present day age of the Universe; galaxies with longer star formation timescales are shown above the line). The transition between the sequences occurs at a stellar mass of around 30 billion solar masses. While it is not possible to reliably measure the masses of black holes in observed galaxies, it is possible to do this for galaxies in the EAGLE cosmological simulation. The simulated galaxies follow the observed data closely and points are coloured by the mass of the black hole relative to that of the host halo. In red-sequence galaxies, black holes account for around 0.01% of the halo mass, but the fraction is a factor of 100 smaller in blue-sequence galaxies. Around a galaxy mass of 30 billion solar masses, there is considerable scatter in the relative mass of the black hole and in the star formation growth timescale. However, at a fixed galaxy mass, systems with a higher black hole mass have substantially longer growth timescales, implying the existence of an intimate connection between black hole mass and galaxy type.


 

Black Holes:  Illuminating the Early Universe with Radiative Transfer Simulations

State-of-the-art observations show that within approximately 1 billion years after the Big Bang, all of the primordial hydrogen that permeated the immense space between galaxies had been ionized by high energy photons in a process known as reionization. This time period in the history of the Universe, also known as ‘the dawn of galaxies’, remains one of the most active frontiers in modern astrophysics from both an observational and a theoretical point of view.

Much remains unknown about the sources which provided the high energy photons needed for reionization. The standard picture suggests it was likely the first generation of galaxies with a possible additional contribution coming from radiating supermassive black holes. More exotic solutions have been proposed as well, including annihilating dark matter. Identifying the sources responsible for reionization as well as understanding their properties is key to constraining the conditions which governed all subsequent galaxy formation in the Universe.


Unfortunately, our current telescopes only probe the tail end of this epoch. For this reason, theoretical and computer models are relied upon for insight into the physics of reionization. Simulating this regime is both complicated and computationally expensive; however, over the past two years, researchers at the Institute of Astronomy (IoA) and the Kavli Institute for Cosmology Cambridge (KICC) have designed a new, state-of-the-art computer algorithm which they are exploiting on DiRAC supercomputers at the Universities of Cambridge, Leicester and Durham to better understand the epoch of reionization. This new algorithm (presented in a recently submitted paper by Katz et al. 2016) follows the flow of high energy photons, coupled directly and on the fly to the chemical and thermodynamic state of the gas, propagating them at different speeds depending on the density of the medium. This overcomes many issues with standard algorithms in the field at much reduced computational cost, allowing us to simulate significantly more complex systems than previously possible.
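
A toy version of a density-dependent photon speed can convey the idea. The sketch below advects a photon density along one dimension with a locally reduced speed of light using a first-order upwind scheme; the grid, the reduction rule and the density field are invented for illustration and are not the Katz et al. algorithm:

```python
import numpy as np

# Toy 1D upwind advection of photon number density with a locally
# varying speed of light (illustrative rule: slower in dense gas).
n_cells, c_full = 200, 1.0
gas_density = np.ones(n_cells)
gas_density[80:120] = 10.0               # a dense clump mid-domain
c_local = c_full / np.sqrt(gas_density)  # invented reduction rule

photons = np.zeros(n_cells)
photons[0] = 1.0                         # source at the left edge
dx = 1.0 / n_cells
dt = 0.5 * dx / c_full                   # CFL-limited timestep

for _ in range(300):
    flux = c_local * photons             # upwind flux, rightward transport
    photons[1:] += dt / dx * (flux[:-1] - flux[1:])
    photons[0] = 1.0                     # keep the source shining
```

Because the flux c·n is conserved in the steady state, photons pile up where the local speed is reduced; this is why variable-speed schemes must be constructed carefully so that ionization fronts still propagate at the correct rate.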

The team at IoA and KICC has thus for the first time generated a multi-scale view of galaxy formation during this epoch – from the large scale cosmic web all the way down to individual star forming regions inside of primordial galaxies. Because the flow of radiation is followed explicitly, the researchers have used an inhomogeneous radiation field to compute the distribution of emission lines coming from ionized metals (such as [CII] and [OIII]) which can be observed by some of our most sensitive telescopes. Likewise, they have self-consistently tracked the location and mass of ionized, neutral, and molecular gas to make testable predictions about these quantities which control star formation in the Early Universe.

In the coming years, new observations from the James Webb Space Telescope, the Atacama Large Millimeter Array, and the Square Kilometre Array will shed light on this exciting frontier and will be compared to these simulated models to give insight into the physical processes which drive ‘the dawn of galaxies’.


 

Hadron Spectroscopy:  Baryon Mass Splittings

In Cheung et al [JHEP 1612 (2016) 089, arXiv:1610.01073] we computed spectra of highly excited charmonia and charmed mesons with light (up and down) quark masses, corresponding to mπ ≈ 240 MeV, closer to their physical masses than in our earlier study where mπ ≈ 390 MeV. A highlight of the results was the presence of hybrid mesons (where the gluonic field is excited), some of which had exotic quantum numbers (cannot be solely a quark-antiquark pair). The resulting phenomenology of hybrid mesons gave insight into the effective low-energy gluonic degrees of freedom. One main conclusion was that the spectra showed only mild dependence on the light-quark mass, and the pattern of states, in particular the hybrid mesons, did not change significantly as the light-quark mass was reduced. This is an important systematic check and suggests that the interesting phenomenology of excited and hybrid mesons we found is not an artefact of working with unphysically-heavy light-quark masses.


Figure 1. The spectrum of excited D mesons (containing a charm quark and a light antiquark).

Figure 1 shows the spectrum of excited D mesons (containing a charm quark and a light antiquark) labelled by J^P (J is spin, P is parity): green and red boxes are the computed masses and one-sigma statistical uncertainties, with red highlighting states identified as hybrid mesons; black lines are experimental values.


Figure 2. The various S-wave (orbital angular momentum = 0) cross sections: upper (lower) panel shows quantities proportional to the diagonal (off-diagonal) cross sections.

In Moir et al [JHEP 1610 (2016) 011, arXiv:1607.07093], we presented the first ab-initio study of coupled-channel scattering involving charm mesons (and the first lattice QCD study of three coupled scattering channels). Working with light-quark masses corresponding to mπ ≈ 390 MeV, from computations of finite-volume spectra we determined infinite-volume Dπ, Dη, DsK isospin-1/2 scattering amplitudes. Figure 2 shows the various S-wave (orbital angular momentum = 0) cross sections: the upper (lower) panel shows quantities proportional to the diagonal (off-diagonal) cross sections. The singularity structure of the amplitudes showed a near-threshold bound state in J^P = 0^+ corresponding to the physical D0*(2400) resonance, a deeply bound state with J^P = 1^- corresponding to the D*, and evidence for a narrow J^P = 2^+ resonance. This study represents a significant step toward addressing some puzzles in the spectroscopy of charmed mesons.


 

Horizon

Initial conditions for galaxy formation simulations are specified within the Λ-CDM scenario by a Gaussian random field. This leads to a troubling lack of control: we often cannot ask precisely what leads to certain outcomes for a galaxy. What is it about a galaxy’s mass, environment or history that gives it its shape or its colour? Imagine we could control the initial conditions of a galaxy in cosmological simulations and tweak them to alter one aspect of its formation while keeping the other aspects the same.


The projected density of dark matter at redshift 3 (top row) and 2.3 (middle row) for three simulations with nearly identical initial conditions, but “genetically modified” to produce slightly different merger histories for the main galaxy at the centre of the dark matter images. The bottom row shows the main galaxy in IVU rest-frame colours. The galaxy becomes increasingly quiescent, red and spheroidal as the merger ratio increases from left to right.

Thanks to the DiRAC facility, our group has run the first generation of high resolution cosmological simulations that alter the “DNA” of a galaxy (Roth et al. 2016, MNRAS, 455, 974; Pontzen et al., arXiv:1608.02507). Roth et al. (2016) set out the “genetic modification” approach and Pontzen et al. showcase its power (see figure) to understand what quenches star formation in a galaxy. By minimally altering the linear overdensity in the cosmological initial conditions, Pontzen et al. change the merger history of a galaxy while leaving its final halo mass, large scale structure and local environment unchanged. The changing merger history influences the impact of feedback from Active Galactic Nuclei, leading to three very different fates for the galaxies: star forming, temporarily quenched, permanently quenched.
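
The mathematical core of the approach, modifying a Gaussian random field so that it satisfies a chosen linear constraint while disturbing it as little as possible given its covariance, can be sketched in one dimension. This is a hedged illustration in the spirit of the Hoffman–Ribak construction on which genetic modification builds; the field size, covariance and target value below are invented:

```python
import numpy as np

# Minimal 1D sketch of 'genetic modification': impose a target mean
# overdensity in a patch of a Gaussian random field, perturbing the
# field as little as possible given its covariance.
rng = np.random.default_rng(42)
n = 128
x = np.arange(n)
# Smooth Gaussian covariance (illustrative, not a cosmological power spectrum).
cov = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 8.0) ** 2)
field = rng.multivariate_normal(np.zeros(n), cov)

# Linear constraint: the mean of the field over a 'protogalaxy' patch.
w = np.zeros(n)
w[40:60] = 1.0 / 20.0
target = 1.5  # desired mean overdensity in the patch

# Hoffman-Ribak update: field + C w (w^T C w)^-1 (target - w^T field),
# which enforces the constraint exactly with the minimal correction.
correction = cov @ w / (w @ cov @ w) * (target - w @ field)
modified = field + correction
```

The correction is built from the covariance itself, so it decays away from the patch and the modified field remains a plausible draw from the same Gaussian ensemble, which is what makes genetically modified initial conditions cosmologically meaningful rather than ad hoc.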


 

Heavy Elements:  Linking the Nuclear Interaction to the Structure of Heavy Elements


Figure 5 from Phys. Rev. Lett. 117, 052501 (2016) (Editor’s suggestion.)

A recent international collaboration between the UK, France, the US and Canada has performed systematic studies of both nuclear radii and binding energies in (even) oxygen isotopes from the valley of stability to the neutron drip line. Both charge and matter radii are compared to state-of-the-art ab initio calculations along with binding energy systematics. Experimental matter radii are obtained through a complete evaluation of the available elastic proton scattering data for oxygen isotopes. The theoretical ab initio calculations were made possible by High Performance Computing resources and showed that, in spite of a good reproduction of binding energies, conventional nuclear interactions derived within chiral effective field theory fail to provide a realistic description of charge and matter radii. A novel version of two- and three-nucleon forces leads to considerable improvement in the simultaneous description of the three observables for stable isotopes but shows deficiencies for the most neutron-rich systems. Thus, crucial challenges related to the development of nuclear interactions remain. Phys. Rev. Lett. 117, 052501 (2016) – Editor’s suggestion.