Call for Proposals

Access to DiRAC is coordinated by STFC's DiRAC Resource Allocation Committee (RAC), which issues an annual Call for Proposals for requests for time and operates a Seedcorn Time programme for small requests.


12th Call for Proposals

The 12th Call for Proposals opened on May 24th 2019.  For each Call the RAC produces a set of documentation that includes a number of forms that every applicant must complete. The RAC also provides a comprehensive set of guidance notes.  

In addition to the scientific proposal application form, applicants must also submit a technical case (one per proposal).  The technical case must be submitted at least two months before the call closing date.

Deadline for submission of the Technical Case Application form: 5pm Thursday 1st August 2019: Please send to: dirac-support@epcc.ed.ac.uk

Deadline for submission of the Scientific Proposal Application form: 5pm Tuesday 1st October 2019: Please send to: DiRACRAC@stfc.ac.uk

The Announcement, Forms and Guidance Notes for the 12th Call can be accessed at the links below.

Allocations awarded in the 12th Call will commence on 1st April 2020. 


Seedcorn Time

More information on our Seedcorn Time programme can be found here.


Director’s Discretionary Proposals (closed)

The DiRAC Director invites the UK theory and modelling communities in Astrophysics, Particle Physics and Nuclear Physics to apply for Director’s Discretionary Proposals for computational resources on the STFC DiRAC HPC Facility. 

More information can be found here.


For help with application queries please contact the DiRAC Director: Dr Mark Wilkinson and/or the DiRAC Technical Director: Professor Peter Boyle.

For all other enquiries please contact our Project Office.



DiRAC Day

8th Annual DiRAC Science Day
12th September 2018
Swansea University

On September 12th, Swansea University will host the 8th Annual DiRAC Science Day. The day provides an opportunity to meet others from the DiRAC HPC community, learn about what is being done by all the different consortia, present and view posters on recent research achievements and discuss training and support needs for DiRAC. DiRAC users will also have the chance to meet members of the technical support teams in person as well as interact with the industry vendors who supplied the hardware we are now exploiting.

All attendees are encouraged to bring a poster to present (subject to space availability). Two prizes will be awarded for the posters by students or PDRAs judged to be the best.

Registration for DiRAC Day is via Swansea University’s dedicated DiRAC Day website, where you can also find a link to Accommodation Booking on the Swansea University campus.


This year, three other events of interest are being held at Swansea University to coincide with DiRAC Day.


Co-located Events
NVidia Hackathon
9th, 10th & 11th September

DiRAC is pleased to announce that Nvidia have generously agreed to sponsor a 3-day GPU hackathon in Swansea prior to DiRAC Day 2018. This event will provide the DiRAC community with the opportunity to explore the potential for GPUs in supporting their science.

The Hackathon is open to all DiRAC HPC users and we expect to be able to offer places to 5 or 6 teams of between 3 and 5 people each. Over the three days, we hope that several major DiRAC science codes will be ported to GPUs and that the teams who attend will gain the skills to assist other DiRAC researchers to port additional codes in the future.

More information, including the application form can be found here.


Hands-on Introduction to Data Visualisation
13th & 14th September

Swansea University are hosting a stand-alone training course on Data Visualisation to run on the 13th and 14th September. The event will provide delegates with a concise introduction to the art and science of applied data visualisation.

More information on this course can be found here.


SuperComputing Wales Research Symposium 2018
13th September

The Swansea Academy of Advanced Computing are bringing together the Supercomputing Wales community to give users a forum to showcase their research to the wider HPC community and network with other supercomputing specialists.

The event will include research talks, a poster session, and opportunities for discussion about the Supercomputing Wales project and SUNBIRD machine.

More information on this event can be found here.


(Featured Image Credit: Swansea University, UK)


Science

DiRAC caters for a significant portion of STFC's science, providing simulation and data modelling resources for the UK Frontier Science theory communities in particle physics, astroparticle physics, astrophysics, cosmology, solar system science and nuclear physics (PPAN; collectively, STFC Frontier Science). Each year we publish a selection of our science highlights, and these can be found below.

For information on how our Science maps onto our Services, check out our Resources page.


2018 Highlights

HPQCD finds light quark flavour breaking effects in B meson decay constants. Read more…

High resolution magnetohydrodynamic models unravel the generation of magnetic fields in the interiors of the giant gas planets. Read more…


2017 Highlights

New research, led by the University of Leeds, offers an explanation for the discrepancy between the size and age of the Rosetta Nebula’s central cavity and that of its central stars. Read more…

2017 saw a series of impressive discoveries during the LIGO-Virgo second observing run. Early in January we made our third binary-black-hole observation (GW170104). Another black-hole signal arrived later in June (GW170608). Read more…


2016 Highlights

In February the LIGO collaboration announced the first direct detection of gravitational waves and the first observation of binary black holes, and the DiRAC Data Centric System COSMA5 was used Read more…

The HPQCD Group also continued their research into Lattice QCD, with the team using DiRAC to develop a new method for measuring the hadronic vacuum polarisation (HVP). They were able to determine Read more…


2015 Highlights

Our HPQCD group members continue the search for new physics in the magnetic moment of the muon. They used DiRAC simulations to develop a new method of determining  Read more…

Colleagues from the HORIZON UK-Consortium furthered their quest to improve the interpretation of future imaging surveys from the Euclid satellite and the Large Synoptic Survey Telescope. These surveys aim to Read more…


2014 Highlights

Our UKMHD Consortium members have been looking at the Origins of Magnetism in the Quiet Sun and used DiRAC to run computationally challenging massively parallel simulations of convective  Read more…

The Virgo Consortium continued with its flagship EAGLE simulation project, which is opening a window on the role and physics of baryons in the universe by creating high-fidelity hydrodynamic simulations Read more…



DiRAC Training All Levels

The DiRAC Training Program
At DiRAC, we believe one of the facility's most important responsibilities is to offer its user base comprehensive training. A well-trained user base increases the volume and complexity of the research that can be carried out, and allows us to operate systems closer to the edge of technology, which in turn enables more and better research.

The DiRAC training program has been designed to guide users along a path from very little knowledge to a thorough understanding of HPC code development. We recognise that our users predominantly fall into two categories: those who run existing codes and those who develop and enhance codes, so we have broken the training up into five levels of increasingly advanced content.

We do not expect all users to complete the whole program, but we do require all new users with fewer than two years' experience to complete Level 1: Essentials, and the accompanying online test, by the end of their first year. The program as a whole has been designed so that all new users of HPC systems would benefit from the first three levels, with researchers involved in code development progressing beyond Level 3 as and when the content becomes useful to their research.

Each level has a corresponding body of material associated with it; currently these resources are sourced from academic institutions and the general IT community. We are still in the process of populating Levels 2–5, but we expect these to come online as the year progresses.

  1. DiRAC – Essentials: An introduction to the key skills needed when using the UK National DiRAC HPC Service and a building block for the further levels.

  2. DiRAC – Researcher: This level will give users the skills to run codes on any DiRAC system.

  3. DiRAC – Advanced Researcher: This level tackles the challenges of replicating another code or advancing your own.

  4. DiRAC – HPC Developer: This level provides the skills needed to start creating robust, well structured and efficient code.

  5. DiRAC – Expert Practitioner: This will enable users to use these codes as efficiently as possible.


Industrial Engagement

Working with Industry

DiRAC has a long track record of collaborating with Industry on bleeding-edge technology and we are recognised as a global pioneer of scientific software and computing hardware co-design. We specialise in the design, deployment, management and utilisation of HPC for simulation and large-scale data analytics and we work closely with our industrial partners on the challenges of data intensive science, machine learning and artificial intelligence that are increasingly important in the modern world.

We can also provide easy access to the combined strength of the Universities of Cambridge, Durham, Edinburgh and Leicester in hardware, software and know-how in HPC and data management, as well as offering access to one of the largest HPC infrastructures in the UK public sector and the opportunity to consult with academic expertise from a pool of more than 20 universities. This capability is unique in its scale, depth and breadth of knowledge and, most importantly, in its business-ready environment, which allows organisations to accelerate their research and build real competitive advantage.


2013 Highlights

Black Hole Highlight:  The environment of bright QSOs at z ~ 6


Figure 1. The distribution of massive black holes in different patches of the 900 million year old Universe according to the recent simulations carried out at the IoA/KICC. Black holes are both more massive and more numerous as the overdensity increases.

Observations of nearby galaxies and high redshift quasars suggest that black holes are present in the majority of galaxies. The SDSS, UKIDSS and VISTA surveys have revealed numerous high-redshift QSOs harbouring black holes with masses in excess of a billion solar masses at a time when the Universe was less than one billion years old (about 10% of its current age). The fact that such massive black holes have managed to grow by this time provides a significant challenge for theoretical models of the formation and growth of supermassive black holes.

Researchers at the IoA/KICC have performed a suite of state-of-the-art numerical simulations which self-consistently follow the growth of black holes from very early times to the present day. By simulating a large volume of the Universe, they investigated the growth of massive black holes in a variety of cosmological environments. The images shown in Figure 1 illustrate the distribution of gas in different patches of the 900 million year old Universe. Black holes are far more numerous and massive in the most overdense regions of the Universe. The biggest black circles in the bottom row correspond to black holes with masses close to one billion solar masses. Only very massive black holes located in highly overdense regions were found to give rise to peak luminosities in agreement with observational estimates for the brightest, high redshift quasars, as shown in Figure 2. This finding indicates that these quasars should reside in some of the rarest dark matter halos at the time. Members of the IoA/KICC also used these simulations to study the so-called AGN feedback, the interaction between massive black holes and gas in their surroundings. They found that the energy released due to accretion onto the most massive black holes gives rise to powerful outflows that push gas at speeds exceeding a thousand km/s. The comparison of simulated quasar outflows with observations will provide an essential tool to pin down the physical mechanisms underlying the interaction of massive black holes and their host galaxies.


COSMOS Highlight:  Planck Satellite Science – Searches for Non-Gaussianity

The ESA Planck satellite, launched in May 2009, provides an unprecedented high resolution survey of the temperature of the cosmic microwave background (CMB) radiation. COSMOS and HPCS@DiRAC resources were vital for the science exploitation efforts in several key Planck papers. This effort has led to new estimates of cosmological parameters – shown in Figure 1. Planck has also crossed important qualitative thresholds, making gravitational lensing and the study of non-Gaussian statistics truly quantitative subject areas for the first time.

Using the unique capabilities of the COSMOS system, consortium members have used Planck data to undertake the most stringent tests of the inflationary paradigm to date by studying the prediction that primordial fluctuations should have Gaussian statistics (most results are in the ground-breaking Planck Non-Gaussianity paper (dp002.03)). The three-point correlator ("triangles on the sky"), or bispectrum, was evaluated to high precision for the first time (Figure 1); it is a 3D tetrahedral object depending on three triangle multipoles ℓ1, ℓ2, ℓ3. Searches were undertaken for the standard forms of local, equilateral and orthogonal bispectra, described by the 'nonlinearity' parameter fNL.
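
For reference, the quantity reconstructed here is the angle-averaged bispectrum, which in standard notation (a textbook definition; conventions vary slightly between papers) reads

B_{\ell_1 \ell_2 \ell_3} = \sum_{m_1 m_2 m_3} \begin{pmatrix} \ell_1 & \ell_2 & \ell_3 \\ m_1 & m_2 & m_3 \end{pmatrix} \langle a_{\ell_1 m_1} a_{\ell_2 m_2} a_{\ell_3 m_3} \rangle ,

where the a_{\ell m} are the multipole coefficients of the temperature map and the bracketed term is the Wigner 3j symbol, which enforces the triangle conditions on (\ell_1, \ell_2, \ell_3).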

These limits, up to four times stronger than WMAP, significantly constrain broad classes of alternative inflationary models. Despite the stringent constraints on scale-invariant models, the Planck bispectrum reconstructions exhibit large non-Gaussian signals (Figure 1), inspiring further investigation of other non-Gaussian models using separable modal methods developed on COSMOS. The two most promising classes of models appear to be those with excited initial states (non-Bunch Davies vacua) and those exhibiting periodic features with apparent high significance. Intensive non-Gaussian analysis of the Planck data is ongoing for a broad range of primordial models, as well as investigating second-order late-time effects such as ISW-lensing.


Figure 1. CMB bispectrum reconstruction obtained from the Planck data employing Fourier modes (left) and higher resolution polynomial modes (right); several isosurfaces are shown both positive (red) and negative (blue). The predicted ISW-lensing signal can be seen along the tetrahedral edges (for the first time) and apparent ‘oscillatory’ features in the interior.


ECOGAL Highlight:  Galactic scale models of star formation

Galactic scale models of star formation focus on resolving how large-scale gas flows in spiral galaxies form the dense gas clouds where star formation occurs on scales 10,000 times smaller. The rate at which stars form in galaxies appears to be linked to the global as well as local properties. Using DiRAC, we have performed the largest-scale numerical simulations that can resolve the dense regions where stars form, and hence directly study the physics that drives star formation. We are using these simulations to understand how star formation is initiated, and what determines the resulting properties including why it has such a low efficiency (~1%).


Figure 1. The dense gas formed from a spiral shock is viewed perpendicular to the plane of the galaxy.

There also exist 'special' regions such as the centre of our Galaxy, where star formation is more exotic and primarily forms only very massive stars. Using realistic models for the galaxy's gravity, we are studying the dynamics of gas clouds and how they can produce the observed structures and star formation events, including the young stars that orbit the super-massive black hole.

In order to construct a full theory of star formation, we need to also include additional physics of magnetic fields and the radiative and kinetic feedback processes from young stars. Constructing these self-consistent models, akin to modelling a full galactic ecology, requires the ability to model 100 to 1000 million individual elements in the galaxy and can only be performed with the HPC resources provided through DiRAC.


Figure 2. A gas cloud moving on an eccentric orbit in the Galactic Centre is tidally sheared to form a nearly complete stream.


EXETER Highlight:  The formation of a star and its magnetized outflows in radiation magneto-hydrodynamic simulations

As a gravitationally unstable molecular cloud core collapses to form a star, it passes through several distinct phases. Initially it collapses almost isothermally, as energy is easily radiated away. However, eventually the density rises to the point where radio and infrared radiation cannot freely escape from the gas, and the temperature begins to increase. This substantially increases the pressure, resulting in the formation of a pressure-supported gaseous object with a typical size of ~10 AU (known as the first core). This object increases in mass as it accretes surrounding gas and, as it does so, its central temperature continues to rise. When its central temperature reaches ~2000 K, the molecular hydrogen from which it is primarily composed dissociates into atomic hydrogen. This is an endothermic process which results in a second collapse occurring inside the first core. This second collapse continues until the hydrogen is completely atomic, whereupon a second, or stellar, core is formed with a size similar to that of our Sun. This object then accretes to its final mass to become a star. If reasonably strong magnetic fields are present initially, outflows from both the first core and the stellar core can be launched at speeds of several km/s. These outflows can carry some of the mass, angular momentum, and energy away from the forming star.


Figure 1. Visualizations of the magnetic field direction in the outflows from the first cores (top) and stellar cores (bottom) in three calculations with different initial magnetic field strengths (magnetic field strength increases from left to right). Published in Bate, Tricco & Price (2013).

The DiRAC facility, Complexity, has been used to perform the first fluid dynamical simulations that include both magnetic fields and radiation transport to follow this whole process. Such calculations are essential for predicting what may be observed in high-resolution observations that will be made using the Atacama Large Millimetre/submillimetre Array (ALMA) over the next few years, and for interpreting these observations. In the Figure, we show visualizations of the magnetic field structure in the outflows. Animations of the calculations are at http://www.astro.ex.ac.uk/people/mbate/Animations/stellarcore.html and the full dataset from the calculations is on the Open Research Exeter archive at http://hdl.handle.net/10871/13883 .


Exomol


Figure 1. Absorption of methane at T = 1500 K – new theoretical spectrum (10to10) compared to the experiment (HITRAN12).

The aim of the ExoMol project is to compute comprehensive spectroscopic line lists for hot molecules thought likely to occur in the atmospheres of exoplanets, brown dwarfs and cool stars. The list of such molecules is quite long, but only those with four or more atoms require the use of DiRAC resources. Our calculations have been split between two computers located at Cambridge: Darwin, which was used to diagonalise small matrices and to compute transition intensities, and COSMOS, which was essential for diagonalising larger matrices. We encountered some performance issues with COSMOS when performing large matrix diagonalisations. This problem was referred to SGI, who have very recently supplied us with a new diagonaliser for COSMOS. Our initial tests with this new diagonaliser are very encouraging and suggest that our performance issues should largely be resolved. However, we have yet to use the diagonaliser in production runs, and the results discussed below did not employ it.

Work using DiRAC during 2013 focussed on four main molecules: methane (CH4), phosphine (PH3), formaldehyde (H2CO) and sulphur trioxide (SO3). Methane is a major absorber in hot Jupiter exoplanets, brown dwarfs and cool carbon stars. There is a huge demand for a reliable hot methane line list and there are several groups working towards this objective. This project was therefore given priority. Our main result is that we have generated a new methane line list, called 10to10, which contains just under 10 billion transitions. This line list is complete for wavelengths longer than 1 μm and temperatures up to 1500 K. It is by some distance the most comprehensive line list available for methane (see Fig. 1). It is currently being actively used by about a dozen groups worldwide to model methane in a variety of astronomical objects (and by one group for studies of the Earth's atmosphere). Fig. 2 shows the spectrum of the brown dwarf 2MASS J0559-1404 compared to our simulations.

Our tests on this line list show that, for photometric studies of the K, H and J bands, previously available line lists (a) agree well with 10to10 at 300 K for the K and H bands but significantly underestimate J-band absorption due to a lack of experimental data in this region; and (b) seriously underestimate absorption by methane in all bands at temperatures above 1000 K. We have also completed initial, room-temperature line lists for PH3 and SO3, and a full line list for hot H2CO.


HORIZON Highlight

Dark matter is one of the strangest puzzles in physics: the universe seems to be composed of far more stuff than we can see directly. The evidence is overwhelming, yet indirect. Over the past few decades, astronomers have been able to measure the tug of gravitation in a huge range of environments – from galaxies, through clusters of hundreds of galaxies, to the early universe as seen in the cosmic microwave background – and we keep coming to the same conclusion: the tug is surprisingly strong. To explain it, one needs around six times more matter than is seen directly. The missing five sixths is known as dark matter.

But attempts to detect particles of this matter have given no clear result. That doesn’t mean dark matter doesn’t exist, but it does mean we can’t be certain what it’s made from. If we can’t manufacture or find particles of it here on Earth, the only way to make progress is to keep studying its behaviour in the sky.


As our ability to infer the presence of dark matter in the night sky has improved, we can now measure how the invisible mass is distributed through individual galaxies. In the meantime, increasingly powerful computers have allowed the expected distribution to be predicted (based on assumptions about the particle’s nature). It has proved exceptionally difficult to get these two methods to agree – perhaps pointing to an exotic dark matter particle which behaves in unexpected ways; or, alternatively, that the entire dark matter idea is on the wrong track.

Using the DiRAC-2 Complexity cluster, the Horizon collaboration has been studying this problem from a different perspective. Perhaps the difficulty in making theoretical ideas agree with the real Universe stems from an incomplete physical picture of what goes on inside galaxies. We know that vast amounts of energy are available – young stars pump out light and occasionally explode in supernovae. If some of this energy is transferred into the dark matter, the distribution can be radically reshaped.

We established almost two years ago that there were viable physical mechanisms for transporting stellar energy through to the dark matter distribution (dp016.6). Since then, we have been studying with higher resolution simulations how sensitive this mechanism is to fine details of how the gas behaves (e.g. dp016.7) – always the hardest thing to get right in galaxy simulations, because there are so many factors (including resolution, the cooling rates and the coupling of stellar populations to the surrounding gas). Our latest work, (Pontzen et al 2013, MNRAS submitted), shows that even if one gets many of these details intentionally wrong, the dark matter reshaping remains possible.

This is an encouraging sign that despite remaining uncertainty, we can soon determine whether data from the real universe support – or perhaps refute – the simplest dark matter models.


Leicester Highlight: Tearing up the Disk: How Black Holes Accrete

Almost all galaxies have supermassive black holes in their centres. These holes grow because gas from the galaxy falls into them, a process which also makes the holes spin. In general the gas spirals slowly in to the hole in a disk, initially lying in a plane which is at an angle to its spin equator. But near to the hole, powerful forces drag it into this plane. This means that the gas flows through a warped disk, whose plane changes as it moves in. Until recently it was thought that this process occurred fairly smoothly, with a smooth warp gently straightening the disk plane. However, work by Ogilvie in 1999 suggested that in some cases the warp could be much more abrupt, because the forces holding the disk together actually weaken in a warp. The disk would then break into a pair of disks, the outer one remaining in the original misaligned plane, and the inner one aligned to the black hole spin.


Figure 1. Simulation of disk tearing. Gas spirals towards a supermassive black hole in a plane which is highly inclined to the black hole spin. Close to the hole the gas is forced to orbit in the equatorial plane, and this breaks the disk. The inner disk tumbles in space, and eventually is rotating partially in the opposite direction to the outer disk. Where the two disks touch their gas collides. This removes the rotation that supports it against the black hole gravity, so it falls to a new orbit much closer to the hole. But there it also encounters gas moving in the opposed direction, so the process repeats. This produces rapid gas infall on the black hole, making it grow rapidly, and producing an extremely luminous source of radiation at the center of the host galaxy.

Only detailed work with a powerful supercomputer can check this possibility. Our work with DiRAC-2 shows for the first time that it actually occurs in many realistic cases. The spin forces near the black hole break the central regions of tilted disks into a set of distinct planes with only tenuous flows connecting them. These component disks then precess – their planes tumble in space – independently of each other. In most cases the continued precession of these disks eventually sets up partially counter-rotating gas flows, so that gas in one disk is moving almost oppositely to gas in a nearby disk. As these opposed flows interact, they drive rapid infall towards the black hole. The process can repeat itself as disks form and interact even closer to the black hole. This produces a cascade of tumbling and precessing disks closer and closer to the black hole. This is important because rapid gas infall makes the black hole grow rapidly. This in turn makes the infalling gas extremely hot and bright, accounting for the most luminous objects in the entire Universe.


MEISKIN:
Simulations of the Circumgalactic Medium


Figure 1. Environment of galaxy haloes.


Figure 2. Halo mass distribution.

We are using cosmological numerical simulations to predict the expected physical properties of the Circumgalactic Medium (CGM) in the environment of massive star-forming galaxies in the context of a ΛCDM cosmology. Our primary goal is to establish the gaseous flow pattern around the galaxies and the spatial extent to which winds impact on the CGM, through comparison with observations. Since we wish to compare with observations, as a first step in this pilot project we need to identify the sites of galaxies in the simulation. We do this by identifying their dark matter haloes. This is not straightforward; it is akin to identifying distinct clouds in the sky. An illustration of the complex regions in which galaxy haloes reside is shown in Figure 1. Different halo identification methods produce different halo catalogues, and different simulation numerical methods produce different haloes. We compare the haloes found using two different algorithms applied to results from two simulation methods to seek a range of halo masses for which the methods agree. Figure 2 shows agreement between these methods for haloes more massive than ten billion solar masses, corresponding to galaxy-size haloes. The flow fields around the haloes are currently being explored.
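
To illustrate the kind of halo identification step described above, here is a minimal friends-of-friends sketch in Python (an illustrative toy with made-up parameter values, not the production halo finders compared in this project):

    import numpy as np
    from scipy.spatial import cKDTree

    def friends_of_friends(positions, linking_length):
        # Link every pair of particles closer than the linking length and
        # return a group label per particle (connected components).
        tree = cKDTree(positions)
        parent = np.arange(len(positions))

        def find(i):
            # Union-find root lookup with path compression.
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for i, j in tree.query_pairs(linking_length):
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj

        return np.array([find(i) for i in range(len(positions))])

    # Toy usage: 1000 random positions in a 10 Mpc box, with a linking
    # length of 0.2 times the mean interparticle separation.
    pos = np.random.rand(1000, 3) * 10.0
    labels = friends_of_friends(pos, 0.2 * 10.0 / 1000 ** (1.0 / 3.0))
    print("groups found:", len(np.unique(labels)))

Real halo catalogues additionally apply periodic boundary conditions, unbinding of spurious members and minimum group sizes, which is one reason different halo finders can disagree.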


PLANET

We have examined the evolution of astrophysical disc models for which the temperature decreases as a function of radius, leading to an angular velocity profile that varies with both radius and height. This work demonstrates for the first time that growth of the vertical shear instability in discs leads to a sustained turbulent flow whose associated Reynolds stress leads to outward angular momentum transport. The results may have application to the outer regions of proto-planetary discs, influencing mass accretion and the formation of planets.


Figure 1. Contours showing the growth of vertical velocity perturbations in a disk due to the development of the vertical shear instability.

We have examined the migration of low mass planets embedded in magnetized discs with a layered structure consisting of turbulent regions near the surface and a non-turbulent "dead zone" near the disc midplane. A planet migrates because it experiences a net torque with two contributions: a Lindblad torque, which drives inward migration, and a corotation torque, which slows or reverses migration. Our results show for the first time that the corotation torque in a dead zone is ineffective, with the implication that low mass protoplanets will migrate rapidly as they grow to become super-Earths or Neptune-like planets. A paper describing these results is in preparation.
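
For scale, both torque contributions are usually quoted as order-unity multiples of the characteristic Type I torque (a standard scaling from the migration literature, not necessarily the normalisation used in this work):

\Gamma_0 = \left( \frac{M_p}{M_*} \right)^2 \left( \frac{H}{r} \right)^{-2} \Sigma r^4 \Omega^2 ,

where M_p/M_* is the planet-to-star mass ratio, H/r the disc aspect ratio, \Sigma the local surface density and \Omega the orbital frequency.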


Figure 2. The left panel shows a 10 Earth mass planet embedded in the dead zone of a disc. The right panel shows the log of the accretion stress as a function of radius and height. Values near the midplane correspond to α ~ 0.0001, and near the surface α ~ 0.1.


Virgo
Constraining the nature of dark matter using the COCO simulations

The standard cosmological model, the "Lambda cold dark matter" model (LCDM), is backed up by an impressive array of data that cover a huge range of scales, from the entire observable universe, probed by measurements of temperature anisotropies in the microwave background radiation, to the scales of galaxy clusters and individual bright galaxies, sampled by large galaxy surveys. On smaller scales than this, there is no strong evidence to support the standard model. Yet it is on such scales that the nature of the dark matter is most clearly manifest. In the standard model, the dark matter consists of cold particles, such as the lightest supersymmetric particles. There are, however, models of particle physics that predict lighter particles, such as sterile neutrinos, that would behave as warm (WDM), rather than cold (CDM), dark matter. If the dark matter is warm, free streaming in the early universe would have erased primordial fluctuations below mass scales corresponding to dwarf galaxies. The abundance and properties of dwarf galaxies could then encode information about the nature of the dark matter.


Figure 1. A slice through the simulation volume, with the intergalactic gas colour coded from blue to green to red with increasing temperature. The inset images show details in a small region of the simulation centred on a spiral galaxy seen face-on.

The Figure below shows two simulations from a Virgo Consortium project called “Copernicus Complexio” (COCO). These show the structure that grows in a region of diameter approximately 150 million light years in model universes in which the dark matter is cold (left) or warm (right). The WDM model is chosen to have as large a free streaming length as is allowed by observations of gas clouds seen in the early universe (the so-called “Lyman-alpha forest” in distant quasar sight-lines). There are about a hundred haloes in each volume with masses similar to that of the dark halo around the Milky Way galaxy.

Each of these is resolved with about 10 million particles, making it possible for the first time to obtain a good statistical sample of well-resolved subhaloes. On the scales apparent in this figure there is very little difference between the two models. However, on small scales there are large differences: in CDM tens of thousands of subhaloes are visible; in WDM only a few tens are. In principle this difference should be detectable in observational studies.

The project made extensive use of the DiRAC-2 data centric facility because the simulations, with 13 billion particles each, require a machine that has both an exceptionally large memory per core and a large total memory.


CLOVER:  Charming Physics

Strongly interacting particles (hadrons) come in two classes: mesons made of quarks and antiquarks and baryons consisting of three quarks. Quarks come in six varieties or flavours: up, down, strange, charm (u, d, s, c) and much heavier bottom and top quarks (not considered here).

Symmetries between the different quark flavours mean that particles group together in families called multiplets. The proton and neutron are part of a multiplet with 20 members, shown in the left figure. Particles with no charm quark are in the bottom layer, followed by a layer of particles with one charm quark and then with two charm quarks. At present there is very little experimental knowledge about the doubly-charmed particles in the top layer.


Figure 1. The octet multiplet (left) and a compilation of recent lattice determinations of baryon mass splittings (right).

We want to understand how particle masses are related to their position in the multiplet, and the masses of the quarks they contain. In computer simulations we are not limited to the quark masses nature has given us – we can see what would happen if the quarks had quite other masses. As an example, in the second figure we show how the masses of the particles in the lower slice of the multiplet change as we move between a situation with all quark masses equal (the crossing point of the fan) and the physical point (at the left edge of the graph).

While the main force between the quarks and antiquarks comes from QCD, there is also a contribution from the electromagnetic force (quantum electrodynamics, QED), which is usually left out of lattice calculations. We are also doing calculations with both forces included, to see what the effects of QED are. We can see clearly that mesons in which the quark and antiquark have the same sign of electrical charge become heavier compared with mesons where the quark and antiquark have opposite charges, as you might expect from the fact that opposite charges attract, and like charges repel.

Simulating two forces needs more computing power than just looking at one force, so DiRAC II is important in making all aspects of this project possible.


HOT QCD:  Extreme QCD: Towards Quantitative Understanding of the QCD Phase Diagram


Figure 1. The temperature dependence of the conductivity of matter across the transition temperature.

Matter is usually composed of protons and neutrons, which consist of quarks bound tightly together by the strong interaction of particle physics. However, at incredibly high temperatures of a few trillion degrees Celsius, quarks become free and a new and poorly understood "Quark-Gluon Plasma" (QGP) phase is created. While the QGP is presumed to have existed soon after the Big Bang, it has also been produced in experiments where heavy nuclei (such as gold) are collided in particle accelerators at virtually the speed of light. This has been done at the Large Hadron Collider at CERN and at Brookhaven National Laboratory on Long Island, USA.

Because each nucleus is incredibly small (100 billion of them side-by-side would span a distance of 1mm) the region of QGP created is correspondingly small. Due to this tiny size, it is impossible to place detectors inside this region, and, in any case, they wouldn’t work because they’d instantly melt into the plasma phase! The plasma “fireball” also expands and cools incredibly rapidly, so it quickly returns to the normal state of matter where quarks are tightly bound.
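
As a quick check of the quoted scale, using only the numbers above:

d \approx \frac{1\ \mathrm{mm}}{10^{11}} = \frac{10^{-3}\ \mathrm{m}}{10^{11}} = 10^{-14}\ \mathrm{m} = 10\ \mathrm{fm},

which is indeed of the order of the diameter of a heavy nucleus.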

So to understand the QGP, physicists have to rely on observations of particles ejected from the fireball region out into the “normal” phase of matter and into their detectors and then “reverse engineer” the physics of the QGP. To make the connection from the detected particles back to the fireball, it is therefore essential to understand the QGP’s “transport” properties, i.e. how it expands and flows as a bulk material.

One such property is the electrical conductivity, which is the quantity this project has calculated. This requires supercomputers, such as those provided by DiRAC, in order to simulate the many degrees of freedom of the strong interaction. Our results, shown in the figure, represent the first calculation of the temperature dependence of the conductivity of matter across the transition temperature. In this figure, the transition temperature corresponds to around 180 MeV.
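
For context, lattice determinations of an electrical conductivity of this kind generally proceed via a Kubo formula, shown schematically below (the normalisation and quark-charge factors depend on conventions, and we do not assert these are the exact conventions of this project):

\sigma = \frac{1}{6} \lim_{\omega \to 0^+} \frac{\rho_{ii}(\omega)}{\omega} ,

where \rho_{ii}(\omega) is the spectral function of the spatial electromagnetic current correlator, which must itself be reconstructed from the Euclidean-time correlator measured on the lattice.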


HPQCD:  Light is right – working with up and down quarks at physical masses


Figure 1. A meson annihilates to a lepton and antineutrino via a virtual W boson.

The world of quarks and gluons, currently the most fundamental building blocks of matter known, is hidden from us. Quarks are never seen as free particles; instead we see their bound states, known as hadrons, in experiments such as those at CERN’s Large Hadron Collider. Accurate calculations with the theory that describes quark and gluon interactions, Quantum Chromodynamics (QCD), are critical to connect the world of quarks to that of hadrons in a quantitative way.


Figure 2. Our results for decay constants of K and pi mesons as a function of the up/down quark mass. The result shown at 0.036 on the x-axis corresponds to the real-world masses.

Calculations are made tractable through a numerical technique known as lattice QCD, but they are computationally very challenging. The numerical cost increases as the quark mass falls and so one issue has been that of handling up and down quarks. These have very small masses (a few percent of that of the proton) in the real world. In the past calculations have been done with several values of heavier masses and then an extrapolation to the physical point is performed.

With the computing resources of DiRAC phase II (the Data Analytic Cluster in Cambridge), we are now able to perform calculations with up and down quarks at their physically light masses, which means that more accurate results can be obtained. Our formalism is numerically very efficient and has particularly small discretisation errors.

Key results that we have obtained this way during 2013 are those of the decay constants of the π, K, B and Bs mesons. The decay constant parameterizes the probability of the quark and antiquark in the meson being at the same point and annihilating, for example to a W boson of the weak force, as seen in Fig. 1.
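
Concretely, for a pseudoscalar meson P with four-momentum p_\mu, the decay constant f_P is conventionally defined through the axial-current matrix element (the standard definition; sign and normalisation conventions vary between papers):

\langle 0 | A_\mu | P(p) \rangle = i f_P p_\mu ,

so the rate of the leptonic decay through a virtual W is proportional to f_P^2, which is why these numbers feed directly into tests of the CKM matrix.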

Our results for the decay constants (Fig. 2) are the world's most accurate. They have enabled us to test the CKM matrix of couplings between quarks and the W boson with unprecedented accuracy. We have also improved the prediction from known physics of the rate of the very rare process Bs → μ+μ−, recently seen by the LHCb experiment. The sensitivity of this process to new physics makes a future comparison of the experimental and theoretical rates potentially very exciting.


UKQCD BSM:  Beyond the Standard Model – is there anything more than the Higgs?

The Standard Model of particle interactions describes the interactions of all the constituents of matter to an impressive degree of accuracy. One of the successes of this model is the unification of the electromagnetic and weak interactions into a new sector called the electroweak sector. In this model, the electroweak sector is characterised by a breaking of the SU(2) × U(1) gauge group, which explains why the photon is massless while the W and Z bosons (the mediators of the weak force) have a mass. The electroweak breaking is due to the widely known and notoriously elusive Higgs sector, which describes the interactions of a new particle, the Higgs boson. In addition to giving mass to the mediators of the weak force, the Higgs boson provides mass for ordinary fermionic matter (leptons and quarks). However, this elegant model is believed to be valid only up to a certain energy scale, above which new physics is bound to manifest itself.


Figure 1. The standard model of particle physics (left) and technicolor particles (right).

Many models have been proposed over the years to explain the theory at the scales of new physics. Technicolor is the framework according to which electroweak symmetry breaking is due to the breaking of chiral symmetry in a new strong interaction. The model proposes a different answer to the origin of all particle masses, by means of a new mechanism to generate mass for the leptons. These ideas are inspired by a similar mechanism that is already at work in the theory of the strong interactions, Quantum Chromodynamics (QCD). A fundamental requirement for any beyond-the-Standard-Model theory is that the framework does not spoil any lower-energy prediction, i.e. that it is compatible with current observations. This is a severe constraint, which in Technicolor is implemented by the mechanism of walking, i.e. the slow running of the gauge coupling in an intermediate range of energies. This happens for near-conformal gauge theories. The question then becomes: is there a near-conformal gauge theory that can account for the observed electroweak symmetry breaking?

The resources offered by the DiRAC consortium allowed us to perform Monte Carlo simulations of a theory that has been conjectured to be a candidate for the realisation of the Technicolor framework. The model, called Minimal Walking Technicolor, is an SU(2) gauge theory coupled with two adjoint Dirac fermion flavours. We proved that (near-)conformality can be rephrased as a mass spectrum with constant mass ratios between the particles as the constituent fermion mass goes to zero, and we observed this feature numerically.


UKQCD DWF


Figure 1. This figure displays the new result for the neutral kaon mixing amplitude BK in green. This new result was calculated in the continuum limit and directly at physical quark masses. The line and the curve represent two fits to previous data points, which were used to extrapolate to the physical quark masses. With our new algorithm and greater machine power we have been able both to eliminate the systematic extrapolation error and to reduce the statistical error on this important result.

We have carried out simulations of lattice QCD including up, down and strange quark loops using a chiral fermion discretisation which is unique in fully preserving the meaning of left-handed spin orientation. The W boson only interacts with left-handed fermions, so preserving this symmetry is deeply important to the calculation of the complex weak matrix elements required to support experiments such as those at the Large Hadron Collider.

One important example of these is our determination of the two pion decay amplitudes of the Kaon. This is the process in which the asymmetry between matter and antimatter was discovered. Our calculations, which won the 2012 International Ken Wilson Lattice Award, involve mixing between many operators in the effective weak Hamiltonian. The calculation is only tractable with chiral Fermions, but gives rise to a wholly new constraint on the difference between matter and anti-matter in the standard model. Such constraints enable experiments to search for the effects of undiscovered physics beyond the standard model. Our calculation was updated in journal papers, one of which observed in detail a numerical cancellation that explains a long standing puzzle about the likelihood of decay into different pion charges known as the ΔI=1/2 rule – a case of numerical simulation leading to deeper understanding.

This progress over the last year has only been possible due to the powerful DiRAC BlueGene/Q supercomputer in Edinburgh, which our scientists helped IBM to develop, allowing us to simulate chiral fermions at the quark masses seen in nature and with space-time grid spacings that include all effects up to energy scales of 1.75 and 2.3 GeV. This has allowed us to predict continuum physics with the complete elimination of systematic errors arising from mass extrapolation. The calculation was shared between UKQCD and our US-Japan international collaborators in the RIKEN-Brookhaven-Columbia collaboration, and also made use of BlueGene/Q systems at Argonne and Brookhaven National Laboratories in the US.

Our calculations on neutral kaon mixing are also relevant to CP violation. We presented both the precise continuum results for the standard model process at physical quark masses and for processes that only arise beyond the standard model.


Nuclear Physics


Figure 1. 40Ti one-neutron removal spectral strength distribution associated with Jπ = 1/2+.

It has proven extremely difficult to describe nuclei directly from a set of nuclear forces, apart from the very lightest elements. In the last decade there has been great progress, supported by computational advances, in various methods to tackle more complex nuclei. Many of these are specialised to systems with even numbers of neutrons and protons, or to closed-shell nuclei, where many calculations simplify. Alternative phenomenological approaches, such as the nuclear shell model, can typically only work within a single major shell.

The complications in the problem arise from the structure of the nuclear force, which is non-central and spin-dependent, as well as from the superfluidity of atomic nuclei. One thus needs to work in large model spaces, take care of the superfluid Cooper pairs, and work with the complex operator structure of the nuclear force. One natural formalism that does all of this is based on the Gor'kov (Green's function) approach to superfluid systems. Barbieri et al. have recently introduced an approach based on the Gor'kov pattern that works for open-shell nuclei, away from shell closures. A crucial issue for ab initio approaches concerns the ability to perform numerical calculations in increasingly large model spaces, with the aims of thoroughly checking convergence and of constantly extending the reach to heavier systems.

A long-standing problem with self-consistent calculations of one-body propagators in finite systems concerns the rapid increase in the number of poles generated at each iterative step. This fast growth is expected, as the Lehmann representation of one-body Green's functions develops a continuous cut along the real energy axis in connection with unbound states. This cut is discretised by a growing number of discrete energy states as the size of the model space is increased. In practical calculations one needs to limit the number of discretised poles, so we use Lanczos algorithms to project the energy denominators onto smaller Krylov spaces, as sketched below.
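
As an illustration of this projection step, here is a minimal Lanczos sketch in Python (an illustrative toy on a random matrix, not the production Gor'kov code): after m iterations the large Hermitian problem is replaced by an m×m tridiagonal matrix whose eigenvalues (Ritz values) approximate the dominant poles.

    import numpy as np

    def lanczos(H, v0, m):
        # Project the Hermitian matrix H onto the m-dimensional Krylov space
        # built from the starting vector v0; return the tridiagonal
        # projection T (m x m) and the orthonormal Lanczos basis Q (n x m).
        n = H.shape[0]
        Q = np.zeros((n, m))
        alpha = np.zeros(m)       # diagonal of T
        beta = np.zeros(m - 1)    # off-diagonal of T
        q = v0 / np.linalg.norm(v0)
        q_prev = np.zeros(n)
        b = 0.0
        for j in range(m):
            Q[:, j] = q
            w = H @ q - b * q_prev      # new Krylov direction
            alpha[j] = q @ w            # Rayleigh coefficient
            w = w - alpha[j] * q        # orthogonalise against q_j
            if j < m - 1:
                b = np.linalg.norm(w)
                beta[j] = b
                q_prev, q = q, w / b
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        return T, Q

    # Toy usage: compress a random 2000-pole "energy denominator" matrix
    # down to a 50-dimensional Krylov approximation.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((2000, 2000))
    H = (A + A.T) / 2.0
    T, Q = lanczos(H, rng.standard_normal(2000), 50)
    print("extremal Ritz values:", np.linalg.eigvalsh(T)[[0, -1]])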

Such calculations give us spectral strengths in nuclei that are not easily accessible by other theoretical means. As an example, we show below the spectral strength in 40Ti, with a cut-off ranging up to 13 major shells.



2014 Highlights

2014 was another productive year for our PPAN theory community researchers, and some of the science challenges they tackled are highlighted below.

2014 list of publications



UKMHD Consortium:
The origins of magnetism in the quiet Sun


Figure 1. A volume rendering of the magnetic field distribution in a simulation of a region of quiet Sun.

In the outer layers of the solar interior, vigorous convective motions interact with complex magnetic field structures. Regions of quiet Sun (which, by definition, contain no large-scale magnetic features such as sunspots) are characterized by a spatially intermittent distribution of small-scale magnetic fields. Explaining the origin and evolution of quiet Sun magnetic fields is one of the most important challenges in modern solar physics.

It is believed that a significant fraction of quiet Sun magnetism is generated in the surface layers by a convectively-driven dynamo. To simulate this dynamo process, it is necessary to solve the governing equations of magnetohydrodynamics (MHD) for a compressible, electrically-conducting gas in three spatial dimensions and time. This is a computationally challenging problem that can only be tackled by using massively parallel simulations on the DiRAC High Performance Computing facilities.
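
In schematic form, the system being solved is the standard set of compressible, resistive MHD equations (written here in a common textbook form, with constant magnetic diffusivity η, and omitting the energy equation and viscous terms for brevity):

\partial_t \rho + \nabla \cdot (\rho \mathbf{u}) = 0,
\rho \left( \partial_t \mathbf{u} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla p + \frac{1}{\mu_0} (\nabla \times \mathbf{B}) \times \mathbf{B} + \rho \mathbf{g},
\partial_t \mathbf{B} = \nabla \times (\mathbf{u} \times \mathbf{B}) + \eta \nabla^2 \mathbf{B}, \qquad \nabla \cdot \mathbf{B} = 0.

The dynamo question is whether the convectively driven velocity field u can amplify and sustain the magnetic field B against ohmic decay.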


Figure 2. The temperature distribution (left) and vertical magnetic field distribution (right) from a simulation of a region of quiet Sun.

Using simulations of this type, we have confirmed that a dynamo mechanism could indeed account for the observations. In the absence of magnetic fields, the simulated convective flows are solar-like in many respects, with a “granular” pattern of broad warm upflows, surrounded by a network of cooler, narrow downflows. Furthermore, as suggested by the observations, these granules are organized on a larger (“mesogranular”) scale. When a seed magnetic field is introduced into this system, the convective motions drive a dynamo with a complex, intermittent field distribution that is comparable to that observed in the quiet Sun, with the most persistent magnetic structures accumulating preferentially at the mesogranular boundaries.



Kinetic Plasma Turbulence Highlight

Plasma turbulence is believed to be responsible for the fact that the solar wind – the supersonic expansion of the Sun’s corona – retains a high temperature throughout the heliosphere, even when it is expected to cool due to expansion. The solar wind may be heated by releasing magnetic energy to accelerate particles at thin magnetic discontinuities, or current sheets. In some models these current sheets are formed at the same time as the solar wind leaves the Sun, or they may form spontaneously as part of the turbulence seen in the solar wind.


Figure 1. Isosurfaces of current sheet density showing rippling due to ion drift-kink instability (right), and (left) virtual spacecraft data for magnetic field for the crossing of the current sheet shown in the top panel. (From Gingell et al., 2014, arXiv: 1411.4422.)

Using DiRAC we have carried out self-consistent, particle kinetic plasma simulations of the evolution of thin current sheets in a three-dimensional geometry. We find that there are two important instabilities which control energy release: the ion tearing instability, which releases energy via magnetic reconnection, and the drift-kink instability which, as its name implies, produces a rippled current sheet. The drift-kink instability can reduce the rate of energy release and lead to thicker current sheets which are less liable to reconnect via the tearing instability. The competition between these two instabilities can only be studied with three-dimensional simulations, and these require large memory and processor counts which would not be possible without DiRAC resources.

Our results indicate that the importance of thin current sheets to solar wind heating might be less than predicted by two-dimensional simulations which only capture the ion tearing instability. Energy release at current sheets could be important if they are actively produced at a high enough rate by large scale turbulence. We have also created virtual spacecraft data which can be used to test our simulations against actual data from spacecraft.



Leicester:
Merging Supermassive Black Holes


Figure 1. Numerical simulation of a gas cloud accreting on to a SMBH binary, performed on the DiRAC-2 Complexity cluster. The simulation followed the evolution of a turbulent, self-gravitating cloud of gas as it falls towards the black holes and forms an accretion disk around the binary.

Almost all galaxies have supermassive black holes (SMBHs) in their centres. These cosmic giants are millions to billions of times more massive than the Sun, and play a key role in shaping the formation and evolution of galaxies. Galaxies grow through collisions and mergers of smaller galaxies, and when pairs of galaxies merge their SMBHs slowly sink towards the centre of the merged galaxy. As they get closer together the “friction” dragging them inwards (actually gravitational encounters with stars) becomes progressively less and less efficient, and this process ultimately stalls when the SMBHs reach separations of around a parsec. However, no pairs of binary SMBHs are seen at these separations (despite many years of searching), suggesting that the process(es) that drive SMBH mergers are much more efficient than theory suggests. This long-standing discrepancy between theory and observations is known as the “last parsec problem”.

The most popular solution to this problem is to appeal to a gaseous accretion disc around the SMBH binary. These discs are commonly observed around single SMBHs (where they appear as quasars), and they allow SMBHs to grow by funnelling material down towards the event horizon. However, it is well-known that if the binary and disc are aligned (i.e., they orbit in the same plane and rotate in the same direction), then the disc cannot shrink the binary's orbit quickly enough. The Theoretical Astrophysics Group in Leicester has recently pioneered the idea that misaligned accretion discs may be the key mechanism driving SMBH mergers. The interaction between an accretion disc and a misaligned SMBH binary is very complicated, and can only be studied using large computer simulations. The power of DiRAC-2 allowed us to build detailed simulations of how accretion discs form and evolve around SMBH binaries (see Figure 1, adapted from Dunhill et al. 2014; animations from these simulations are available here). These are the first models to study the disc formation process self-consistently (previous studies have simply assumed that a disc is present), and to look in detail at how the gravity of the disc shapes the evolution of the binary. As with previous studies, we find that prograde discs (i.e., discs which rotate in the same direction as the binary) do not shrink the binary's orbit efficiently. However, when discs form in retrograde configurations (i.e., with the gas orbiting in the opposite direction to the binary) their interaction with the binary is much more efficient. Our simulations demonstrated that repeated retrograde accretion events can drive mergers between SMBH binaries on short time-scales, and this process offers an elegant potential solution to the last parsec problem.



UKQCD Beyond Standard Model

Novel strong interactions might provide an explanation for the mechanism of electroweak symmetry breaking. In order for strong dynamics to be able to explain electroweak breaking, the theory must be near the onset of the conformal window and possess an anomalous dimension of the chiral condensate of order one. Furthermore, after the experimental observation of a light Higgs and no unexpected particles, a crucial requirement for a Beyond the Standard Model (BSM) theory is the existence of a light scalar in the spectrum.

The a priori interesting theories can be divided into two classes: infrared conformal theories, which are inside the conformal window and are scale-invariant in the limit of infinite distance, and walking theories, showing confinement at large distance, but behaving as infrared conformal theories for a wide range of intermediate scales. Theories in both categories can be used to construct BSM models without violating the existing constraints from precision measurements.


Figure 1. Selected spectral states of the theory in lattice units.

Our investigations of the SU(2) model with one adjoint fermion show that this theory displays the crucial dynamical features attributed to phenomenologically viable models of electroweak symmetry breaking (see Figure 1 for the spectrum, from arXiv:1311.4155).

Indeed, simulations performed for values of the couplings in the region connected with the continuum indicate that the system is close to the edge of the conformal window, with an anomalous dimension close to one and a scalar particle much lighter than the other particles in the spectrum.

At the present stage this is the only system showing such intriguing features. Moreover, our calculation provides the first evidence that concrete models with the non-perturbative features mandated by experimental constraints do exist.


 


Searching for new physics in the magnetic moment of the muon


Figure 1. The interaction between a muon and a photon is complicated by the production of virtual particles.

A vacuum is never completely empty space but teems with particles that are created fleetingly by quantum fluctuations in energy and then disappear. The heavier the particle, the less time its brief existence can last. In that time, however, the particle can interact with the much lighter particles in our everyday world and thereby reveal its existence through tiny discrepancies in the properties of these particles from those expected in the Standard Model. The magnetic moment of the muon shows such a discrepancy, a tantalizing 25 ± 9 parts in 10¹⁰. To pin this down more accurately, a new experiment, E989 at Fermilab near Chicago, will reduce the experimental uncertainty by a factor of 4. An improved theoretical uncertainty from the Standard Model to match this needs lattice QCD calculations. This year HPQCD has developed a new method using DiRAC that will enable the accuracy needed to be achieved.


Figure 2. Our results for the strange quark HVP contribution to aμ as a function of lattice spacing.

The muon is a heavier cousin of the electron with the same electric charge and spin, so that it carries a magnetic moment. The ‘gyromagnetic ratio’, g, measures the ratio of magnetic moment to spin (in units of e/2m). Naively, g=2 but this ignores the interactions of the muon with the cloud of virtual particles discussed above. The anomalous magnetic moment, aμ = (g – 2)/2, can be measured very accurately from the difference between the circulation and spin precession frequencies as muons fly round a ring transverse to a magnetic field.
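
For reference, the quantities involved can be written compactly (standard definitions and the standard precession relation, not specific to this work; electric-field and pitch corrections are neglected):

    a_\mu = \frac{g-2}{2}, \qquad
    \omega_a = \omega_s - \omega_c = a_\mu \, \frac{eB}{m_\mu}

so the difference between the spin precession frequency \omega_s and the cyclotron frequency \omega_c in a magnetic field B isolates the anomaly a_\mu directly.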

The biggest effect in the Standard Model result for aμ comes from Quantum Electrodynamics, where very high order calculations can be done. The largest theoretical uncertainty is from Quantum Chromodynamics and the process of Fig. 1, in which a muon radiates a photon that briefly becomes a quark-antiquark pair. This ‘hadronic vacuum polarisation’ (HVP) contribution needs to be calculated to better than 0.5% in lattice QCD to reduce the theoretical uncertainty. Our new method (arXiv:1403.1778, Phys. Rev. D89:114501) makes this possible, having achieved 1% for the first time for the strange quark HVP contribution. Our method has low systematic errors and DiRAC enabled high statistics on realistic gluon field configurations. The up/down quark calculation is now underway. Errors will be reduced by a further factor of 3 as we increase statistics by a factor of 10 in 2015.
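
Schematically, the lattice target is the standard HVP master formula (a textbook form; the details of the new method are in the cited paper):

    a_\mu^{\rm HVP} = \left(\frac{\alpha}{\pi}\right)^2 \int_0^\infty dq^2 \, f(q^2)\, \hat{\Pi}(q^2)

where f(q^2) is a known kinematic kernel peaked near q^2 \sim m_\mu^2 and \hat{\Pi}(q^2) = \Pi(q^2) - \Pi(0) is the renormalised vacuum polarization extracted from lattice correlation functions.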


 

ExoMol

To find out what remote planets orbiting other stars are made of, astronomers analyse the way in which their atmospheres absorb starlight of different colours and compare it to a model, or ‘spectrum’, to identify different molecules.

In collaboration with the University of New South Wales we have developed a new spectrum for ‘hot’ methane which can be used to detect the molecule at temperatures above that of Earth, up to 1,500 K (about 1,227 °C), something which was not possible before. Current models of methane are incomplete, leading to a severe underestimation of methane levels on planets. We anticipate our new model will have a big impact on the future study of planets and ‘cool’ stars external to our solar system, potentially helping scientists identify signs of extraterrestrial life.

Our study, published in PNAS, describes how we used some of the UK’s most advanced supercomputers, provided by the Distributed Research utilising Advanced Computing (DiRAC) project and run by the University of Cambridge, to calculate nearly 10 billion spectroscopic lines, each with a distinct colour at which methane can absorb light. The new list of lines is 2,000 times bigger than in any previous study, which means it can give more accurate information across a broader range of temperatures than was previously possible. The comprehensive spectrum we have created has only been possible with the astonishing power of modern supercomputers, which is needed to handle the billions of lines required for the modelling. We limited the temperature threshold to 1,500 K to fit the capacity available, so more research could be done to expand the model to higher temperatures still. Our calculations required about 3 million CPU (central processing unit) hours; processing power only accessible to us through the DiRAC project.

The new model has been tested and verified by successfully reproducing in detail the way in which the methane in failed stars, called brown dwarfs, absorbs light.
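
As a rough illustration of why completeness matters at high temperature, the strength of each line scales with a Boltzmann factor in its lower-state energy, so lines that are negligible at room temperature can dominate at 1,500 K. The sketch below uses the standard HITRAN-style intensity rescaling; all numbers are illustrative placeholders, not ExoMol data.

    import numpy as np

    C2 = 1.4387769  # second radiation constant hc/k_B in cm K

    def scale_intensity(s_ref, nu0, e_lower, q_ref, q, t_ref=296.0, t=1500.0):
        """Rescale a line intensity s_ref (valid at t_ref) to temperature t.
        nu0: transition wavenumber (cm^-1); e_lower: lower-state energy (cm^-1);
        q_ref, q: partition function at t_ref and t."""
        boltz = np.exp(-C2 * e_lower / t) / np.exp(-C2 * e_lower / t_ref)
        emiss = (1.0 - np.exp(-C2 * nu0 / t)) / (1.0 - np.exp(-C2 * nu0 / t_ref))
        return s_ref * (q_ref / q) * boltz * emiss

    # A 'hot band' line from a high-lying lower state (illustrative values):
    # at 1,500 K it is thousands of times stronger than at 296 K.
    print(scale_intensity(s_ref=1e-30, nu0=3000.0, e_lower=3000.0,
                          q_ref=590.0, q=28000.0))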


 

CLOVER: Baryon Mass Splittings

Light baryons such as the familiar proton and neutron make up most of the matter in the visible universe. These baryons are built from three ‘flavours’ of quark: up, down and strange (u, d and s). Particles group together in families or multiplets; the octet multiplet is shown on the left-hand side of the figure.

The mass splitting between the proton and neutron is a very delicately balanced quantity, partly caused by the mass difference between the u and d quarks, and partly by QED effects. Small changes in this mass difference would have profound effects on the way the universe looks today. The initial balance between hydrogen and helium, established in the first half-hour after the Big Bang, depends strongly on the neutron half-life, and so on the p-n mass splitting. Later, the production of carbon and oxygen in stars also depends strongly on the proton-neutron splitting.


Figure 1. The octet multiplet (left) and a compilation of recent lattice determinations of baryon mass splittings (right).

The strong force binding the quarks together is described by quantum chromodynamics, or QCD, which has to be simulated numerically (via lattice QCD). A compilation of recent lattice determinations of baryon mass splittings is given in the right panel of the figure. In particular, there is also a mass splitting between the particles at the centre of the octet, the Sigma and the Lambda. This is a more complicated case, as the states mix. We have now extended our previous results to include this case and determined the mixing angle and mass splitting. While the effect of mixing on the masses is very small (second order in the angle), it can be much larger for decay rates.
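
To see why the masses shift only at second order while decay rates can shift at first order, consider generic two-state mixing (a textbook illustration, not the specific lattice analysis):

    \begin{pmatrix} M_\Sigma & \delta \\ \delta & M_\Lambda \end{pmatrix}, \qquad
    \tan 2\theta = \frac{2\delta}{M_\Sigma - M_\Lambda}, \qquad
    M_\pm \approx M_{\Sigma,\Lambda} \pm \frac{\delta^2}{M_\Sigma - M_\Lambda}

The eigenvalues move only by \delta^2/(M_\Sigma - M_\Lambda), but the physical states are rotated at first order, |+\rangle = \cos\theta\,|\Sigma\rangle + \sin\theta\,|\Lambda\rangle, so matrix elements entering decay rates pick up corrections linear in \theta.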

While the main force between the quarks and antiquarks comes from QCD, there is also a contribution from the electromagnetic force, QED, which is usually left out of lattice calculations. We are also doing calculations with both forces included, to see how important the effects of QED are.


 

Black Holes

Firm observational evidence indicates that supermassive black holes are present at the centres of the majority of galaxies, from very early cosmic times all the way to the present-day Universe. Feedback from these black holes, in the form of large-scale outflows, is believed to be one of the key ingredients shaping the evolution and properties of galaxies in our Universe.

Recent observations (Cicone et al. 2014) have detected emission from spatially extended cold gas around a bright quasar that existed when the Universe was less than 10% of its current age. This cold gas is moving at a very high velocity, of the order of 1000 km/s, and has been detected up to 90,000 light years away from the rapidly accreting black hole that powers the observed quasar. While this high-velocity gas has been interpreted as the signature of a powerful black hole outflow, the observation is in tension with simple theoretical expectations, which suggest that, while rapidly moving, the gas should be very hot rather than cold.

Figure 1. A galaxy hosting a black hole. The hot and rapidly moving outflow originating from the black hole is shown with a black contour, cold gas pockets containing up to a billion Solar masses and moving together with the hot outflow are depicted with grey pixels, and inflowing cosmic web filaments are illustrated in orange hues.

Using the moving-mesh code AREPO, researchers at the IoA/KICC, Cambridge have performed some of the most realistic cosmological simulations that follow black hole growth and feedback in the early Universe. In these simulations, a galaxy hosting a bright quasar undergoes extremely high levels of star formation, as it is located at the intersection of cosmic web filaments which deliver copious amounts of gas to it. This eventually drives a “starburst phase” in which a large number of stars explode as supernovae and their joint effect triggers a galaxy-scale wind. Thus, as the even more powerful black hole-driven outflow leaves the central region of the galaxy and propagates outwards, it sweeps over gas that has been polluted by metals, initially produced within stars and subsequently released by supernova blast waves. These “pockets” of metal-enriched gas cause part of the hot quasar-driven outflow to cool to low temperatures, as illustrated in Figure 1 above, where the black hole and its host galaxy are situated at the centre of the image.

Thanks to the simulations performed at the DiRAC-2 facility in Cambridge, researchers at IoA/KICC have found that self-consistent modelling of both supernovae and black hole outflows leads to the formation of cold gas pockets moving at 1000 km/s and spatially extended over 90,000 light years (30 kpc), in remarkable agreement with the puzzling observations of Cicone et al. 2014.


 

Virgo


Figure 1. The image above is a slice through the simulation volume, with the intergalactic gas colour-coded from blue to green to red with increasing temperature. The inset images show details of a small region of the simulation centred on a spiral galaxy seen face-on.

The EAGLE simulation project is a Virgo flagship programme aimed at creating high-fidelity hydrodynamic simulations of the galaxy population. It has been a major investment of intellectual effort, manpower and computing resources, with game-changing results, opening a window on the role and physics of baryons in the Universe.

EAGLE uses the latest developments in SPH techniques, with state-of-the-art modelling of sub-grid processes such as star formation, feedback and black hole accretion. Unlike previous projects, our feedback scheme does not involve “de-coupling” feedback particles from hydrodynamic forces, or artificially suppressing cooling. The largest simulation, with a volume of 10⁶ Mpc³, contains thousands of well-resolved galaxies similar to the Milky Way, hundreds of galaxy groups and a handful of galaxy clusters. This huge statistical sample allows us to understand the variety of galaxy formation histories, the impact of galaxy environment, and to study rare objects such as highly star-forming sub-mm galaxies.

Obviously, a simulation of this size is of no use unless it is able to accurately reproduce the properties of the local galaxy population. EAGLE has been spectacularly successful: for example, the model correctly predicts the evolution of the galaxy mass function and the rising trend of galaxy specific star formation rates (Schaye et al. 2014 at z=0; Furlong et al. 2014 showing predictions at higher z). Numerous papers are in advanced stages of preparation comparing additional galaxy properties, such as HI mass functions (Bahé et al.), galaxy colour distributions and luminosity functions (Trayford et al.), OVI absorbers (Rahmati et al.) and AGN luminosity functions (Rosas-Guevara et al.). The fidelity of the simulation also makes it a powerful tool for understanding the biases in observational surveys.

Having shown that the simulation’s fidelity is excellent, we can now exploit it as a tool for understanding the physical mechanisms by which galaxies form and evolve. The image above is a slice through the simulation volume, with the intergalactic gas colour coded from blue to green to red with increasing temperature. The inset images show details in a small region of the simulation centred on a spiral galaxy seen face-on.


 

Extreme QCD: Towards Quantitative Understanding of the QCD Phase Diagram

There are four fundamental forces that describe all known interactions in the universe: gravity; electromagnetism; the weak interaction; and the strong interaction, which is the topic of our research. The strong force binds quarks into hadrons such as protons and neutrons, which in turn form the nuclei of atoms, making up more than 99% of all the known matter in the universe.

Normally, the strong interaction binds quarks so tightly that they never escape from within hadrons. However, at temperatures greater than the deconfining temperature, T_C (which is several trillion degrees Celsius!), it undergoes a substantial change in character, becoming considerably weaker, and quarks become “free”. This new phase of matter is called the “quark-gluon plasma” (QGP) and is presumed to have occurred in the first few microseconds after the Big Bang. Despite the QGP being crucial to the development of the Universe (which was born in this state!), it is poorly understood.

Physicists re-create a mini-version of the QGP by colliding large nuclei (such as gold or lead) in particle accelerators at virtually the speed of light. These experiments have been performed in the Large Hadron Collider at CERN and at the Brookhaven National Laboratory in the USA. The region of QGP created in these experiments is incredibly small – just the size of the colliding nuclei. The QGP “fireball” formed expands and cools very rapidly, quickly returning to normal matter where quarks are tightly bound inside hadrons.

Most hadrons melt in the QGP, meaning that the quarks that were inside them become free, like satellites escaping the Earth’s gravity. However, it was proposed 30 years ago that some heavy “mesons” may remain bound up to quite high temperatures. Our collaboration has calculated the spectral functions of these bottomonium mesons (comprised of two bottom quarks), and we show the results above. Each panel shows the results from two nearby temperatures, with the coldest pair in the top-left and the hottest in the bottom-right. The strong peak at around 9.5 GeV shows a bound state which gets weaker as the temperature increases, but remains in existence up to around 2T_C. This confirms the decades-old hypothesis that these mesons do not melt as soon as the QGP is formed, and that measuring their features is an effective way of determining the system’s temperature, i.e. they can be used as a “thermometer” of the QGP.
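
The underlying relation between the Euclidean correlators computed on the lattice and the spectral function \rho whose peaks are described above is the standard one (written here for reference; T is the temperature and \tau the Euclidean time):

    G(\tau, T) = \int_0^\infty \frac{d\omega}{2\pi}\, \rho(\omega, T)\,
                 \frac{\cosh\!\left[\omega\left(\tau - \tfrac{1}{2T}\right)\right]}{\sinh\!\left[\omega/2T\right]}

Inverting this ill-posed integral equation to recover \rho from noisy lattice data is what makes the calculation demanding.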


 

COSMOS:

COSMOS consortium researchers have been exploiting the DiRAC HPC Facilities to make progress towards ambitious milestones in five key inter-related areas: (i) extreme universe, (ii) cosmic microwave sky, (iii) dark energy, (iv) galaxy formation and (v) black holes.


Figure 1. CMB temperature (left) and E-mode polarization (right) bispectrum reconstruction obtained from the Planck Full Mission data using the MODAL estimator on COSMOS. Isosurfaces are shown for both positive (red) and negative (blue) values.

Planck Satellite Science with Polarization:

The COSMOS Consortium flagship project to analyse and interpret data from ESA’s Planck Satellite has gone forward, building on the success of the 2013 First Data Release to analyse the Planck Full Mission data, with CMB polarization, in results announced in December 2014. DiRAC resources were used to derive accurate estimates of cosmological parameters, showing interesting deviations from earlier WMAP results, and to check the consistency of the temperature results against the polarization data. Planck data were used to impose world-leading limits on cosmic non-Gaussianity that substantially constrain inflationary Universe paradigms. The three-point correlator (bispectrum) was evaluated to high precision for the first time in both temperature and polarization (and mixed bispectra), with a wide-ranging model survey undertaken, which yielded tight constraints on standard local, equilateral and orthogonal non-Gaussianity, as well as a measurement of the lensing bispectrum.

Extreme Universe – Black Holes:

Consortium members continue to perform world-leading black hole simulations, developing new methods to study black-hole collisions – events so violent their output eclipses the entire electromagnetic universe. Our simulations on COSMOS demonstrate that up to 50% of the total energy can be converted into gravitational waves in such collisions, supporting present assumptions in the analysis of collision search experiments at the LHC. We have also been preparing gravitational wave templates for the Advanced LIGO experiment, which will begin operating in 2015. In addition, we have developed and tested the numerical relativity code GRChombo, the first fully adaptive-mesh-refinement code which can be used to simulate black hole collisions and gravitational wave generation in early universe cosmology.

Galaxy Surveys, Dark Energy and Dark Matter:

Consortium resources have been deployed to analyse galaxy surveys through N-body simulations to better understand the properties of the universe and its perturbations. Much of the focus has been on preparing for the Dark Energy Survey (DES), as well as ambitious future surveys with DESI, Euclid and SKA. Power spectrum measurements from Baryon Oscillation Spectroscopic Survey (BOSS) data and from mock catalogues corresponding to the Data Release 10–12 samples were performed using COSMOS.


 

Domain Wall Highlight

We have pursued the first ever lattice simulations of Quantum Chromodynamics (QCD), the strong force between quarks and gluons, with continuum-limit results from chiral lattice quarks at their physical masses. Only left-handed quarks couple to the electroweak forces. Our chiral approach reproduces the handedness symmetry of continuum QCD and is particularly good for determining the rates of weak decay processes involving left-handed quark operators. This preference for left-handed over right-handed shows that the weak forces, surprisingly, break reflection (or parity) symmetry. Our results confirm the strong force as the confining theory of hadrons (particles made from quarks). The mass spectrum and pion decay constant agree with experimental determinations at the fraction-of-a-percent level, and allow determination of the V_us quark flavour-mixing parameter of the Standard Model. These and the following achievements are crucial ingredients in interpreting the results from the large experimental facilities (e.g. at CERN).


Figure 1. Left: the fit to simulation data for the decay constant, from arXiv:1411.7017. Right: the action density for one of the gauge fields generated on the DiRAC facility.

Quark Masses:

We have determined the up/down and strange quark masses – fundamental constants of nature – to a fraction of a percent.

Neutral Kaon oscillations:

In the Standard Model (SM) of particle physics, weak interactions allow quarks to change from one type, or flavour, to another in a highly constrained pattern. Quark flavour-changing processes with no accompanying change in electric charge are very rare and present a small background against which to search for new physics. The SM allows electrically-neutral kaons to oscillate into their own antiparticles in a way which breaks the combined symmetry of particle-antiparticle exchange (charge) and mirror reflection (parity), known as CP symmetry. CP violation is essential to enable the dominance of matter over antimatter in the Universe and was first discovered in the neutral kaon system. We have calculated the neutral kaon oscillation amplitude B_K to 0.3% accuracy.

Algorithms:

Vigorous activity by UKQCD has leveraged STFC investment in DiRAC with around a 200x speedup from code and algorithmic improvements:

  • UKQCD scientists worked for five years with IBM research to help design the BlueGene/Q chip (being responsible for 8% of the chip area);
  • UKQCD have developed new multi-grid algorithms that adaptively accelerate the calculation of quark propagators;
  • A volume averaging technique developed by RBC-UKQCD, known as AMA, has lowered our statistical errors to around the 0.2% level (a sketch of the idea follows this list).
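
The idea behind AMA can be conveyed in a few lines: many cheap, low-precision (“sloppy”) solves carry the statistics, and a handful of expensive exact solves remove the bias. A minimal Python sketch with toy stand-ins for the solves (illustrative only, not lattice code):

    import numpy as np

    rng = np.random.default_rng(1)

    def sloppy_measure(x, fluct):
        # cheap, low-precision solve: shares the per-source fluctuation
        # with the exact solve but carries a small systematic bias
        return np.sin(x) + fluct + 0.02

    def exact_measure(x, fluct):
        # expensive, unbiased solve on the same source/configuration
        return np.sin(x) + fluct

    xs = rng.uniform(0.0, np.pi, 1000)        # many source positions
    flucts = rng.normal(0.0, 0.10, xs.size)   # per-source statistical noise

    sloppy_avg = np.mean(sloppy_measure(xs, flucts))              # 1000 cheap solves
    bias_corr = np.mean(exact_measure(xs[:20], flucts[:20])
                        - sloppy_measure(xs[:20], flucts[:20]))   # 20 exact solves
    print(sloppy_avg + bias_corr)  # unbiased estimate with near-sloppy statistics

Because the exact and sloppy solves are evaluated on the same sources, the correction term has a very small variance, which is what makes the trick pay off.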

 

ECOGAL

Large Scale Triggering of Star Formation and Molecular Clouds:

This project has received the bulk of our effort over the past 12 months. We have been running 127-million-particle SPH simulations of a spiral shock to study the formation of molecular clouds and establish the first self-consistent initial conditions for simulations of star formation. These simulations follow the formation of molecular clouds along a 4 kpc length of a spiral arm with a total mass of 20 million solar masses, and can resolve the formation of self-gravitating structures down to 10 solar masses. We have been studying the importance of the thermal physics in triggering star formation, as well as performing several tests to ensure that the dynamics of the shocked regions are properly captured. A companion study using grid-based simulations, investigating the dynamics of single clouds entering a spiral arm, has been completed. This study showed that the gravitational potential, plus shock-induced hydrodynamical instabilities, can drive turbulence into the cloud. This reinforces the conclusions from our Smoothed Particle Hydrodynamics simulations.

We are now developing full galactic models of the ISM including live potentials. This work will form a significant part of our simulation work in 2015.

The Galactic Centre:

We have concluded studies on star formation in the Galactic centre and have expanded our scope to study the dynamics of the Central Molecular Zone, including the “Molinari” ring. These simulations show how a single cloud can reproduce the overall observed features. A detailed comparison with observations is ongoing. We have also used larger-scale simulations of the inner 500 pc of the Galaxy to study how the non-axisymmetric structures and infalling gas interact, in order to probe the relationship between the CMZ and the inner star formation activity.


2015 Highlights

At DiRAC we are proud to have made some major contributions to the areas of theoretical modelling and HPC-based research in the STFC Frontier Science agenda. Below you can find information on the Science Highlights from some of the projects that DiRAC is involved in…

2015 list of publications


 


Stress Testing the Standard Model

A vacuum is never completely empty space but teems with particles that are created fleetingly by quantum fluctuations in energy and then disappear. The heavier the particle, the less time its brief existence can last. In that time, however, the particle can interact with the much lighter particles in our everyday world and thereby reveal its existence through tiny discrepancies in the properties of these particles from those expected in the Standard Model. The magnetic moment of the muon shows such a discrepancy, a tantalizing 25 ± 9 parts in 10¹⁰. To pin this down more accurately, a new experiment, E989 at Fermilab near Chicago, will reduce the experimental uncertainty by a factor of 4. An improved theoretical uncertainty from the Standard Model to match this needs lattice QCD calculations. This year HPQCD has developed a new method using DiRAC that will enable the accuracy needed to be achieved.


Figure 1. Determination of meson decay constants, compared to experimental results where available. HPQCD, arXiv:1503.05762, arXiv:1408.5768.

The muon is a heavier cousin of the electron with the same electric charge and spin, so that it carries a magnetic moment. The ‘gyromagnetic ratio’, g, measures the ratio of magnetic moment to spin (in units of e/2m). Naively, g=2 but this ignores the interactions of the muon with the cloud of virtual particles discussed above. The anomalous magnetic moment, aμ = (g – 2)/2, can be measured very accurately from the difference between the circulation and spin precession frequencies as muons fly round a ring transverse to a magnetic field.

The biggest effect in the Standard Model result for aμ comes from Quantum Electrodynamics, where very high order calculations can be done. The largest theoretical uncertainty is from Quantum Chromodynamics and the process of Fig. 1, in which a muon radiates a photon that briefly becomes a quark-antiquark pair. This ‘hadronic vacuum polarisation’ (HVP) contribution needs to be calculated to better than 0.5% in lattice QCD to reduce the theoretical uncertainty. Our new method (arXiv:1403.1778, Phys. Rev. D89:114501) makes this possible, having achieved 1% for the first time for the strange quark HVP contribution. Our method has low systematic errors and DiRAC enabled high statistics on realistic gluon field configurations. The up/down quark calculation is now underway. Errors will be reduced by a further factor of 3 as we increase statistics by a factor of 10 in 2015.


 

Kinetic Plasma Turbulence:  Determining the shape of Type Ia supernova explosions

Type Ia Supernovae (SNe Ia) are among the most energetic phenomena in the Universe and are important for many different astrophysical research fields (e.g. theories of binary stellar evolution, chemical enrichment of galaxies, and cosmology). These spectacular events are believed to be associated with thermonuclear explosions of white dwarf stars, but answers to the questions of when, why and how the explosions are triggered remain unclear.


Figure 1. Flux and polarization spectra (black) predicted 7 days before maximum for the merger model of Pakmor et al. (2012) viewed from a specific orientation. For comparison, grey lines show observed spectra of SN 2004dt at the same epoch.

In the past few decades spectropolarimetry has played a leading role in the search for a comprehensive picture of SNe Ia (Wang & Wheeler 2008). The particular value of this technique is its ability to probe the geometry of the explosion: linear polarization, attributed to electron scattering, has been detected in SNe Ia, suggesting that asymmetries in the progenitor system and/or explosion mechanism are present. However, interpreting polarimetric data is challenging, and careful modelling is required to make the connections between observed spectropolarimetry and specific ejecta geometries. Our current research aims to address this challenge by making polarization predictions from modern multi-dimensional explosion models.

We recently developed a new Monte Carlo technique for use in our radiative transfer code (Kromer & Sim 2009) that allows us to calculate polarization signatures for hydrodynamical explosion models (Bulla et al. 2015). Compared to simpler techniques, this method leads to a substantial reduction in the Monte Carlo noise and thus in the required computing time (a factor of ~50). The combination of this new technique and access to the supercomputing facilities provided by DiRAC has now made it feasible to compute polarization spectra for fully three-dimensional explosion models using simulations that run for only a few days.

In a recent paper (Bulla et al. 2016), we presented polarization calculations for the violent merger of two carbon-oxygen white dwarfs. The violent merger model has received considerable attention in the past few years, but our study is the first to investigate its spectropolarimetric signatures in detail. We demonstrated that the particular merger model of Pakmor et al. (2012) provides a remarkably good match with some highly-polarized SNe Ia (see Fig. 1), but is too asymmetric to reproduce the low levels of polarization commonly seen in many other SNe Ia. We are now extending our studies to alternative explosion scenarios: this will allow us to determine how spectropolarimetry can be most effectively used to distinguish and test explosion theories.

Bibliography: Bulla et al. 2015, MNRAS, 450, 967; Bulla et al. 2016, MNRAS, 455, 1060; Kromer & Sim 2009, MNRAS, 398, 1809; Pakmor et al. 2012, ApJL, 747, L10; Wang & Wheeler 2008, ARA&A, 46, 433.


 

HORIZON UK-Consortium:  Galaxy alignments from cosmological hydrodynamics simulations

Galaxies come in a variety of shapes. Nevertheless, in a simplified view of galaxy formation, one can classify them either as spirals or ellipticals. However, the orientation and ellipticity of these spirals and ellipticals is not random; it depends on their formation history and their environment. In particular, tidal forces from the large-scale structure of the Universe (see figure below) play a role in determining correlations between ellipticity, orientation and the underlying (dark) matter density field. Such correlations are referred to as ‘intrinsic alignments’.

On the other hand, future imaging surveys from the Euclid satellite and the Large Synoptic Survey Telescope aim to measure per-cent-level perturbations of galaxy ellipticities caused by a different effect: the deviation of photons from their original path due to the gravity of these same large-scale structures. This ‘gravitational lensing’ effect constitutes a powerful probe of the nature of the mysterious dark energy that drives the accelerated expansion of our Universe.


Figure 1. Left panel: 28 Mpc slice through the Horizon-AGN simulation cube (140 Mpc on a side), at redshift z=0.5. Right panel: 10x zoom on a region located in the bottom left part of the cube. The different colours indicate gas density (in green), gas metallicity (in blue) and stars (in red). Galaxies are identified as clusters of stars. Their shapes are measured to investigate intrinsic ellipticity correlations that contaminate the weak gravitational lensing signal used by astronomers to constrain the nature of dark energy.

Therefore, improving our understanding of intrinsic alignments, so that they do not contaminate dark energy observables, is paramount. Thanks to the DiRAC facility, our group (Horizon-UK) led an analysis of unprecedented statistical scope and detail of galaxy shape and spin alignments, mining the huge dataset from the state-of-the-art hydrodynamic cosmological simulation Horizon-AGN. These pioneering results are reported in Chisari et al., 2015, MNRAS, 454, 2736.


 

Leicester: First pre-infall mass estimate of a Milky Way satellite galaxy

We have used the DiRAC-2 Complexity cluster to obtain a mass estimate of the Carina dwarf spheroidal (dSph) satellite of the Milky Way at the time when it first fell into the gravitational field of our Galaxy (about 6 Gyr ago). This is the first primordial mass estimate that has been obtained directly from observed data without recourse to matching with cosmological simulations.


Figure 1. Comparison of our mass estimate for Carina (black and red data points with error bars) with predictions for the relationship between stellar mass (y-axis) and dynamical mass (x-axis) obtained via abundance matching of observed galaxies and simulated halos. (Figure reproduced from Ural et al., 2015.)

The dSphs which surround the Milky Way are vital laboratories in which to test theories of galaxy formation. Not only are they the lowest-mass stellar systems known to contain dark matter, but they also populate the faint end of the galaxy luminosity function. Over the past 15 years, several issues have been identified which appear to place cosmological galaxy formation simulations in tension with observations: (1) simulations which contain only dark matter over-predict the numbers of objects around a Milky Way-like galaxy by at least an order of magnitude (the “missing satellites” problem; Moore et al., 1998); (2) simulated haloes exhibit central density cusps in their dark matter profiles, while observations are often claimed to be more consistent with uniform-density central cores (the “cusp-core” problem); (3) simulations predict the existence of dark haloes whose masses should have allowed them to form stars but which are apparently missing from the census of observed galaxies (the “too big to fail” problem; Boylan-Kolchin et al., 2012). The resolution of all three issues requires knowledge of the masses of the dwarf satellites both at the present epoch and at the time of their formation.

Despite the availability of large observed data sets containing measured velocities for thousands of stars in several of the brighter Milky Way satellites, previous analyses of their mass distributions have made many simplifying assumptions. In particular, most models assume that the dwarf galaxy is in dynamical equilibrium and isolated from external perturbations. For small Milky Way satellite galaxies, both these assumptions are of questionable validity. N-body simulations provide a natural way to relax these assumptions and explore the impact of the Milky Way on the evolution of a dwarf galaxy. We have combined a Markov Chain Monte Carlo (MCMC) algorithm with an N-body simulation code to explore the parameter space of models of the Carina dwarf galaxy using 20,000 medium-resolution simulations.
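
In outline, the approach wraps a full simulation inside each likelihood evaluation of a Metropolis-Hastings sampler. A skeletal Python sketch follows; the toy “simulator”, data value and parameter names are all illustrative placeholders (in the real analysis each call was a medium-resolution N-body simulation):

    import numpy as np

    rng = np.random.default_rng(0)
    v_obs, sigma_obs = 6.6, 0.3   # toy 'observed' dispersion (km/s); placeholder

    def run_simulation(log_mass):
        # stand-in for an N-body run returning a mock observable
        # (with simulator noise) for the trial parameters
        return 2.0 * (log_mass - 7.0) + 5.0 + rng.normal(0.0, 0.1)

    def log_likelihood(log_mass):
        return -0.5 * ((run_simulation(log_mass) - v_obs) / sigma_obs) ** 2

    theta = 7.5                  # initial guess for log10(mass)
    logl = log_likelihood(theta)
    chain = []
    for _ in range(2000):
        prop = theta + rng.normal(0.0, 0.1)   # propose new parameters
        logl_prop = log_likelihood(prop)      # one simulation per step
        if np.log(rng.uniform()) < logl_prop - logl:
            theta, logl = prop, logl_prop     # Metropolis accept
        chain.append(theta)

    burn = chain[500:]
    print(np.mean(burn), np.std(burn))        # posterior summary

Running many such chains in parallel, as described below, is what made the 20,000-simulation exploration tractable.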

We identified the Carina dSph as an interesting target for this initial study due to the large amount of available kinematic data and the fact that there were already hints that its outer regions are being disturbed by the Milky Way (e.g. Muñoz et al., 2006). Our algorithm was able to constrain the present-day mass of Carina to be (7±3)×10⁷ solar masses. More interestingly, we were also able to constrain Carina’s mass at the time it first fell into the Milky Way to be 3.9(−2.4,+3.9)×10⁸ solar masses. Our mass estimate is significantly lower than the halo mass generally associated with dSphs of comparable luminosity to Carina. In Ural et al. (2015), we suggested that this may be evidence that galaxy formation becomes stochastic in low-mass haloes.

The calculations for this project consumed approximately 1M core hours on the Complexity cluster of DiRAC-2, as a very significant number of simulations were required to verify the efficacy of the method using artificial data sets before it could be applied to the observed data with confidence. By using the Complexity cluster we were able to run multiple MCMC chains in parallel which greatly reduced the overall time required to complete the project. The work is reported in Ural, Wilkinson, Read & Walker, 2015 (Nature Communications, 6, 7599).


 

Black Holes

Quasars are extremely bright objects that reside in the centres of galaxies. They are believed to consist of a supermassive black hole (some as large as a billion times the mass of our own Sun) that is surrounded by gas. These black holes grow in size by swallowing their reservoir of gas and, in doing so, release an enormous amount of energy into their host galaxies. This energy affects the growth of each galaxy, changing where and when stars form and playing an important role in the different appearances of galaxies that we observe in the Universe today.



Such quasars are very rare in the present day Universe, but were much more active in the past. Because of their brightness, some of the most distant objects we can see are some of the most massive quasars, existing barely 1 billion years after the Big Bang. The presence of these objects raises many questions: how were these objects able to grow to such a large mass so quickly? What do the galaxies that host such objects look like? How does the growth of such a massive black hole affect the surrounding environment?

Using the DiRAC supercomputers at the Universities of Cambridge, Leicester and Durham, researchers at the Institute of Astronomy and Kavli Institute for Cosmology, Cambridge simulated the growth of a galaxy containing a quasar similar to those we observe. These results are summarised in a recent Letter to MNRAS by Curtis and Sijacki, accepted on December 10, 2015. Vital to the work was the use of a new computational technique, developed by Curtis and Sijacki over the last year (2015, MNRAS, 454, 3445) thanks to unique computational resources provided by DiRAC, to significantly increase the resolution of the simulations around the central black hole. This novel technique enabled the capture of the flow of gas around black holes with unprecedented accuracy, in terms of both mass and spatial resolution.

The work by Curtis and Sijacki provides new insights into what we might hope to observe in the near future around these distant quasars with the James Webb Space Telescope and the Atacama Large Millimeter/submillimeter Array. Long thin structures, called filaments, transport gas from the outskirts of the galaxy all the way to the central region. These filaments feed a disc of gas that orbits the black hole. The gas in this disc is relatively cold, which allows it to become dense enough to form stars at a dramatic rate. A fraction of the gas escapes the disc and falls onto the black hole, releasing large quantities of energy. This drives a powerful flow of super-heated gas out of the central galaxy, oriented in such a way that it does not destroy the surrounding disc, which is the key novel result of the model.

In the coming years there will be many observations of quasars in the early Universe. The ways in which they compare with the predictions of Curtis and Sijacki will provide clues to the physical processes that we do not fully understand, allowing deeper insight into the role black holes play in shaping our Universe.


 

VIRGO: Modern cosmological test of gravity

A hot topic in cosmology is to test gravity on scales well beyond our Solar System. 2015 marks the centenary of Einstein’s publication of his General Theory of Relativity (GR), which is still our standard theory of gravity. Although GR has been tested rigorously in the Solar System, its application in cosmology is still largely an extrapolation. Previous cosmological observations did not have the precision to test it accurately, but the situation is changing quickly in light of future galaxy and cluster surveys such as Euclid, LSST, DESI and eROSITA, all of which aim to test GR as a primary science objective. Over the next decade, these observations will determine key cosmological parameters with per-cent-level precision, and bring cosmological tests of GR to a similar precision for the first time. They will also address another critical question in modern cosmology: whether the mysterious accelerated expansion of our Universe is due to some unknown exotic matter (dark energy) or to a different behaviour of gravity on cosmological scales.


Figure 1. The velocity field in a slice of the simulation box for the standard (left) and non-standard (right) gravity model. Colours illustrate the amplitude of the velocity field. Filamentary structures, towards which matter falls, are clearly visible, as are the model differences around these filaments. It is by differences such as those shown here that different gravity models can be tested against observational data.

To fully exploit those billion-dollar surveys and to achieve their scientific potential of testing gravity, it is crucial to be able to accurately predict the observable signatures of non-standard gravity theories. This is a numerically intensive problem, which depends critically on powerful supercomputing resources such as DiRAC-2. Indeed, DiRAC has enabled us to make significant progress over the past few years [1-11]. These efforts have led to the development of state-of-the-art algorithms and simulation codes, such as the first ever parallelised N-body code for such theories, ECOSMOG [1-3], as well as a recent code [8] implementing a novel algorithm to trace light rays across cosmological simulations on the fly. These codes have been used widely in studies in this field, and have ruled out popular non-standard models such as Galileon gravity [7]. More recently, tests on DiRAC led to a more than tenfold improvement in the performance of ECOSMOG for some theories [9], making it possible to study such theories with large, high-resolution simulations in the future.

The numerical studies can also have implications for other fields. For example, the Galileon model features nonlinear differential equations similar to the famous Monge-Ampère equation in the optimal transportation problem, and its study has led to a new stable algorithm to solve this type of equation. These studies have fully equipped us [8-10] with efficient tools to make accurate predictions for a number of cosmological observables, including weak gravitational lensing, redshift-space distortions and clusters of galaxies, for which we expect precise observational data in the next decade. They have prepared us for accurate cosmological tests of gravity and will help to shed light on the origin of the accelerated expansion of our Universe. More exciting progress is expected over the next few years.

Bibliography: [1] Li B., Zhao G., Teyssier R., Koyama K., JCAP, 01, 051 (2012), [2] Li B., Zhao G., Koyama K., JCAP, 05, 023 (2013), [3] Li B., et al., JCAP, 11, 012 (2013), [4] Li B., et al., MNRAS, 428, 743 (2013) [5] Jennings E., et al., MNRAS, 425, 2128 (2012); Hellwing W. A., et al., Phys. Rev. Lett., 112, 221102 (2014) [6] Shi D., et al., MNRAS, 452, 3179 (2015); Cai et al., MNRAS, 451, 1036 (2015) [7] Barreira A., et al., JCAP, 08, 028 (2015); JCAP, 08, 059 (2014) [8] Barreira A.,et al., JCAP, submitted, [9] Barreira A., Bose S., Li B., JCAP, submitted, [10] He J., Li B., Phys. Rev. Lett., submitted


 

Exeter

Accretion is an important process relevant to various fields in astrophysics, from star formation to the study of compact binaries and Type Ia supernovae. Continuous theoretical and numerical efforts have been devoted to understanding the interaction between the inner edge of an accretion disk and the stellar surface. Complementary to these works, there are also many studies devoted to the effect of accretion on the structure and evolution of objects, using either simple semi-analytic approaches or simulations based on one-dimensional (1D) stellar evolution codes. But no multi-dimensional numerical study has ever been devoted to the effect of accretion on the structure of accreting objects. Because of this lack of sophisticated modelling, great uncertainties still remain regarding the fraction of accretion luminosity imparted to an accreting object or radiated away, and how the accreted material is redistributed over the stellar surface and the depth it reaches. Early accretion history may impact the properties of stellar objects (luminosity, radius and effective temperature) in star-forming regions and young clusters even after several Myr, when the accretion process has ceased. These effects could provide an explanation for several observed features in star-forming regions and young clusters, with important consequences for our general understanding of star formation.


Figure 1. Distribution of tracer particles accreted onto a young Sun for the cold (e1) and hot (e9) accretion cases, respectively. The colour scale shows the radial velocity in cm/s and the grey-scale points show the positions of the tracer particles. The cold accretion case shows rapid mixing of the accreted particles within the convective envelope; the hot accretion case shows the formation of a hot surface layer where particles accumulate and do not sink.

The DiRAC resource Complexity has been used to perform the first hydrodynamic simulations describing the multi-dimensional structure of accreting, convective low-mass stars. The hydrodynamical simulations were computed using our fully compressible, time-implicit code MUSIC (Multidimensional Stellar Implicit Code), recently improved with the implementation of a Jacobian-free Newton-Krylov time integration method, which significantly improves the performance of the code. These developments and the benchmarking of the code were carried out using Complexity. These new accretion calculations have enabled us to understand the effect of so-called “hot” accretion, when the accreted material is characterised by a larger entropy than that of the bulk of the stellar material.
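
The flavour of a Jacobian-free Newton-Krylov implicit time step can be sketched in a few lines (an illustrative toy, not the MUSIC code): scipy approximates Jacobian-vector products by finite differences inside a Krylov linear solve, so the Jacobian matrix is never assembled.

    import numpy as np
    from scipy.optimize import newton_krylov

    nx = 64
    dx = 1.0 / nx
    dt = 0.01  # far above the explicit diffusion stability limit ~ dx^2

    def rhs(u):
        # toy nonlinear reaction-diffusion right-hand side (periodic grid)
        lap = (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u) / dx**2
        return lap + u * (1.0 - u)

    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u = np.exp(-((x - 0.5) / 0.1) ** 2)

    for step in range(10):
        u_old = u.copy()
        # backward-Euler residual; newton_krylov solves it without
        # ever forming the Jacobian explicitly
        u = newton_krylov(lambda v: v - u_old - dt * rhs(v), u_old, f_tol=1e-8)

    print(u.min(), u.max())

The payoff of the implicit step is that the time step is set by the physics of interest rather than by the fastest (e.g. acoustic or diffusive) timescale in the star.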

The multi-D simulations show an accumulation of hot material at the surface that does not sink, instead producing a hot surface layer which tends to suppress convection in the stellar envelope. This enables us to derive an improved treatment of accretion in 1D stellar evolution models, based on an accretion boundary condition. Such a treatment of accretion in a 1D stellar evolution code is more physical than the standard assumption of redistribution of the accreted energy within the stellar interior. These new results will be published in Geroux, Baraffe, Viallet et al. (2016, A&A, submitted). The impact of this more physical accretion boundary condition on the evolution of young low-mass stars and brown dwarfs, and on observations of young clusters, will be explored in follow-up studies. These pioneering simulations performed on the DiRAC facility show the potential of using time-implicit multi-D simulations to improve our understanding of stellar physics and evolution.


2016 Highlights

2016 was another productive year for our PPAN theory community researchers, and some of the Science Challenges they tackled are highlighted below.

2016 list of publications


 

Gravitational waves:  DiRAC simulations play a key role in gravitational-wave discovery


Figure 1. Wave signal detected by LIGO (top), waveforms predicted by general relativity (middle) and the residual from matching these (bottom).

On February 11, 2016, the LIGO collaboration announced the first direct detection of gravitational waves and the first observation of binary black holes. Accurate theoretical models of the signal were needed to find it and, more importantly, to decode the signal to work out what the source was. These models rely on large numbers of numerical solutions of Einstein’s equations for the last orbits and merger of two black holes, for a variety of binary configurations. The DiRAC Data Centric system, COSMA5, was used by researchers at Cardiff University to perform these simulations. With these results, along with those of international collaborators, they constructed the generic-binary model that was used to measure the masses of the two black holes that were detected and the mass of the final black hole, and to glean some basic information about how fast the black holes were spinning. Their model was crucial in measuring the properties of the gravitational-wave signal, and the DiRAC Data Centric system COSMA5 was crucial in producing that model. More information on the detection of gravitational waves can be found at the LIGO collaboration website.

In the figure to the right, the top plot shows the signal of gravitational waves detected by the LIGO observatory located in Hanford, USA, whilst the middle plot shows the waveforms predicted by general relativity. The X-axis plots time and the Y-axis plots the strain, which is the fractional amount by which distances are distorted by the passing gravitational wave. The bottom plot shows that the LIGO data match the predictions very closely. (Adapted from Fig. 1 in Physical Review Letters 116, 061102 (2016).)
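
For reference, the strain plotted on the Y-axis has a simple definition (standard convention, with illustrative numbers for scale):

    h(t) = \frac{\Delta L(t)}{L}

so a strain of order 10⁻²¹ across LIGO’s L = 4 km arms corresponds to a length change ΔL of only a few times 10⁻¹⁸ m, far smaller than the diameter of a proton.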


 


Lattice QCD and the Search for New Physics


Figure 1. The interaction between a muon and a photon is complicated by the production of virtual particles such as quarks and gluons.

If you could see deep into the subatomic world with slow-motion glasses, you would see empty space teeming with particles, appearing and disappearing in a tiny fraction of a second. If you were very lucky you might glimpse a particle so heavy and rare as to have escaped detection so far at CERN’s Large Hadron Collider. Such particles would be harbingers of new physics going beyond our existing Standard Model, but are proving very hard to find.

One way to demonstrate their existence indirectly is through the impact they have, as part of the teeming vacuum, on the properties of particles we can study directly. A new experiment will start shortly at Fermilab near Chicago to measure the magnetic moment of the muon (a heavier cousin of the electron) to an astonishing accuracy of 1 part in 10¹⁰. If we can calculate the effect of all the particles in the Standard Model on this magnetic moment, then a discrepancy with experiment would indicate the existence of new particles. An improved theoretical uncertainty from the Standard Model needs numerical calculations for the theory of the strong interaction, QCD. This is because the largest source of uncertainty comes from quarks, which are strongly interacting (see Figure 1); this contribution is known as the hadronic vacuum polarization (HVP). HPQCD has developed a new lattice QCD method using DiRAC to determine the HVP with improved accuracy.


Figure 2. A summary of results for the hadronic vacuum polarization contribution to the magnetic moment of the muon. Our new result (purple square), shows significant improvement on earlier lattice QCD results (pink square). Results in the lower half are derived from experimental data from electron-positron collisions. The green circle shows the value inferred from the current experimental result for the magnetic moment. The discrepancy may be a first sign of new physics.

We used our method to determine the strange and charm quark contributions to the HVP (arXiv:1403.1778) for the first time from lattice QCD, and then the very small bottom quark contribution (arXiv:1408.5768). In the past year we have determined the up and down quark contribution (arXiv:1601.03071) and estimated the effect of disconnected diagrams (lower picture in Figure 1) (arXiv:1512.03270). Our final result, summing all of these contributions, has a total uncertainty of 2% (see Figure 2). The error is dominated by possible systematic effects from ignoring the electric charge of the quarks and from taking the masses of the up and down quarks to be the same. Work is now underway to include both of these effects in the calculation to reduce the uncertainty further, ahead of the new experimental results in 2018.
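
The bookkeeping in that final sum is simple: central values add, and independent uncertainties combine in quadrature (the dominant correlated systematics discussed above are treated separately in the real analysis). A minimal sketch with placeholder numbers, not the published values:

    import math

    # (value, error) pairs in arbitrary units -- placeholders only,
    # NOT the published HPQCD numbers
    contributions = {
        "up/down":      (1.000, 0.020),
        "strange":      (0.100, 0.002),
        "charm":        (0.020, 0.0005),
        "bottom":       (0.0005, 0.0001),
        "disconnected": (-0.015, 0.010),
    }

    total = sum(v for v, _ in contributions.values())
    error = math.sqrt(sum(e * e for _, e in contributions.values()))
    print(f"total = {total:.4f} +/- {error:.4f}")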


 

Post-periapsis Pancakes: Sustenance for Self-gravity in Tidal Disruption Events

A Tidal Disruption Event (TDE), which occurs when a star is destroyed by the gravitational field of a supermassive black hole, produces a stream of debris, the evolution of which ultimately determines the observational properties of the event. Typically, investigations of this process resort to predicting future accretion of material assuming the debris orbits are Keplerian or simulating restricted parameter regimes where the computational cost is a small fraction of that for more realistic parameters. In a series of papers that took advantage of Complexity@DiRAC, we simulated the long-term evolution of the debris stream for realistic TDE parameters, revealing new effects that can significantly affect the observable properties of these events (Coughlin & Nixon 2015; Coughlin, Nixon et al. 2016a,b). The figure to the right is taken from Coughlin, Nixon et al. (2016a: MNRAS 455, 4, 3612). In this work, we showed that a post-periapsis caustic – a location where the locus of gas parcels comprising the stream would collapse into a two-dimensional surface if they evolved solely in the gravitational field of the hole – occurs when the pericentre distance of the star is of the order of the tidal radius of the hole. We showed that this ‘pancake’ induces significant density perturbations in the debris stream, and these fluctuations can be sufficient to gravitationally destabilize the stream, resulting in its fragmentation into bound clumps.
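
For reference, the tidal radius mentioned above is the standard order-of-magnitude criterion (textbook definition, not specific to these simulations):

    r_t \approx R_* \left(\frac{M_{\rm BH}}{M_*}\right)^{1/3}

A star of mass M_* and radius R_* is disrupted when its pericentre distance r_p falls below roughly r_t; the post-periapsis pancake discussed above arises when r_p \sim r_t.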


Figure 1. Column density rendering of the stream of debris produced by a Tidal Disruption Event, showing both the full stream (left) and a zoom-in (right). In this case the stream is strongly self-gravitating, and fragments into bound clumps (Coughlin, Nixon et al., 2016).

As these clumps fall back towards the black hole their accretion can result in strong bursts of radiation, while the low-density regions between clumps can yield relatively little. This makes the luminosity of the event highly time variable, and may be an explanation for the rapid time variability of systems such as Swift J1644. However, the detailed behaviour of the clumps is not yet well understood, as it depends on their internal thermodynamics and how quickly they can cool and condense. If they can cool fast enough, they may be able to survive a second pericentre passage and potentially form planets orbiting the black hole, otherwise they will be re-disrupted at pericentre and join the accretion flow. These processes will be the subject of future DiRAC simulations. Over the next few years new observing missions will provide a substantial increase in the number of events for which we have data. Determining how the luminosity of a TDE varies with time is critical to interpreting these events and discerning the progenitor properties, such as black hole mass and spin.


 

ExoMol

To find out what remote planets orbiting other stars are made of, astronomers analyse the way in which their atmospheres absorb starlight of different colours and compare it to a model, or ‘spectrum’, to identify different molecules. One such object is Kepler-69c, which is 70 per cent larger than Earth. Its 242-day orbit around a Sun-like star resembles that of our neighbouring planet Venus; however, the composition of Kepler-69c is still unknown.


Figure 1. Artist’s impression of the super-Venus Kepler-69c. Image credit: NASA Ames/JPL-Caltech/T. Pyle

In collaboration with NASA Ames we have developed molecular data to model (hot) super-Venuses, a yet-to-be-detected class of object with sulphur-dominated chemistry driven by volcanic activity. According to atmospheric chemical models, among the main constituents of the atmospheres of rocky super-Earths or super-Venuses are sulphur dioxide (SO2) and sulphur trioxide (SO3). These spectroscopically important molecules are generally not included in terrestrial exoplanet atmospheric models due to the lack of laboratory data. We anticipate our new data will have a big impact on the future study of rocky super-Earths and super-Venuses. The recently discovered system of exoplanets TRAPPIST-1 is one existing example where our data will play an important role in future atmospheric retrievals.

Our SO3 data set, named UYT2, is one of the biggest line lists in the ExoMol database. It contains 21 billion vibration-rotation transitions and can be used to model transit IR spectra of volcanic atmospheres for temperatures up to 800 K. In order to accomplish this data-intensive task, we used some of the UK’s most advanced supercomputers, provided by the Distributed Research utilising Advanced Computing (DiRAC) project and run by the University of Cambridge. Our calculations required millions of CPU hours, hundreds of thousands of GPU hours, and up to 5 TB of RAM; processing power only accessible to us through the DiRAC project.


 

COSMOS Consortium

COSMOS consortium researchers exploiting DiRAC HPC Facilities have made progress towards ambitious milestones in five key inter-related areas: (i) extreme gravity, (ii) inflation and the early universe, (iii) cosmic microwave sky, (iv) dark energy and (v) galaxy archaeology. In the limited space available, we highlight recent breakthroughs in general relativistic simulations of black holes and inflation.


Figure 1. GRChombo simulation of gravitational waves from a black hole merger (using LIGO GW150914 parameter values). COSMOS IPCC visualization movie using Intel OSPRay ray-tracing through an AMR dataset (submitted to the SC16 Scientific Visualization Showcase).

On 11 February 2016, the LIGO Scientific Collaboration announced the ground-breaking discovery of a gravitational wave signal (GW150914) representing the first incontrovertible observation of a black hole binary system, as well as the most energetic event ever observed in the Universe. Given recent advances such as our innovative GRChombo AMR code, COSMOS Consortium researchers (several of whom are also LSC members) are well placed to exploit these new scientific opportunities in gravitational wave astronomy. We have continued our long-standing industrial collaboration through the COSMOS Intel Parallel Computing Centre (IPCC) to improve our research codes in cosmology and general relativity; we have focussed in particular on GRChombo and on improving performance on multi- and many-core (Xeon Phi) systems, following in the wake of our 2015 HPCwire Award. Intel has also responded to the demands of ever-increasing dataset sizes by developing a powerful software-defined visualisation framework, including the OpenSWR software rasteriser and the OSPRay parallel ray tracer and volume renderer, a programme in which the COSMOS IPCC is directly involved. These tools have allowed us to create unprecedented real-time visualisations of our science results using multi-terabyte datasets. In 2016 we demonstrated the adaptive mesh capabilities of ParaView powered by OSPRay and OpenSWR by creating videos of GRChombo black hole mergers for two of the largest HPC conferences, International Supercomputing 2016 and Supercomputing 2016, as publicised in insideHPC.

Inflation is now the paradigmatic theory of the early universe, providing a dynamical mechanism to explain many features of our current cosmology. In particular, it explains why the universe is homogeneous over cosmological scales. Nevertheless, can inflation begin in the first place if the initial conditions are not homogeneous? Answering this question requires a numerical approach, as solving the full GR equations is not possible for generic inhomogeneous initial conditions. We attacked this problem with the state-of-the-art numerical relativity code GRChombo; the results are reported in arXiv:1608.04408. We found that high-scale inflation is generically robust: it will inflate despite large inhomogeneities. More interestingly, we found that low-scale inflation, preferred by many string theory phenomenological models, is not robust, failing to inflate in the presence of large-scale but subdominant inhomogeneities.

cosmos2

Figure 2. Spinning black hole in 5D.

One of the biggest unresolved problems in Einstein’s theory of gravity is the weak cosmic censorship conjecture, which posits that spacetime singularities are always hidden inside a black hole horizon. Since nothing can escape a black hole, the breakdown of Einstein’s theory at singularities would be ‘censored’ from the outside world. In our numerical work with GRChombo, we find that the conjecture appears to be false if space has four or more dimensions. For example, the late stages in the evolution of a rapidly spinning black hole in six spacetime dimensions are shown (left): a disk-like structure with a great deal of fractal microstructure, in which the thinner rings are connected by ever-thinner membranes that ultimately lead to a ‘naked singularity’. Given these results in extra dimensions, it is clear that if cosmic censorship is true, then it must be a property specific to the three-dimensional space we live in; without this property, Einstein’s theory of general relativity could lose its predictive power.


 

Understanding the Properties and Stability of Matter

clover1.png

Figure 1. Snapshot of the QCD plus QED vacuum.

The fundamental constituents of the strong force are quarks and gluons, which themselves bind together to form the familiar building blocks of nuclear physics, protons and neutrons. The two most common forms of quarks are the up quark and the down quark. The electric charge of the up quark is +2/3, whereas the down quark carries charge -1/3. A proton is composed of two up quarks and one down quark, adding to a net charge of +1, whereas the neutron has two down quarks and one up quark, producing a charge-neutral object. Lattice QCD simulations have now reached a precision where isospin-breaking effects become important. These effects have two sources: the mass difference of the up and down quarks, and the electromagnetic interactions. Both effects are of the same order of magnitude, so a direct calculation from QCD and QED is necessary.
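
The charge bookkeeping above can be checked directly from the quark charges:

Q(proton)  = 2 × (+2/3) + (−1/3) = +1
Q(neutron) = (+2/3) + 2 × (−1/3) = 0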

Most important for our understanding of the salient features of QCD, such as quark confinement and spontaneous chiral symmetry breaking, is an understanding of the properties of the vacuum. Simulations of QCD and QED allow us to study the effect of dynamical quarks on the vacuum. A snapshot of the vacuum fields is shown in Figure 1, in the form of a three-dimensional slice of space-time. The topological charge density of the QCD fields is rendered together with the magnetic field of QED. The lump of positive topological charge at the lower left of the image is accompanied by a large magnetic field strength, presenting an opportunity to observe the chiral magnetic effect, which separates right- and left-handed quarks. This effect has been predicted theoretically but never observed.

clover2

Figure 2. The allowed region for the quark mass ratio.

Isospin-breaking effects are crucial for our existence. The Universe would not exist in its present form if the neutron-proton mass difference were only slightly different. If it were larger than the binding energy of the deuteron, no fusion would take place. If it were a little smaller, all hydrogen would have been burned to helium. Knowing the masses of the neutron and proton, and how they depend on the masses and charges of the individual quarks, we can express the allowed region in terms of the u and d quark masses and the electromagnetic coupling, as shown in Figure 2. It turns out that both αEM and the ratio of light quark masses must be finely tuned.
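
For reference (the numerical values below are standard measured values, not quoted in the original text), the window is indeed narrow:

m_n − m_p ≈ 1.293 MeV,  B_d (deuteron binding energy) ≈ 2.224 MeV

so the observed mass difference sits below the deuteron binding energy, but only by about 1 MeV.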

Bibliography:
[1] R. Horsley, Y. Nakamura, H. Perlt, D. Pleiter, P.E.L. Rakow, G. Schierholz, A. Schiller, R. Stokes, H. Stüben, R.D. Young and J.M. Zanotti, J. Phys. G43 (2016) 10LT02.
[2] R. Horsley, Y. Nakamura, H. Perlt, D. Pleiter, P.E.L. Rakow, G. Schierholz, A. Schiller, R. Stokes, H. Stüben, R.D. Young and J.M. Zanotti, JHEP 1604 (2016) 093.


 

Virgo
Black Holes as the Nemesis of Galaxy Formation

virgo1.png

Figure 1. An image of a galaxy undergoing an AGN outburst in the EAGLE simulation. The image shows gas density, coloured by temperature, and captures the moment of galaxy transformation. A small merger triggers the growth of a black hole, generating a huge outburst of energy that heats the surrounding gas and disperses the cold gas inflows.

Galaxies fall into two clearly distinct types: active, blue-sequence galaxies that are rapidly forming young stars, and passive, red-sequence galaxies in which star formation has almost completely ceased. These sequences are closely related to the visual galaxy classification system based on the prominence of spiral arms, first suggested by Edwin Hubble.

Numerical simulations by the EAGLE collaboration show that these sequences, and thus the Hubble sequence of galaxies, are created by a competition between star-formation-driven outflows and gas accretion onto the supermassive black hole at a galaxy’s centre.

In galaxies containing fewer than 30 billion solar-like stars, young stars exploding as supernovae are able to drive buoyant outflows that prevent high-density gas from building up around the black hole. As galaxies increase in stellar mass, they convert a larger fraction of the inflowing gas into stars and grow along the blue sequence.

More massive galaxies, however, are surrounded by a hot corona, and this has important ramifications. The supernova-driven outflow is no longer buoyant, and star formation is unable to prevent the build-up of gas in the galaxy, and around the black hole in particular. This triggers a strongly non-linear response from the black hole. Its accretion rate rises rapidly, heating the galaxy’s corona and disrupting the incoming supply of cool gas. The galaxy is starved of the fuel for star formation and makes a transition to the red sequence, and further growth predominantly occurs through galaxy mergers. Interestingly, our own Milky Way galaxy lies at the boundary between these two galaxy sequences, suggesting that it will undergo a transition to a red and ‘dead’ early-type galaxy in the near future.

This model has deep implications for our understanding of the Universe. Far from being mere exotic predictions of General Relativity, supermassive black holes, through their growth, set a fundamental limit on the mass of star-forming galaxies.

virgo004Richard2

Figure 2. Formation timescales of galaxies as a function of stellar mass in the EAGLE simulation (points) and observational data (Ilbert 2015).

The separation of galaxies into blue (rapidly star-forming) and red (passive) sequences is clearly seen in Figure 2. Most low-mass galaxies follow the star-forming blue galaxy sequence, doubling their stellar mass every 3 billion years, but more massive galaxies have much longer star-formation growth timescales (a horizontal dotted line shows the present-day age of the Universe; galaxies with longer star-formation timescales are shown above the line). The transition between the sequences occurs at a stellar mass of around 30 billion solar masses. While it is not possible to reliably measure the masses of black holes in observed galaxies, it is possible to do this for galaxies in the EAGLE cosmological simulation. The simulated galaxies follow the observed data closely, and the points are coloured by the mass of the black hole relative to that of the host halo. In red-sequence galaxies, black holes account for around 0.01% of the halo mass, but the fraction is a factor of 100 smaller in blue-sequence galaxies. Around a galaxy mass of 30 billion solar masses, there is considerable scatter in the relative mass of the black hole and in the star-formation growth timescale. However, at a fixed galaxy mass, systems with a higher black hole mass have substantially longer growth timescales, implying an intimate connection between black hole mass and galaxy type.


 

Black Holes:  Illuminating the Early Universe with Radiative Transfer Simulations

State-of-the-art observations show that, within approximately 1 billion years after the Big Bang, all of the primordial hydrogen that permeated the immense space between galaxies had been ionized by high-energy photons in a process known as reionization. This period in the history of the Universe, also known as ‘the dawn of galaxies’, remains one of the most active frontiers in modern astrophysics, from both an observational and a theoretical point of view.

Much remains unknown about the sources that provided the high-energy photons needed for reionization. The standard picture suggests it was most likely the first generation of galaxies, with a possible additional contribution from radiating supermassive black holes. More exotic solutions have been proposed as well, including annihilating dark matter. Identifying the sources responsible for reionization, and understanding their properties, is key to constraining the conditions that governed all subsequent galaxy formation in the Universe.

blackholes.png

Unfortunately, our current telescopes probe only the tail end of this epoch. For this reason, theoretical and computer models are relied upon for insight into the physics of reionization. Simulating this regime is both complicated and computationally expensive; however, over the past two years, researchers at the Institute of Astronomy (IoA) and the Kavli Institute for Cosmology, Cambridge (KICC) have designed a new, state-of-the-art computer algorithm, which they are exploiting on DiRAC supercomputers at the Universities of Cambridge, Leicester and Durham to better understand the epoch of reionization. This new algorithm (presented in a recently submitted paper by Katz et al. 2016) follows the flow of high-energy photons, coupled directly and on-the-fly to the chemical and thermodynamic state of the gas, propagating them at different speeds depending on the density of the medium. This overcomes many issues with standard algorithms in the field at much-reduced computational cost, allowing us to simulate significantly more complex systems than was previously possible.
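
The density-dependent propagation speed is the key idea. The toy 1D sketch below illustrates it only in spirit: photons are advected at a locally reduced speed of light that is smallest in dense gas, where ionization fronts move slowly. The reduction factors and density threshold here are invented for illustration, and the actual Katz et al. scheme is a far more sophisticated radiative transfer solver, not this simple upwind advection.

import numpy as np

C = 1.0  # speed of light in code units

def c_eff(density, rho_crit=1.0):
    """Hypothetical locally reduced light speed: slower in dense gas."""
    return np.where(density > rho_crit, 0.01 * C, 0.1 * C)

def advect_photons(n_photon, density, dx, n_steps):
    """March a photon number density to the right with first-order upwinding."""
    for _ in range(n_steps):
        speed = c_eff(density)
        dt = 0.5 * dx / speed.max()  # CFL-limited timestep
        flux = speed * n_photon
        n_photon[1:] -= (dt / dx) * (flux[1:] - flux[:-1])
    return n_photon

density = np.full(200, 0.1)
density[120:140] = 10.0          # a dense clump that slows the front locally
n_photon = np.zeros(200)
n_photon[:10] = 1.0              # pulse of photons entering from the left
n_photon = advect_photons(n_photon, density, dx=1.0, n_steps=50)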

The team at the IoA and KICC has thus, for the first time, generated a multi-scale view of galaxy formation during this epoch, from the large-scale cosmic web all the way down to individual star-forming regions inside primordial galaxies. Because the flow of radiation is followed explicitly, the researchers have used an inhomogeneous radiation field to compute the distribution of emission lines from ionized metals (such as [CII] and [OIII]) which can be observed by some of our most sensitive telescopes. Likewise, they have self-consistently tracked the location and mass of ionized, neutral and molecular gas to make testable predictions about these quantities, which control star formation in the early Universe.

In the coming years, new observations from the James Webb Space Telescope, the Atacama Large Millimeter Array and the Square Kilometre Array will shed light on this exciting frontier and will be compared with these simulated models to give insight into the physical processes that drive ‘the dawn of galaxies’.


 

Hadron Spectroscopy:  Baryon Mass Splittings

In Cheung et al. [JHEP 1612 (2016) 089, arXiv:1610.01073] we computed spectra of highly excited charmonia and charmed mesons with light (up and down) quark masses corresponding to mπ ≈ 240 MeV, closer to their physical masses than in our earlier study, where mπ ≈ 390 MeV. A highlight of the results was the presence of hybrid mesons (in which the gluonic field is excited), some of which had exotic quantum numbers (i.e. they cannot be solely a quark-antiquark pair). The resulting phenomenology of hybrid mesons gave insight into the effective low-energy gluonic degrees of freedom. One main conclusion was that the spectra showed only mild dependence on the light-quark mass, and the pattern of states, in particular of the hybrid mesons, did not change significantly as the light-quark mass was reduced. This is an important systematic check and suggests that the interesting phenomenology of excited and hybrid mesons we found is not an artefact of working with unphysically heavy light-quark masses.

hadspecnew.png

Figure 1. The spectrum of excited D mesons (containing a charm quark and a light antiquark).

Figure 1 shows the spectrum of excited D mesons (containing a charm quark and a light antiquark) labelled by JP (J is spin, P is parity): green and red boxes are the computed masses and one-sigma statistical uncertainties with red highlighting states identified as hybrid mesons; black lines are experimental values.

hadspec1.png

Figure 2. The various S-wave (orbital angular momentum = 0) cross sections: upper (lower) panel shows quantities proportional to the diagonal (off-diagonal) cross sections.

In Moir et al. [JHEP 1610 (2016) 011, arXiv:1607.07093] we presented the first ab initio study of coupled-channel scattering involving charm mesons (and the first lattice QCD study of three coupled scattering channels). Working with light-quark masses corresponding to mπ ≈ 390 MeV, we determined infinite-volume Dπ, Dη, DsK̄ isospin-1/2 scattering amplitudes from computations of finite-volume spectra. Figure 2 shows the various S-wave (orbital angular momentum = 0) cross sections: the upper (lower) panel shows quantities proportional to the diagonal (off-diagonal) cross sections. The singularity structure of the amplitudes showed a near-threshold bound state in JP=0+ corresponding to the physical D0*(2400) resonance, a deeply bound state with JP=1− corresponding to the D*, and evidence for a narrow JP=2+ resonance. This study represents a significant step towards addressing some puzzles in the spectroscopy of charmed mesons.


 

Horizon

Initial conditions for galaxy formation simulations are specified within the ΛCDM scenario by a Gaussian random field. This leads to a troubling lack of control: we often cannot ask precisely what leads to certain outcomes for a galaxy. What is it about a galaxy’s mass, environment or history that gives it its shape or its colour? Imagine we could control the initial conditions of a galaxy in cosmological simulations and tweak them to alter one aspect of its formation while keeping the other aspects the same.

horizon.png

The projected density of dark matter at redshift 3 (top row) and 2.3 (middle row) for three simulations with nearly identical initial conditions, but “genetically modified” to produce slightly different merger histories for the main galaxy at the centre of the dark matter images. The bottom row shows the main galaxy in rest-frame IVU colours. The galaxy becomes increasingly quiescent, red and spheroidal as the merger ratio increases from left to right.

Thanks to the DiRAC facility, our group has run the first generation of high-resolution cosmological simulations that alter the “DNA” of a galaxy (Roth et al. 2016, MNRAS, 455, 974; Pontzen et al., arXiv:1608.02507). Roth et al. (2016) set out the “genetic modification” approach, and Pontzen et al. showcase its power (see figure) to understand what quenches star formation in a galaxy. By minimally altering the linear overdensity in the cosmological initial conditions, Pontzen et al. change the merger history of a galaxy while leaving its final halo mass, large-scale structure and local environment unchanged. The changing merger history influences the impact of feedback from Active Galactic Nuclei, leading to three very different fates for the galaxies: star-forming, temporarily quenched, and permanently quenched.
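
In essence, the modification is a constrained update of a Gaussian random field: change the field as little as possible while forcing a chosen linear property, such as the mean overdensity of a region, to a target value. Below is a minimal sketch assuming a white-noise (identity) covariance for simplicity; the published method applies the analogous Hoffman-Ribak update with the full ΛCDM correlation matrix, so the one-line correction here is illustrative only.

import numpy as np

rng = np.random.default_rng(42)
field = rng.standard_normal(256)        # unmodified toy initial conditions

# Constraint: set the mean overdensity of a chosen region to a target value,
# e.g. to raise or lower the significance of a density peak.
region = slice(100, 132)
a = np.zeros_like(field)
a[region] = 1.0 / (region.stop - region.start)   # averaging operator
target = 0.5                                     # desired mean overdensity

# Minimum-change update for identity covariance:
#   field' = field + a (target - a.field) / (a.a)
correction = a * (target - a @ field) / (a @ a)
modified = field + correction

assert np.isclose(modified[region].mean(), target)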


 

Heavy Elements:  Linking the Nuclear Interaction to the Structure of Heavy Elements

heavy.png

Figure 5 from Phys. Rev. Lett. 117, 052501 (2016) (Editor’s Suggestion).

A recent international collaboration between the UK, France, the US and Canada has performed systematic studies of both nuclear radii and binding energies in (even) oxygen isotopes, from the valley of stability to the neutron drip line. Both charge and matter radii are compared to state-of-the-art ab initio calculations, along with binding-energy systematics. Experimental matter radii were obtained through a complete evaluation of the available elastic proton-scattering data on oxygen isotopes. The ab initio calculations were made possible by High Performance Computing resources and showed that, in spite of a good reproduction of binding energies, conventional nuclear interactions derived within chiral effective field theory fail to provide a realistic description of charge and matter radii. A novel version of two- and three-nucleon forces leads to considerable improvement in the simultaneous description of all three observables for stable isotopes, but shows deficiencies for the most neutron-rich systems. Crucial challenges related to the development of nuclear interactions therefore remain. Phys. Rev. Lett. 117, 052501 (2016), Editor’s Suggestion.

Uncategorized