Black Hole Highlight:  The environment of bright QSOs at z ~ 6

cosmos12.png

Figure 1. The distribution of massive black holes in different patches of the 900 million year old Universe according to the recent simulations carried out at the IoA/KICC. Black holes are both more massive and more numerous as the overdensity increases.

Observations of nearby galaxies and high redshift quasars suggest that black holes are present in the majority of galaxies. The SDSS, UKIDSS and VISTA surveys have revealed numerous high-redshift QSOs harbouring black holes with masses in excess of a billion solar masses at a time when the Universe was less than one billion years old (about 10% of its current age). The fact that such massive black holes had managed to grow by this time provides a significant challenge for theoretical models of the formation and growth of supermassive black holes.

Researchers at the IoA/KICC have performed a suite of state-of-the-art numerical simulations which self-consistently follow the growth of black holes from very early times to the present day. By simulating a large volume of the Universe, they investigated the growth of massive black holes in a variety of cosmological environments. The images shown in Figure 1 illustrate the distribution of gas in different patches of the 900 million year old Universe. Black holes are far more numerous and massive in the most overdense regions of the Universe. The biggest black circles in the bottom row correspond to black holes with masses close to one billion solar masses. Only very massive black holes located in highly overdense regions were found to give rise to peak luminosities in agreement with observational estimates for the brightest high-redshift quasars, as shown in Figure 2. This finding indicates that these quasars should reside in some of the rarest dark matter haloes of the time. Members of the IoA/KICC also used these simulations to study so-called AGN feedback: the interaction between massive black holes and the gas in their surroundings. They found that the energy released by accretion onto the most massive black holes gives rise to powerful outflows that push gas at speeds exceeding a thousand km/s. The comparison of simulated quasar outflows with observations will provide an essential tool for pinning down the physical mechanisms underlying the interaction of massive black holes and their host galaxies.
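As a rough yardstick for the luminosities involved, the relevant scale is the Eddington luminosity, at which radiation pressure on the infalling gas balances gravity. A minimal back-of-the-envelope sketch (standard physical constants, not output from the simulations themselves):

```python
import math

# Back-of-the-envelope Eddington luminosity: the luminosity at which
# radiation pressure on electrons balances gravity. Constants in cgs.
# Illustrative only -- not output from the IoA/KICC simulations.
G       = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
m_p     = 1.673e-24    # proton mass [g]
c       = 2.998e10     # speed of light [cm/s]
sigma_T = 6.652e-25    # Thomson cross-section [cm^2]
M_sun   = 1.989e33     # solar mass [g]

def eddington_luminosity(m_bh_in_solar_masses):
    """Eddington luminosity in erg/s for a black hole of a given mass."""
    M = m_bh_in_solar_masses * M_sun
    return 4.0 * math.pi * G * M * m_p * c / sigma_T

# A billion-solar-mass black hole, as for the brightest z ~ 6 quasars:
print(f"L_Edd = {eddington_luminosity(1e9):.2e} erg/s")   # ~1.3e47 erg/s
```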


COSMOS Highlight:  Planck Satellite Science – Searches for Non-Gaussianity

The ESA Planck satellite, launched in May 2009, provides an unprecedented high-resolution survey of the temperature of the cosmic microwave background (CMB) radiation. COSMOS and HPCS@DiRAC resources were vital for the science exploitation efforts in several key Planck papers. This effort has led to new estimates of cosmological parameters – shown in Figure 1. Planck has also crossed important qualitative thresholds, making gravitational lensing and the study of non-Gaussian statistics truly quantitative subject areas for the first time.

Using the unique capabilities of the COSMOS system, consortium members have used Planck data to undertake the most stringent tests of the inflationary paradigm to date by studying the prediction that primordial fluctuations should have Gaussian statistics (most results appear in the ground-breaking Planck Non-Gaussianity paper (dp002.03)). The three-point correlator (“triangles on the sky”), or bispectrum, was evaluated to high precision for the first time (Figure 1); it is a 3D tetrahedral object depending on three triangle multipoles l1, l2, l3. Searches were undertaken for the standard forms of local, equilateral and orthogonal bispectra, described by the ‘nonlinearity’ parameter fNL.
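For reference, the bispectrum and the local-type fNL can be written as follows (standard conventions, not specific to the Planck pipeline):

```latex
% The reduced bispectrum b is defined from the three-point correlator
% of the multipole coefficients a_{lm} of the CMB temperature map:
\langle a_{\ell_1 m_1}\, a_{\ell_2 m_2}\, a_{\ell_3 m_3} \rangle
  = \mathcal{G}^{\,\ell_1 \ell_2 \ell_3}_{m_1 m_2 m_3}\; b_{\ell_1 \ell_2 \ell_3},
% where G is the Gaunt integral enforcing the triangle conditions.
% For the local shape, the primordial potential is expanded as
\Phi(\mathbf{x}) = \Phi_{\mathrm{G}}(\mathbf{x})
  + f_{\mathrm{NL}} \left[ \Phi_{\mathrm{G}}^{2}(\mathbf{x})
  - \langle \Phi_{\mathrm{G}}^{2} \rangle \right],
% so that f_NL = 0 corresponds to exactly Gaussian fluctuations.
```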

The resulting limits on fNL, up to four times stronger than those from WMAP, significantly constrain broad classes of alternative inflationary models. Despite the stringent constraints on scale-invariant models, the Planck bispectrum reconstructions exhibit large non-Gaussian signals (Figure 1), inspiring further investigation of other non-Gaussian models using the separable modal methods developed on COSMOS. The two most promising classes of models appear to be those with excited initial states (non-Bunch-Davies vacua) and those exhibiting periodic features, both with apparently high significance. Intensive non-Gaussian analysis of the Planck data is ongoing for a broad range of primordial models, as well as for second-order late-time effects such as ISW-lensing.

cosmos11.png

Figure 1. CMB bispectrum reconstruction obtained from the Planck data employing Fourier modes (left) and higher resolution polynomial modes (right); several isosurfaces are shown both positive (red) and negative (blue). The predicted ISW-lensing signal can be seen along the tetrahedral edges (for the first time) and apparent ‘oscillatory’ features in the interior.


ECOGAL Highlight:  Galactic scale models of star formation

Galactic-scale models of star formation focus on resolving how large-scale gas flows in spiral galaxies form the dense gas clouds in which star formation occurs, on scales 10,000 times smaller. The rate at which stars form in galaxies appears to be linked to global as well as local properties. Using DiRAC, we have performed the largest-scale numerical simulations that can resolve the dense regions where stars form, and hence directly study the physics that drives star formation. We are using these simulations to understand how star formation is initiated and what determines its resulting properties, including why it has such a low efficiency (~1%).

ecogal1.png

Figure 1. The dense gas formed from a spiral shock is viewed perpendicular to the plane of the galaxy.

There also exist ‘special’ regions, such as the centre of our Galaxy, where star formation is more exotic and primarily forms only very massive stars. Using realistic models for the Galaxy’s gravity, we are studying the dynamics of gas clouds and how they can produce the observed structures and star formation events, including the young stars that orbit the super-massive black hole.

In order to construct a full theory of star formation, we also need to include the additional physics of magnetic fields and of radiative and kinetic feedback from young stars. Constructing these self-consistent models, akin to modelling a full galactic ecology, requires the ability to follow 100 to 1000 million individual elements in the galaxy and can only be performed with the HPC resources provided through DiRAC.

ecogal2.png

Figure 2. A gas cloud moving on an eccentric orbit in the Galactic Centre is tidally sheared to form a nearly complete stream.


EXETER Highlight:  The formation of a star and its magnetized outflows in radiation magneto-hydrodynamic simulations

As a gravitationally unstable molecular cloud core collapses to form a star, it passes through several distinct phases. Initially it collapses almost isothermally, as energy is easily radiated away. Eventually, however, the density rises to the point where radio and infrared radiation can no longer freely escape from the gas, and the temperature begins to increase. This substantially increases the pressure, resulting in the formation of a pressure-supported gaseous object with a typical size of ~10 AU (known as the first core). This object grows in mass as it accretes surrounding gas and, as it does so, its central temperature continues to rise. When the central temperature reaches ~2000 K, the molecular hydrogen from which it is primarily composed dissociates into atomic hydrogen. This is an endothermic process, and it results in a second collapse occurring inside the first core. This second collapse continues until the hydrogen is completely atomic, whereupon a second, or stellar, core is formed with a size similar to that of our Sun. This object then accretes to its final mass to become a star. If reasonably strong magnetic fields are present initially, outflows from both the first core and the stellar core can be launched at speeds of several km/s. These outflows carry some of the mass, angular momentum, and energy away from the forming star.

exeter11.png

Figure 1. Visualizations of the magnetic field direction in the outflows from the first cores (top) and stellar cores (bottom) in three calculations with different initial magnetic field strengths (magnetic field strength increases from left to right). Published in Bate, Tricco & Price (2013).

The DiRAC facility, Complexity, has been used to perform the first fluid dynamical simulations that include both magnetic fields and radiation transport to follow this whole process. Such calculations are essential for predicting what may be observed in high-resolution observations that will be made using the Atacama Large Millimetre/Submillimetre Array (ALMA) over the next few years, and for interpreting those observations. In the Figure, we show visualizations of the magnetic field structure in the outflows. Animations of the calculations are at: http://www.astro.ex.ac.uk/people/mbate/Animations/stellarcore.html and the full dataset from the calculations is on the Open Research Exeter archive at http://hdl.handle.net/10871/13883 .


ExoMol

exomol11.png

Figure 1. Absorption of methane at T = 1500 K – the new theoretical spectrum (10to10) compared with experiment (HITRAN12).

The aim of the ExoMol project is to compute comprehensive spectroscopic line lists for hot molecules thought likely to occur in the atmospheres of exoplanets, brown dwarfs and cool stars. The list of such molecules is quite long, but only those with four or more atoms require the use of DiRAC resources. Our calculations have been split between two computers located at Cambridge: Darwin, which was used to diagonalise small matrices and to compute transition intensities, and COSMOS, which was essential for diagonalising larger matrices. We encountered some performance issues with COSMOS when performing large matrix diagonalisations. This problem was referred to SGI, who have very recently supplied us with a new diagonaliser for COSMOS. Our initial tests with this new diagonaliser are very encouraging and suggest that our performance issues should largely be resolved. However, we have yet to use the new diagonaliser in production runs, and the results discussed below did not employ it.
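Schematically, each step of a line-list calculation reduces to diagonalising a symmetric Hamiltonian block and contracting the eigenvectors with a dipole matrix to obtain transition intensities. A toy sketch of that workflow (random stand-in matrices, not the production code or data – the real methane blocks are vastly larger):

```python
import numpy as np
from scipy.linalg import eigh

# Toy stand-in for one symmetry block of a ro-vibrational Hamiltonian.
# Real blocks in the 10to10 calculation were diagonalised on COSMOS.
rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                       # symmetric Hamiltonian matrix

energies, vecs = eigh(H)                # eigenenergies and eigenvectors

# Transition line strength |<f|mu|i>|^2 from a (toy) dipole matrix.
B = rng.standard_normal((n, n))
mu = (B + B.T) / 2
i, f = 0, 10
line_strength = np.abs(vecs[:, f] @ mu @ vecs[:, i]) ** 2
print(energies[f] - energies[i], line_strength)
```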

Work using DiRAC during 2013 focussed on four main molecules: methane (CH4), phosphine (PH3), formaldehyde (H2CO) and sulphur trioxide (SO3). Methane is a major absorber in hot Jupiter exoplanets, brown dwarfs and cool carbon stars. There is huge demand for a reliable hot methane line list, and several groups are working towards this objective. This project was therefore given priority. Our main result is that we have generated a new methane line list, called 10to10, which contains just under 10 billion transitions. This line list is complete for wavelengths longer than 1 μm and temperatures up to 1500 K. It is by some distance the most comprehensive line list available for methane (see Fig. 1). It is currently being actively used by about a dozen groups worldwide to model methane in a variety of astronomical objects (and by one group for studies of the Earth’s atmosphere). Fig. 2 shows the spectrum of the brown dwarf 2MASS J0559-1404 compared with our simulations.

Our tests on this line list show that, for photometric studies of the K, H and J bands, previously available line lists (a) agree well with 10to10 at 300 K in the K and H bands but significantly underestimate J-band absorption, owing to the lack of experimental data in this region; and (b) seriously underestimate absorption by methane in all bands at temperatures above 1000 K. We have also completed initial, room-temperature line lists for PH3 and SO3, and a full line list for hot H2CO.


HORIZON Highlight

Dark matter is one of the strangest puzzles in physics: the universe seems to be composed of far more stuff than we can see directly. The evidence is overwhelming, yet indirect. Over the past few decades, astronomers have been able to measure the tug of gravitation in a huge range of environments – from galaxies, through clusters of hundreds of galaxies, to the early universe as seen in the cosmic microwave background – and we keep coming to the same conclusion: the tug is surprisingly strong. To explain it, one needs around six times more matter than is seen directly. The missing five sixths is known as dark matter.

But attempts to detect particles of this matter have given no clear result. That doesn’t mean dark matter doesn’t exist, but it does mean we can’t be certain what it’s made from. If we can’t manufacture or find particles of it here on Earth, the only way to make progress is to keep studying its behaviour in the sky.

horizon11.png

Figure 1.

As our ability to infer the presence of dark matter in the night sky has improved, we can now measure how the invisible mass is distributed through individual galaxies. In the meantime, increasingly powerful computers have allowed the expected distribution to be predicted (based on assumptions about the particle’s nature). It has proved exceptionally difficult to get these two methods to agree – perhaps pointing to an exotic dark matter particle that behaves in unexpected ways, or alternatively suggesting that the entire dark matter idea is on the wrong track.

Using the DiRAC-2 Complexity cluster, the Horizon collaboration has been studying this problem from a different perspective. Perhaps the difficulty in making theoretical ideas agree with the real Universe stems from an incomplete physical picture of what goes on inside galaxies. We know that vast amounts of energy are available – young stars pump out light and occasionally explode as supernovae. If some of this energy is transferred into the dark matter, its distribution can be radically reshaped.

We established almost two years ago that there are viable physical mechanisms for transporting stellar energy through to the dark matter distribution (dp016.6). Since then, we have been studying with higher-resolution simulations how sensitive this mechanism is to the fine details of how the gas behaves (e.g. dp016.7) – always the hardest thing to get right in galaxy simulations, because there are so many factors (including resolution, the cooling rates and the coupling of stellar populations to the surrounding gas). Our latest work (Pontzen et al. 2013, MNRAS submitted) shows that even if one gets many of these details intentionally wrong, the dark matter reshaping remains possible.

This is an encouraging sign that despite remaining uncertainty, we can soon determine whether data from the real universe support – or perhaps refute – the simplest dark matter models.


Leicester Highlight: Tearing up the Disk: How Black Holes Accrete

Almost all galaxies have supermassive black holes at their centres. These holes grow because gas from the galaxy falls into them, a process which also makes the holes spin. In general the gas spirals slowly in towards the hole in a disk, initially lying in a plane which is at an angle to the hole’s spin equator. But near the hole, powerful forces drag the gas into this plane. This means that the gas flows through a warped disk, whose plane changes as it moves in. Until recently it was thought that this process occurred fairly smoothly, with a gentle warp gradually straightening the disk plane. However, work by Ogilvie in 1999 suggested that in some cases the warp could be much more abrupt, because the forces holding the disk together actually weaken in a warp. The disk would then break into a pair of disks: the outer one in the original misaligned plane, and the inner one aligned with the black hole spin.

leicester11.png

Figure 1. Simulation of disk tearing. Gas spirals towards a supermassive black hole in a plane which is highly inclined to the black hole spin. Close to the hole the gas is forced to orbit in the equatorial plane, and this breaks the disk. The inner disk tumbles in space, and eventually rotates partially in the opposite direction to the outer disk. Where the two disks touch, their gas collides. This removes the rotation that supports the gas against the black hole’s gravity, so it falls to a new orbit much closer to the hole. But there it again encounters gas moving in the opposite direction, so the process repeats. This produces rapid gas infall onto the black hole, making it grow rapidly, and producing an extremely luminous source of radiation at the centre of the host galaxy.

Only detailed work with a powerful supercomputer can check this possibility. Our work with DiRAC2 shows for the first time that it actually occurs in many realistic cases. The spin forces near the black hole break the central regions of tilted disks around spinning black holes into a set of distinct planes with only tenuous flows connecting them. These component disks then precess – their planes tumble in space – independently of each other. In most cases the continued precession of these disks eventually sets up partially counter-rotating gas flows, so that gas in one disk is moving almost oppositely to gas in a nearby disk. As these opposed flows interact, they drive rapid infall towards the black hole. The process can repeat itself as disks form and interact even closer to the black hole, producing a cascade of tumbling and precessing disks closer and closer to the hole. This is important because rapid gas infall makes the black hole grow rapidly. This in turn makes the infalling gas extremely hot and bright, accounting for the most luminous objects in the entire Universe.
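The spin forces referred to here are the general-relativistic frame-dragging torques. For orientation (a standard textbook expression, not taken from the paper itself), the Lense-Thirring precession rate of a tilted ring at radius R is:

```latex
% Lense-Thirring precession frequency for a tilted ring of radius R
% around a black hole of angular momentum J:
\Omega_{\mathrm{LT}} = \frac{2 G J}{c^{2} R^{3}}.
% The steep R^{-3} dependence means neighbouring rings precess at very
% different rates, which is why a tilted disk can tear into distinct,
% independently precessing planes rather than warp smoothly.
```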


MEISKIN:  Simulations of the Circumgalactic Medium

meiskin1.png

Figure 1. Environment of galaxy haloes.

meiskin2.png

Figure 2. Halo mass distribution.

We are using cosmological numerical simulations to predict the expected physical properties of the Circumgalactic Medium (CGM) in the environment of massive star-forming galaxies in the context of a ΛCDM cosmology. Our primary goal is to establish the gaseous flow pattern around the galaxies and the spatial extent to which winds impact the CGM, through comparison with observations. Since we wish to compare with observations, as a first step in this pilot project we need to identify the sites of galaxies in the simulation. We do this by identifying their dark matter haloes. This is not straightforward; it is akin to identifying distinct clouds in the sky. An illustration of the complex regions in which galaxy haloes reside is shown in Figure 1. Different halo identification methods produce different halo catalogues, and different simulation numerical methods produce different haloes. We compare the haloes found using two different algorithms applied to results from two simulation methods, to seek a range of halo masses for which the methods agree. Figure 2 shows agreement between these methods for haloes more massive than ten billion solar masses, corresponding to galaxy-size haloes. The flow fields around the haloes are currently being explored.
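The text does not name the two halo-finding algorithms used, but the flavour of such methods can be seen in the standard friends-of-friends approach, in which any two particles closer than a chosen linking length belong to the same group. A minimal, purely illustrative sketch:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def friends_of_friends(pos, linking_length):
    """Group particles into haloes: any two particles closer than the
    linking length belong to the same group (standard FoF definition)."""
    tree = cKDTree(pos)
    pairs = tree.query_pairs(linking_length, output_type='ndarray')
    n = len(pos)
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n, n))
    n_groups, labels = connected_components(adj, directed=False)
    return labels  # halo id for every particle

# Toy usage: random positions in a unit box, linking length set to
# 0.2 of the mean inter-particle separation (a conventional choice).
pos = np.random.default_rng(1).random((10000, 3))
mean_sep = 1.0 / 10000 ** (1.0 / 3.0)
labels = friends_of_friends(pos, 0.2 * mean_sep)
```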


PLANET

We have examined the evolution of astrophysical disc models for which the temperature decreases as a function of radius, leading to an angular velocity profile that varies with both radius and height. This work demonstrates for the first time that growth of the vertical shear instability in discs leads to a sustained turbulent flow whose associated Reynolds stress leads to outward angular momentum transport. The results may have application to the outer regions of proto-planetary discs, influencing mass accretion and the formation of planets.
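The Reynolds stress here is the correlation between radial and azimuthal velocity fluctuations. The standard dimensionless transport parameter (a conventional definition, assumed rather than quoted from the paper) is:

```latex
% Reynolds stress from correlated radial and azimuthal velocity
% fluctuations, normalised by the pressure to give the standard
% dimensionless transport parameter alpha:
\alpha = \frac{\langle \rho\, \delta v_{r}\, \delta v_{\phi} \rangle}{\langle P \rangle},
% with alpha > 0 corresponding to outward angular momentum transport.
```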

planet.png

Figure 1. Contours showing the growth of vertical velocity perturbations in a disk due to the development of the vertical shear instability.

We have examined the migration of low-mass planets embedded in magnetized discs with a layered structure, consisting of turbulent regions near the surface and a non-turbulent “dead zone” near the disc midplane. A planet migrates because it experiences a net torque with two contributions: a Lindblad torque, which drives inward migration, and a corotation torque, which slows or reverses migration. Our results show for the first time that the corotation torque in a dead zone is ineffective, the implication being that low-mass protoplanets will migrate rapidly as they grow to become super-Earths or Neptune-like planets. A paper describing these results is in preparation.

planet2.png

Figure 2. The left panel shows a 10 Earth-mass planet embedded in the dead zone of a disc. The right panel shows the log of the accretion stress as a function of radius and height. Values near the midplane correspond to α ~ 0.0001, and near the surface to α ~ 0.1.


Virgo:  Constraining the nature of dark matter using the COCO simulations

The standard cosmological model, the “Lambda cold dark matter” model (LCDM), is backed up by an impressive array of data covering a huge range of scales: from the entire observable universe, probed by measurements of temperature anisotropies in the microwave background radiation, to the scales of galaxy clusters and individual bright galaxies, sampled by large galaxy surveys. On smaller scales than these there is no strong evidence to support the standard model, yet it is on such scales that the nature of the dark matter is most clearly manifest. In the standard model, the dark matter consists of cold particles, such as the lightest supersymmetric particles. There are, however, models of particle physics that predict lighter particles, such as sterile neutrinos, that would behave as warm (WDM), rather than cold (CDM), dark matter. If the dark matter is warm, free streaming in the early universe would have erased primordial fluctuations below mass scales corresponding to dwarf galaxies. The abundance and properties of dwarf galaxies could then encode information about the nature of the dark matter.

14vigro

Figure 1. A slice through the simulation volume, with the intergalactic gas colour-coded from blue to green to red with increasing temperature. The inset images show details of a small region of the simulation centred on a spiral galaxy seen face-on.

The figure shows two simulations from a Virgo Consortium project called “Copernicus Complexio” (COCO). These show the structure that grows in a region of diameter approximately 150 million light years in model universes in which the dark matter is cold (left) or warm (right). The WDM model is chosen to have as large a free-streaming length as is allowed by observations of gas clouds seen in the early universe (the so-called “Lyman-alpha forest” in distant quasar sight-lines). There are about a hundred haloes in each volume with masses similar to that of the dark halo around the Milky Way galaxy.

Each of these is resolved with about 10 million particles, making it possible for the first time to obtain a good statistical sample of well-resolved subhaloes. On the scales apparent in this figure there is very little difference between the two models. On small scales, however, there are large differences: in CDM tens of thousands of subhaloes are visible; in WDM only a few tens are. In principle this difference should be detectable in observational studies.

The project made extensive use of the DiRAC-2 data centric facility because the simulations, with 13 billion particles each, require a machine that has both an exceptionally large memory per core and a large total memory.


CLOVER:  Charming Physics

Strongly interacting particles (hadrons) come in two classes: mesons, made of quarks and antiquarks, and baryons, consisting of three quarks. Quarks come in six varieties or flavours: up, down, strange and charm (u, d, s, c), plus the much heavier bottom and top quarks (not considered here).

Symmetries between the different quark flavours mean that particles group together in families called multiplets. The proton and neutron are part of a multiplet with 20 members, shown in the left figure. Particles with no charm quark are in the bottom layer, followed by a layer of particles with one charm quark and then with two charm quarks. At present there is very little experimental knowledge about the doubly-charmed particles in the top layer.

14clover

Figure 1. The octet multiplet (left) and a compilation of recent lattice determinations of baryon mass splittings (right).

We want to understand how particle masses are related to their position in the multiplet and to the masses of the quarks they contain. In computer simulations we are not limited to the quark masses nature has given us – we can see what would happen if the quarks had quite different masses. As an example, in the second figure we show how the masses of the particles in the lower slice of the multiplet change as we move between a situation with all quark masses equal (the crossing point of the fan) and the physical point (at the left edge of the graph).

While the main force between the quarks and antiquarks comes from QCD, there is also a contribution from the electromagnetic force (quantum electrodynamics, QED), which is usually left out of lattice calculations. We are also doing calculations with both forces included, to see what the effects of QED are. We can see clearly that mesons in which the quark and antiquark have the same sign of electrical charge become heavier than mesons in which the quark and antiquark have opposite charges, as you might expect from the fact that opposite charges attract and like charges repel.

Simulating two forces needs more computing power than just looking at one force, so DiRAC II is important in making all aspects of this project possible.


HOT QCD:  Extreme QCD: Towards Quantitative Understanding of the QCD Phase Diagram

hotqcd11.png

Figure 1. The temperature dependence of the conductivity of matter across the transition temperature.

Matter is usually composed of protons and neutrons, which consist of quarks bound tightly together by the Strong Interaction of particle physics. However, at incredibly large temperatures of a few trillion degrees Celsius, quarks become free and a new and poorly understood “Quark-Gluon Plasma” (QGP) phase is created. While the QGP is presumed to have existed soon after the Big Bang, it has also been produced in experiments where heavy nuclei (such as gold) are collided in particle accelerators at virtually the speed of light. This has been done at the Large Hadron Collider at CERN and at Brookhaven National Laboratory on Long Island, USA.

Because each nucleus is incredibly small (100 billion of them side by side would span a distance of 1 mm), the region of QGP created is correspondingly small. Due to this tiny size, it is impossible to place detectors inside this region – and, in any case, they wouldn’t work, because they’d instantly melt into the plasma phase! The plasma “fireball” also expands and cools incredibly rapidly, so it quickly returns to the normal state of matter in which quarks are tightly bound.

So to understand the QGP, physicists have to rely on observations of particles ejected from the fireball region out into the “normal” phase of matter and into their detectors and then “reverse engineer” the physics of the QGP. To make the connection from the detected particles back to the fireball, it is therefore essential to understand the QGP’s “transport” properties, i.e. how it expands and flows as a bulk material.

One such property is the electrical conductivity, which is the quantity this project has calculated. This requires supercomputers, such as those provided by the DiRAC Consortium, in order to simulate the many degrees of freedom of the Strong Interaction. Our results, shown in the figure, constitute the first calculation of the temperature dependence of the conductivity of matter across the transition temperature. In this figure, the transition temperature corresponds to around 180 MeV.
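For orientation, the link between the Euclidean correlators computed on the lattice and the conductivity runs through a spectral function; schematically (the standard Kubo relations, with convention-dependent numerical factors):

```latex
% Euclidean current-current correlator in terms of the spectral
% function rho(omega) at temperature T:
G(\tau) = \int_{0}^{\infty} \frac{d\omega}{2\pi}\; \rho(\omega)\,
          \frac{\cosh\!\left[\omega\left(\tau - \tfrac{1}{2T}\right)\right]}
               {\sinh\!\left(\tfrac{\omega}{2T}\right)},
% Kubo formula: the electrical conductivity is the zero-frequency
% slope of the spectral function (here summed over the three spatial
% components; numerical prefactors depend on convention):
\sigma = \lim_{\omega \to 0} \frac{\rho(\omega)}{6\,\omega}.
```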


HPQCD:  Light is right – working with up and down quarks at physical masses

hpqcd111.png

Figure 1. A meson annihilates to a lepton and antineutrino (via a virtual W boson).

The world of quarks and gluons, currently the most fundamental building blocks of matter known, is hidden from us. Quarks are never seen as free particles; instead we see their bound states, known as hadrons, in experiments such as those at CERN’s Large Hadron Collider. Accurate calculations with the theory that describes quark and gluon interactions, Quantum Chromodynamics (QCD), are critical to connect the world of quarks to that of hadrons in a quantitative way.

hpqcd112.png

Figure 2. Our results for the decay constants of the K and pi mesons as a function of the up/down quark mass. The results shown at 0.036 on the x-axis correspond to the real-world masses.

Calculations are made tractable through a numerical technique known as lattice QCD, but they are computationally very challenging. The numerical cost increases as the quark mass falls, and so one issue has been that of handling the up and down quarks, which have very small masses (a few percent of that of the proton) in the real world. In the past, calculations were done at several heavier masses and then an extrapolation to the physical point was performed.
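To illustrate what such an extrapolation looks like in practice, here is a toy sketch with invented numbers (not HPQCD's data; real analyses use chiral perturbation theory forms rather than a straight line):

```python
import numpy as np

# Toy chiral extrapolation: a decay constant computed at several
# heavier-than-physical quark masses, extrapolated to the physical
# point. All numbers are invented, purely for illustration.
m_q = np.array([0.10, 0.20, 0.30, 0.40])        # simulated quark masses
f_P = np.array([0.132, 0.139, 0.146, 0.153])    # decay constants there

# Leading-order fit: f(m) = f0 + c * m.
c, f0 = np.polyfit(m_q, f_P, 1)

m_phys = 0.036   # physical-point quark mass, as on Fig. 2's x-axis
print(f"extrapolated f = {f0 + c * m_phys:.4f}")
```

Working directly at the physical masses, as described next, removes the systematic error this extrapolation step introduces.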

With the computing resources of DiRAC phase II (the Data Analytic Cluster in Cambridge), we are now able to perform calculations with up and down quarks at their physically light masses, which means that more accurate results can be obtained. Our formalism is numerically very efficient and has particularly small discretisation errors.

Key results that we have obtained this way during 2013 are those of the decay constants of the π, K, B and Bs mesons. The decay constant parameterizes the probability of the quark and antiquark in the meson being at the same point and annihilating, for example to a W boson of the weak force, as seen in Fig. 1.
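For reference, the decay constant is defined through the axial-current matrix element (standard convention):

```latex
% Decay constant of a pseudoscalar meson P with four-momentum p,
% defined via the axial-vector current matrix element:
\langle 0 \,|\, A_{\mu} \,|\, P(p) \rangle = i\, f_{P}\, p_{\mu},
% which controls the leptonic decay rate, e.g.
\Gamma(P \to \ell \nu) \;\propto\; f_{P}^{2}\, |V_{q_1 q_2}|^{2}\,
   m_{\ell}^{2}\, m_{P} \left( 1 - \frac{m_{\ell}^{2}}{m_{P}^{2}} \right)^{2},
% where V_{q1 q2} is the relevant CKM matrix element.
```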

Our results for the decay constants (Fig. 2) are the world’s most accurate. They have enabled us to test the CKM matrix of couplings between quarks and the W boson with unprecedented accuracy. We have also improved the known-physics prediction of the rate of the very rare process Bs → μ+μ−, recently seen by the LHCb experiment. The sensitivity of this process to new physics makes a future comparison of the experimental and theoretical rates potentially very exciting.


UKQCD BSM:  Beyond the Standard Model – is there anything more than the Higgs?

The Standard Model of particle interactions describes the interactions of all the constituents of matter to an impressive degree of accuracy. One of the successes of this model is the unification of the electromagnetic and weak interactions into a new sector called the Electroweak sector. In this model, the Electroweak sector is characterised by a breaking of the SU(2) × U(1) gauge group, which explains why the photon is massless while the W and Z bosons (the mediators of the weak force) have a mass. The electroweak breaking is due to the widely known and notoriously elusive Higgs sector, which describes the interactions of a new particle, the Higgs boson. In addition to giving mass to the mediators of the weak force, the Higgs boson provides mass for ordinary fermionic matter (leptons and quarks). However, this elegant model is believed to be valid only up to a certain energy scale, above which new physics is bound to manifest itself.

ukqcd

Figure 1. The standard model of particle physics (left) and technicolor particles (right).

Many models have been proposed over the years to describe the physics at these new scales. Technicolor is the framework in which Electroweak symmetry breaking is due to the breaking of chiral symmetry in a new strong interaction. The model proposes a different answer to the origin of particle masses, by means of a new mechanism that generates mass for the leptons. These ideas are inspired by a similar mechanism that is already at work in the theory of the strong interactions, Quantum Chromodynamics (QCD). A fundamental requirement for any beyond-the-Standard-Model theory is that it does not spoil any lower-energy prediction, i.e. that it is compatible with current observations. This is a severe constraint, which in Technicolor is satisfied by the mechanism of walking, i.e. the slow running of the gauge coupling in an intermediate range of energies. This happens for near-conformal gauge theories. The question then becomes: is there a near-conformal gauge theory that can account for the observed Electroweak symmetry breaking?
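In equations, walking can be phrased in terms of the renormalisation-group running of the coupling (a standard formulation, not specific to this project):

```latex
% The gauge coupling g runs with the energy scale mu according to
% the beta function:
\mu \frac{d g}{d \mu} = \beta(g).
% "Walking" means beta(g) stays close to zero over a wide range of
% scales, so g remains nearly constant (approximate conformality)
% before chiral symmetry breaking finally sets in at low energies.
```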

The resources offered by the DiRAC consortium allowed us to perform Monte Carlo simulations of a theory that has been conjectured to be a candidate realisation of the Technicolor framework. The model, called Minimal Walking Technicolor, is an SU(2) gauge theory coupled to two adjoint Dirac fermion flavours. We proved that (near-)conformality can be rephrased as a mass spectrum with constant mass ratios between the particles as the constituent fermion mass goes to zero, and we observed this feature numerically.


UKQCD DWF

dwf.png

Figure 1. This figure displays the new result for the neutral kaon mixing amplitude BK in green. This new result was calculated in the continuum limit and directly at physical quark masses. The line and the curve represent two fits to previous data points which were used to extrapolate to the physical quark masses. With our new algorithm and greater machine power we have been able both to eliminate the systematic extrapolation error and to reduce the statistical error on this important result.

We have carried out simulations of lattice QCD including up, down and strange quark loops, using a chiral fermion discretisation which is unique in fully preserving the meaning of left-handed spin orientation. The W boson only interacts with left-handed fermions, so preserving this symmetry is deeply important to the calculation of the complex weak matrix elements required to support experiments such as the Large Hadron Collider.

One important example is our determination of the two-pion decay amplitudes of the kaon. This is the process in which the asymmetry between matter and antimatter was discovered. Our calculations, which won the 2012 International Ken Wilson Lattice Award, involve mixing between many operators in the effective weak Hamiltonian. The calculation is only tractable with chiral fermions, but gives rise to a wholly new constraint on the difference between matter and antimatter in the standard model. Such constraints enable experiments to search for the effects of undiscovered physics beyond the standard model. Our calculation was updated in journal papers, one of which observed in detail a numerical cancellation that explains a long-standing puzzle about the likelihood of decay into different pion charges, known as the ΔI = 1/2 rule – a case of numerical simulation leading to deeper understanding.

This progress over the last year has only been possible thanks to the powerful DiRAC BlueGene/Q supercomputer in Edinburgh, which our scientists helped IBM to develop. It allowed us to simulate chiral fermions at the quark masses seen in nature, with two spacings of the space-time grid (including all effects up to energy scales of 1.75 and 2.3 GeV). This has allowed us to predict continuum physics with the complete elimination of systematic errors arising from mass extrapolation. The calculation was shared between UKQCD and our US-Japan international collaborators in the Riken-Brookhaven-Columbia collaboration, and also made use of BlueGene/Q systems at Argonne and Brookhaven National Laboratories in the US.

Our calculations on neutral kaon mixing are also relevant to CP violation. We presented both the precise continuum results for the standard model process at physical quark masses and for processes that only arise beyond the standard model.


Nuclear Physics

np.png

Figure 1. 40Ti one-neutron removal spectral strength distribution associated with Jπ = 1/2+.

It has proven extremely difficult to describe nuclei directly from a set of nuclear forces, apart from the very lightest elements. In the last decade there has been great progress, supported by computational advances, in various methods for tackling more complex nuclei. Many of these are specialised to systems with even numbers of neutrons and protons, or to closed-shell nuclei, where many calculations simplify. Alternative phenomenological approaches, such as the nuclear shell model, can typically only work within a single major shell.

The complications in the problem arise from the structure of the nuclear force, which is non-central and spin-dependent, as well as from the superfluidity of atomic nuclei. One thus needs to work in large model spaces, take care of the superfluid Cooper pairs, and work with the complex operator structure of the nuclear force. One natural formalism that does all of this is based on the Gor’kov (Green’s function) approach to superfluid systems. Barbieri et al. have recently introduced an approach based on the Gor’kov pattern that works for open-shell nuclei, away from shell closures. A crucial issue for ab initio approaches concerns the ability to perform numerical calculations in increasingly large model spaces, with the aims of thoroughly checking convergence and of constantly extending the reach to heavier systems.

A long-standing problem with self-consistent calculations of one-body propagators in finite systems concerns the rapid increase in the number of poles generated at each iterative step. This fast growth is expected, as the Lehmann representation of one-body Green’s functions develops a continuous cut along the real energy axis in connection with unbound states. This cut is discretised by a growing number of discrete energy states as the size of the model space is increased. In practical calculations, one needs to limit the number of discretised poles. We use Lanczos algorithms to project the energy denominators onto smaller Krylov spaces.
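A minimal sketch of the Lanczos idea (the generic algorithm, not the group's production code): the large pole space is replaced by a small Krylov space whose tridiagonal matrix preserves the low-energy information.

```python
import numpy as np

def lanczos(matvec, v0, k):
    """Project a Hermitian operator onto a k-dimensional Krylov space.
    Returns the tridiagonal matrix T, whose eigenvalues approximate the
    extremal poles of the full operator with far fewer states."""
    n = len(v0)
    V = np.zeros((k, n))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    V[0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = matvec(V[j])                 # apply the operator
        alpha[j] = V[j] @ w
        w -= alpha[j] * V[j]             # three-term recurrence
        if j > 0:
            w -= beta[j - 1] * V[j - 1]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            V[j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Toy usage: compress a 2000-state representation to 30 Lanczos poles
# while approximately preserving the low-energy strength.
rng = np.random.default_rng(2)
A = rng.standard_normal((2000, 2000))
H = (A + A.T) / 2
T = lanczos(lambda x: H @ x, rng.standard_normal(2000), 30)
print(np.linalg.eigvalsh(T)[:5])   # approximate lowest poles
```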

As a result of such calculations we can obtain spectral strengths in nuclei that are not easily accessible by other theoretical means. As an example, Figure 1 shows the spectral strength in 40Ti, computed in model spaces of up to 13 major shells.

 
