Are they the future for your research?
What is a GPU?
August 31, 1999 marked the introduction of the Graphics Processing Unit (GPU) to the PC industry. The technical definition of a GPU is “a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second.” Its arrival sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU-powered deep learning ignited modern AI, the next era of computing, with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world.
Architecturally, a CPU is composed of just a few cores with lots of cache memory that can handle a few software threads at a time. In contrast, a GPU is composed of hundreds of cores that can handle thousands of threads simultaneously. The ability of a GPU with 100+ cores to process thousands of threads can accelerate some software by 100x over a CPU alone. What’s more, the GPU achieves this acceleration while being more power- and cost-efficient than a CPU.
In a nutshell: a CPU devotes a lot of its chip area to single-threaded performance, whereas a GPU largely ignores single-threaded performance and maximises overall multi-threaded performance.
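To make that contrast concrete, here is a minimal, illustrative CUDA C/C++ sketch (not taken from any DiRAC code): adding two arrays, where the commented-out CPU version walks the data with a single thread while the GPU kernel launches roughly a million lightweight threads, one per element.

    // Illustrative sketch only: vector addition with one GPU thread per element.
    // Build with NVIDIA's compiler, e.g.  nvcc vector_add.cu -o vector_add
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
        if (i < n) c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                 // about a million elements
        const size_t bytes = n * sizeof(float);

        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);          // unified memory, visible to CPU and GPU
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // CPU version: one thread does all the work.
        // for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];

        // GPU version: launch enough 256-thread blocks to cover every element.
        const int threadsPerBlock = 256;
        const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        add<<<blocks, threadsPerBlock>>>(a, b, c, n);
        cudaDeviceSynchronize();               // wait for the GPU to finish

        printf("c[0] = %f\n", c[0]);           // expect 3.000000
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

The interesting line is the kernel launch: the GPU schedules those thousands of threads across its many cores, which is exactly the multi-threaded throughput described above.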
What is the big fuss about GPUs?
In 2006, the creation of NVIDIA’s CUDA® programming model and Tesla® GPU platform opened up the parallel-processing capabilities of the GPU to general-purpose computing. A powerful new approach to computing was born.
GPU computing is the most accessible and energy efficient path forward for HPC and data centers. Today, NVIDIA powers the world’s fastest supercomputer, as well as the most advanced systems in Europe and Japan. In the U.S., Oak Ridge National Labs introduced Summit, now the world’s smartest and most powerful supercomputer. 27,000 NVIDIA Volta Tensor Core GPUs accelerate Summit’s performance to more than 200 petaflops for HPC and 3 exaops for AI.

Artificial intelligence is the use of computers to simulate human intelligence. AI amplifies our cognitive abilities, letting us solve problems where the complexity is too great, the information is incomplete, or the details are too subtle and require expert training. Learning from data, a computer’s version of life experience, is how AI evolves. GPU computing powers the computation required for deep neural networks to learn to recognize patterns from massive amounts of data. This new, supercharged mode of computing sparked the AI era.
The AI race is on. Deep learning breakthroughs no longer come from scientific and research labs alone. Today, in trillion-dollar industries like transportation, healthcare, and manufacturing, companies are using AI to transform the ways they do business. Self-driving cars, intelligent medical imaging systems, and autonomous factory robots have moved quickly from ideas to reality. And it’s only the beginning.
GPUs and DiRAC
Currently the DiRAC family of supercomputers has a small provision of GPUs, as part of the Wilkes2 cluster at Cambridge University. This amounts to a 13% share for DiRAC users, which is due to increase soon. At Edinburgh University, under the supervision of Dr Peter Boyle, the GRID project is procuring a new Volta-based system with improved GPU interconnect via NVLink.

Wilkes2 GPU system at Cambridge
As a precursor to DiRAC Day back in September, we at DiRAC organised our first hackathon. With support from NVIDIA, Swansea University and Cambridge University, our first 3-day GPU hackathon took place. Over the three days, teams of researchers, supported by an NVIDIA mentor and the RSEs from Swansea and DiRAC, took a good first step in GPU development, with some teams achieving a speedup of 10x. It was a great success, as reported by the AREGPU team, and others.

AREGPU results
Interested? Then have a go
With the support of NVIDIA, there are two free courses available to anyone interested: OpenACC 2x in 4 Steps and Accelerating Applications with CUDA C/C++. These courses include an online virtual development environment, so you need nothing apart from curiosity. If you are still interested after these two courses and would like to put what you have learnt into practice on your own codes, then why not apply to be part of the GPU development project? This project gives you access to the DiRAC GPU system at Cambridge, providing the facilities and experience to prepare you for any future DiRAC project application.
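To give a flavour of the directive-based approach the OpenACC course teaches, here is a minimal, hypothetical sketch (not course material): the same kind of loop offloaded to the GPU with a single pragma, leaving the compiler to manage the data movement. It assumes an OpenACC-capable compiler such as NVIDIA’s, invoked with an -acc flag.

    /* Illustrative OpenACC sketch only: the pragma offloads the loop to the GPU. */
    /* Build with an OpenACC compiler, e.g.  nvc -acc vector_add.c -o vector_add  */
    #include <stdio.h>

    #define N (1 << 20)   /* about a million elements */

    int main(void)
    {
        static float a[N], b[N], c[N];   /* static: keeps the arrays off the stack */

        for (int i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        /* One directive asks the compiler to run the loop in parallel on the
           GPU and to copy the arrays to and from device memory for us.       */
        #pragma acc parallel loop copyin(a[0:N], b[0:N]) copyout(c[0:N])
        for (int i = 0; i < N; ++i) {
            c[i] = a[i] + b[i];
        }

        printf("c[0] = %f\n", c[0]);     /* expect 3.000000 */
        return 0;
    }

Without the -acc flag the pragma is simply ignored and the loop runs serially on the CPU, which makes this an easy, incremental way to start accelerating an existing code.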
DiRAC and NVIDIA are jointly hosting a GPU webinar in January; details to be announced.