Led by Dr Ed Bennett
We set out to develop a solution for Sergey Yurchenko (UCL) for matrix diagonalization, i.e. computing eigenvalues and eigenvectors. The goal is a GPU diagonalizer for double-precision real, symmetric, dense (i.e. non-sparse), diagonally dominant matrices, which would be very efficient for matrices of dimension N = 200,000–400,000 (and below) and would work for N up to 1,000,000. It should output eigenvalues and eigenvectors for at least 50% of the roots (counting from the bottom) or, ideally, all of them.
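The target computation can be sketched on the CPU at toy size. This is not the GPU solver itself, just a minimal NumPy illustration (assuming a symmetric, diagonally dominant test matrix built for the example) of returning the bottom 50% of the spectrum:

```python
import numpy as np

# Toy size; the real problem targets N ~ 2e5-1e6.
N = 8
rng = np.random.default_rng(0)

# Build a real, symmetric, diagonally dominant dense matrix.
A = rng.standard_normal((N, N))
A = (A + A.T) / 2           # symmetrize
A += N * np.eye(N)          # boost the diagonal to dominate each row

# eigh returns eigenvalues in ascending order for symmetric input,
# so the "bottom 50% of the roots" is simply the first N//2 pairs.
w_all, v_all = np.linalg.eigh(A)
k = N // 2
w, v = w_all[:k], v_all[:, :k]

# Sanity check: each retained pair satisfies A v_i = w_i v_i.
for i in range(k):
    assert np.allclose(A @ v[:, i], w[i] * v[:, i])
```

At production scale a dense symmetric eigensolver such as ScaLAPACK's or a cuSOLVER-based equivalent plays the role of `np.linalg.eigh` here.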
This should be generally applicable and not limited to Sergey’s research code.
None of the team has worked with Sergey’s research code before. Most of us have looked into how GPU development is done; one member has done a Udacity course on it, but none have any practical experience of writing any sort of production code targeting GPUs. Ed’s experience is primarily in lattice gauge theory, making use of MPI in C and Fortran, as well as Python. Mark and Chenna’s backgrounds are in computational engineering, including finite element and finite volume methods for computational electromagnetics and computational fluid dynamics. Colin’s background is in remote sensing, robotics, and data management.
The target for acceleration is the diagonalization of very large matrices (N×N with N = 2×10^5 to 10^6), i.e. finding their eigenvalues and eigenvectors.
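To illustrate the scale: even storing one dense double-precision matrix at these dimensions far exceeds the memory of a single GPU, which is why the problem size matters so much. A back-of-envelope sketch (assuming 8 bytes per float64 entry and no packed storage; eigenvector output roughly doubles the footprint):

```python
def dense_matrix_bytes(n: int) -> int:
    """Storage for one dense n x n float64 matrix (8 bytes per entry)."""
    return n * n * 8

# The dimensions quoted in the problem statement.
for n in (2 * 10**5, 4 * 10**5, 10**6):
    print(f"N = {n:>9,}: {dense_matrix_bytes(n) / 1e12:.2f} TB")
# N =   200,000: 0.32 TB
# N =   400,000: 1.28 TB
# N = 1,000,000: 8.00 TB
```

This is why the matrix must be distributed across many GPUs or streamed through device memory rather than held resident on one card.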
Learning and sharing experience was an important part of the process for the team, since previous GPU experience varied. The problem was intended primarily as a learning exercise, and it built the team's confidence in working with ScaLAPACK, BLAS, cuBLAS and CUDA source. Pair programming allowed unfamiliar libraries to be understood quickly, which reduced the iteration time of the build-and-test process. The build setup and workflows will be useful for future code development.