Does your brain light up at the mention of high-end graphics games? Are you particular about choosing the best video card or gaming laptop before spending hours playing Assassin's Creed Unity, Skyrim, Far Cry 3, Crysis 3, GTA, and the Call of Duty series? Then you may be surprised to learn that the hardware powering your games, your geeky graphics card, is also a tool for running complex mathematical computations. Did that just kill your interest? Well, if we hypothetically believe in a Mathematical Universe, then graphics cards neatly mark the point where gaming converges with mathematics.
Fair enough? Now, putting the philosophy aside, a group of researchers from the Extreme Computing Research Center at KAUST has literally used these graphics cards to solve systems of simultaneous equations involving enormous numbers of variables. According to Professor David Keyes of the Department of Applied Mathematics and Computational Science, such problems appear in statistics, optimization, electrostatics, chemistry, and astronomical calculation, where scientists depend on heavy computation and results are at the mercy of execution time and energy consumption.
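To make "simultaneous equations with many variables" concrete: such a system can be written as a matrix equation Ax = b and handed to a numerical solver. The two-variable sketch below is purely illustrative (it is not the KAUST team's solver, which targets vastly larger systems on GPUs):

```python
import numpy as np

# A tiny system of simultaneous equations:
#   3x + 2y = 12
#    x + 4y = 14
A = np.array([[3.0, 2.0],
              [1.0, 4.0]])
b = np.array([12.0, 14.0])

# Direct dense solve; large-scale scientific codes do the same thing
# with millions of unknowns, which is where GPU acceleration matters.
x = np.linalg.solve(A, b)
print(x)  # [2. 3.]
```

The cost of this step grows roughly with the cube of the number of unknowns for dense systems, which is why execution time and energy dominate at scientific scale.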
This is exactly what Keyes and his team set out to change. Graphics processing units (GPUs) are more energy efficient than conventional processors because they devote most of their hardware to computation rather than to general-purpose control logic. However, writing the supporting software proved a challenge, one met by the solver design of Ali Charara, a Ph.D. student. In Professor Keyes' words, Ali gained much of the necessary expertise during an internship at NVIDIA, which ultimately let him strike the right trade-off between memory storage and the number of processors.
Hatem Ltaief, a Senior Research Scientist on the team, explained that the solver requires no extra memory and processes the data directly. Put simply, the solver recasts the sequential, column-based data into a triangular matrix layout. The upgraded solver will soon be added to NVIDIA's next GPU library release. The corresponding research paper was published at the European Conference on Parallel Processing.
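The "no extra memory" idea can be sketched with a classic in-place triangular solve: the right-hand side vector is overwritten with the solution, so no auxiliary arrays are allocated. This CPU sketch only illustrates the in-place principle; it is an assumption for exposition, not the team's GPU implementation:

```python
import numpy as np

def forward_substitution_inplace(L, b):
    """Solve L x = b for lower-triangular L, overwriting b with x.

    No scratch arrays are allocated: each entry of b is updated in
    place, mirroring in spirit how an in-place solver avoids extra
    memory. Illustrative only; not the KAUST solver itself.
    """
    n = L.shape[0]
    for i in range(n):
        for j in range(i):
            b[i] -= L[i, j] * b[j]  # subtract already-solved terms
        b[i] /= L[i, i]             # divide by the diagonal pivot
    return b

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 1.0, 5.0]])
b = np.array([4.0, 8.0, 25.0])
forward_substitution_inplace(L, b)
print(b)  # [2. 2. 3.]
```

On a GPU, the same triangular structure lets many independent updates run in parallel, which is where the energy and time savings come from.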