Scientists at the Swiss university ETH Zurich and IBM Research, in collaboration with the Technical University of Munich and Lawrence Livermore National Laboratory (LLNL), have set a new supercomputing record in fluid dynamics, using 6.4 million threads on LLNL’s 96-rack “Sequoia” IBM BlueGene/Q, one of the fastest supercomputers in the world.
The simulations resolved unique phenomena associated with clouds of collapsing bubbles which have several potential applications including:
* Improving the design of high pressure fuel injectors and propellers;
* Shattering kidney stones using the high pressure of the collapsing bubbles; and
* An emerging therapeutic modality for cancer treatment that uses bursting bubbles to destroy cancerous cells and deliver drugs precisely.
The team of scientists performed the largest simulation to date in fluid dynamics, employing 13 trillion cells and reaching a sustained performance of 14.4 petaflops on Sequoia, 73 percent of the supercomputer’s theoretical peak and a level unprecedented for flow simulations.
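As a rough, illustrative sanity check (not part of the original report), the quoted sustained rate and peak fraction imply a theoretical machine peak of roughly 20 petaflops, consistent with Sequoia’s class:

```python
# Back-of-envelope check of the performance figures quoted above (illustrative only)
sustained_pflops = 14.4            # reported sustained performance
fraction_of_peak = 0.73            # reported fraction of theoretical peak
implied_peak = sustained_pflops / fraction_of_peak
print(f"implied theoretical peak: {implied_peak:.1f} petaflops")  # roughly 19.7
```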
The simulations resolved 15,000 bubbles, a 150-fold improvement over previous research, with a 20-fold reduction in time to solution. These are crucial improvements that pave the way for investigating a complex phenomenon called cloud cavitation collapse.
Cloud cavitation collapse occurs when vapour cavities, or bubbles, form in a liquid due to changes in pressure; when the bubbles implode, they generate damaging shockwaves that can be harnessed for applications in healthcare and industrial technology.
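The dynamics of a single collapsing bubble are classically described by the Rayleigh–Plesset equation. The sketch below integrates it for one spherical bubble in water subjected to a sudden pressure rise; it is only an illustration of the underlying physics, with assumed parameter values, and not the compressible multiphase solver used in the study, which resolved thousands of interacting bubbles.

```python
# Illustrative sketch only: integrates the classical Rayleigh-Plesset equation for a
# single spherical bubble. All parameter values below are assumptions for illustration.
import numpy as np
from scipy.integrate import solve_ivp

rho   = 1000.0      # liquid density [kg/m^3]
mu    = 1.0e-3      # dynamic viscosity [Pa*s]
sigma = 0.072       # surface tension [N/m]
p_inf = 10.0e5      # far-field pressure after a sudden pressure rise [Pa]
p0    = 1.0e5       # initial ambient pressure [Pa]
R0    = 1.0e-4      # initial bubble radius [m]
gamma = 1.4         # polytropic exponent of the gas inside the bubble

def rayleigh_plesset(t, y):
    R, Rdot = y
    # polytropic gas pressure, starting from equilibrium at R0
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * gamma)
    Rddot = ((p_gas - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / rho
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 1.0e-5), [R0, 0.0],
                method="LSODA", max_step=1.0e-9)
print(f"minimum radius reached: {sol.y[0].min():.3e} m")
```

Even this single-bubble model shows the hallmark of cavitation collapse: the radius shrinks sharply within microseconds before rebounding, which is what produces the damaging pressure peaks.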
The violent and short time scales of the collapse have made a quantitative understanding elusive for both experimentalists and computational scientists. And while supercomputers have long been considered a solution, large-scale flow simulations have not run effectively on massively parallel architectures.
“In the last 10 years we have addressed a fundamental problem of computational science: the ever-increasing gap between hardware capabilities and their effective utilisation to solve engineering problems,” says Petros Koumoutsakos, director of the Computational Science and Engineering Laboratory at ETH Zurich, who led the project.
He adds: “We have based our developments on finite volume methods, perhaps the most established and widespread method for engineering flow simulations. We have also invested significant effort in designing software that takes advantage of today’s parallel computer architectures. It is the proper integration of computer science and numerical mathematics that enables such advances.”
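For readers unfamiliar with the approach Koumoutsakos mentions, a finite volume method evolves cell-averaged quantities by exchanging numerical fluxes across cell interfaces. The sketch below shows the idea in its simplest form, a first-order upwind scheme for 1D linear advection; it is a didactic illustration only, not the compressible multiphase scheme used in the actual simulations.

```python
# Minimal finite volume sketch: first-order upwind scheme for 1D linear advection
# on a periodic domain. Didactic example only, with assumed parameter values.
import numpy as np

N, L, a, cfl = 200, 1.0, 1.0, 0.5       # cells, domain length, advection speed, CFL number
dx = L / N
x = (np.arange(N) + 0.5) * dx           # cell centres
u = np.exp(-200 * (x - 0.3) ** 2)       # initial cell averages: a Gaussian pulse
dt = cfl * dx / a

t, t_end = 0.0, 0.4
while t < t_end:
    # upwind flux at each cell's left interface (periodic boundaries);
    # for a > 0 the upwind state is the left neighbour's cell average
    F = a * np.roll(u, 1)
    # conservative finite volume update: each cell average changes by the net flux
    u = u - dt / dx * (np.roll(F, -1) - F)
    t += dt

print("pulse peak after transport: %.3f" % u.max())
```

The same cell-average and interface-flux structure carries over to the full compressible flow equations; what changes is the flux function and the reconstruction of the states at cell interfaces.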
“We were able to accomplish this using an array of pioneering hardware and software features within the IBM BlueGene/Q platform that allowed the fast development of ultra-scalable code which achieves an order of magnitude better performance than the previous state of the art,” says Alessandro Curioni, head of the mathematical and computational sciences department at IBM Research – Zurich.
“While the Top500 list will continue to generate global interest, the applications of these machines and how they are used to tackle some of the world’s most pressing human and business issues more accurately quantifies the evolution of supercomputing.”
These simulations are one to two orders of magnitude faster than any previously reported flow simulation. The previous major milestone came earlier this year, when a team at Stanford University broke the 1-million-core barrier, also on Sequoia.