Expanding the reach of the most advanced high performance computing (HPC) technologies and capabilities by bringing them From Exascale to Everyscale is a critical part of Lenovo’s commitment to create a more inclusive, insightful and sustainable digital society; a world with Smarter Technology for All.

For Harvard University’s Faculty of Arts and Sciences Research Computing unit (FASRC), “smarter” means energy-saving technology that cools servers without warming the planet.

FASRC was established in 2007 with the founding principle of facilitating the advancement of complex research by providing leading-edge computing services.

FASRC recently announced its largest HPC cluster, Cannon, named after the legendary American astronomer Annie Jump Cannon. The FASRC Cannon cluster is a large-scale HPC system supporting modeling and simulation across science, engineering, social science, public health, and education for more than 600 lab groups and over 4,500 Harvard researchers.

Faster and more efficient data processing is critical to the thousands of researchers working to improve earthquake aftershock forecasting using machine learning, model black holes using Event Horizon Telescope data, map invisible ocean pollutants, identify new methods for flu tracking and prediction, and develop a new statistical analysis technique to better understand the details of star formation.

Leveraging Lenovo and Intel’s long-standing collaboration to advance HPC and artificial intelligence (AI) in the data center, FASRC sought to refresh its previous cluster, Odyssey.

FASRC wanted to keep the processor count high and increase the performance of each individual processor, knowing that 25% of all calculations run on a single core. Liquid cooling was also paramount, both to support today's increased performance levels and to provide the extra capacity needed to scale in the future.

Cannon, comprising more than 30,000 2nd Gen Intel Xeon Scalable processor cores, includes Lenovo's Neptune liquid cooling technology, which takes advantage of water's greater heat-conducting efficiency compared with air. Critical server components can now operate at lower temperatures, allowing for greater performance and energy savings.

The enhanced performance enabled by the new system reflects Lenovo's focus on bringing exascale-level technologies to a broad universe of users everywhere – what Lenovo has coined “From Exascale to Everyscale.”

Though the Cannon storage system is spread across multiple locations, the primary compute is housed in the Massachusetts Green High Performance Computing Center, a LEED Platinum-certified data center.

The Cannon cluster includes 670 Lenovo ThinkSystem SD650 servers featuring Lenovo Neptune direct-to-node water cooling and Intel Xeon Platinum 8268 processors, with 24 cores per socket and 48 cores per node.
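
As a back-of-the-envelope check, those per-node figures are consistent with the aggregate core count quoted earlier. The minimal Python sketch below simply multiplies the numbers from this article; the two-sockets-per-node figure is an assumption implied by 24 cores per socket and 48 cores per node, not stated outright.

    # Rough check of the Cannon figures quoted in this article (illustrative only).
    nodes = 670                # Lenovo ThinkSystem SD650 servers
    cores_per_socket = 24      # Intel Xeon Platinum 8268
    sockets_per_node = 2       # assumption: implied by 48 cores per node
    cores_per_node = cores_per_socket * sockets_per_node    # 48
    total_cores = nodes * cores_per_node                     # 32,160
    print(f"{total_cores:,} cores")  # in line with "more than 30,000 cores"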

Each Cannon node is now several times faster than any previous cluster node, with jobs like geophysics models of the Earth running three to four times faster than on the previous system. In the first four weeks of production operation, Cannon completed over 4.2 million jobs, consuming more than 21 million CPU hours.
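
Taken at face value, those first-month figures also imply an average of roughly five CPU-hours per job; a minimal sketch of that arithmetic, using only the approximate numbers quoted above:

    # Average CPU-hours per job in Cannon's first four weeks (approximate).
    jobs = 4_200_000           # "over 4.2 million jobs"
    cpu_hours = 21_000_000     # "more than 21 million CPU hours"
    print(cpu_hours / jobs)    # 5.0 CPU-hours per job, on average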

“Science is all about iteration and repeatability. But iteration is a luxury that is not always possible in the field of university research because you are often working against the clock to meet a deadline,” says Scott Yockel, director of research computing at Harvard University’s Faculty of Arts and Sciences.

“With the increased compute performance and faster processing of the Cannon cluster, our researchers now have the opportunity to try something in their data experiment, fail, and try again. Allowing failure to be an option makes our researchers more competitive.”

The additional cores and enhanced performance of the system are also attracting researchers from other departments at the university, such as Psychology and the School of Public Health, who are increasingly leveraging its machine learning capabilities to accelerate and improve their discoveries.

Intel, Lenovo and some of the world’s biggest names in HPC are creating an exascale visionary council dedicated to bringing the advantages of exascale technology to users of all sizes, far beyond today’s top-tier government and academic installations.

As part of its work to drive broader adoption of exascale technology across the HPC community, the council, named Project Everyscale, will address the range of component technologies being developed to make exascale computing possible.

Areas of focus will touch all aspects of HPC system design, from alternative cooling technologies and efficiency to density, racks, storage, the convergence of traditional HPC and AI, and more. The visionaries on the council will bring to bear their insights as customers, working together to set the direction for exascale innovation that everyone can use and to paint a cohesive picture of the industry’s future.