In the race to exascale computing, it looks like the Department of Energy (DoE) will be the first to cross the finish line. In a couple of years, the department plans to bring online a supercomputer, known as Aurora, capable of performing a quintillion calculations per second, roughly five times faster than the current champ. But beyond beating China and everyone else to the exascale summit, Aurora will be significant for incorporating artificial intelligence into its repertoire while pursuing a range of scientific and real-world problems.

DoE, Intel, and subcontractor Cray recently announced a $500 million contract to build Aurora, which they expect to deliver to Argonne National Laboratory outside Chicago in 2021. Its one-exaflop speed ("flops" stands for floating point operations per second) puts it well ahead of the current leaders on the Top 500 list of supercomputers. Those leaders are Oak Ridge National Laboratory's 143.5-petaflop IBM Summit and Lawrence Livermore National Laboratory's 94.6-petaflop Sierra, a slightly smaller system built with the same components. Last year, those two systems surpassed China's Sunway TaihuLight, which had held the top spot for two years in what has become a back-and-forth battle for high-performance computing (HPC) supremacy between the two countries.
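
For a rough sense of the gap those figures imply, here is a minimal back-of-the-envelope sketch in Python that converts the published numbers into a common unit; the roughly 200-petaflop theoretical peak figure for Summit is an assumption drawn from Top500 listings, not from this article.

```python
# Back-of-the-envelope comparison of Aurora's one-exaflop target with Summit's
# Top500 numbers. The ~200-petaflop peak value is an assumption, not a figure
# cited in the article.

EXAFLOP_IN_PETAFLOPS = 1_000          # 1 exaflop = 1,000 petaflops = 10^18 flops

aurora_target_pf = 1 * EXAFLOP_IN_PETAFLOPS   # Aurora's 1-exaflop design goal
summit_linpack_pf = 143.5                     # Summit's measured Linpack score
summit_peak_pf = 200.8                        # assumed theoretical peak (Top500)

print(f"vs. Summit (measured Linpack): {aurora_target_pf / summit_linpack_pf:.1f}x")  # ~7.0x
print(f"vs. Summit (theoretical peak): {aurora_target_pf / summit_peak_pf:.1f}x")     # ~5.0x
```

Measured against Summit's Linpack score, Aurora's target works out to roughly a sevenfold jump; measured against Summit's assumed theoretical peak, it comes to about the "five times faster" cited above.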

Aurora will be put to work on projects such as extreme-scale cosmological simulations, new approaches to predicting drug response, and the design of more efficient organic solar cells, DoE said in announcing the system. Aurora is being built with Intel processors, memory, and other technologies designed to let HPC and artificial intelligence work together at supercomputing scale.

“Argonne’s Aurora system is built for next-generation artificial intelligence,” said Argonne Director Paul Kearns, “and will…address real world problems, such as improving extreme weather forecasting, accelerating medical treatments, mapping the human brain, developing new materials, and further understanding the universe–and that is just the beginning.”

HPC and AI have been on the path toward convergence for a while now, with the introduction of AI, particularly in the form of machine learning, promising to open new frontiers for supercomputing, which is best known for crunching mathematical problems such as simulating the decay of nuclear weapons without having to physically test them. Machine learning, for instance, can expand the possibilities for working with big data and accelerate the use of supercomputing in the cloud, among other things, HPCwire notes. It could also bring new programming languages and types of algorithms to HPC, as well as new approaches to simulations.

One of AI’s strengths is that it does more with less, delivering results that would otherwise require far more computing power. It can, for example, allow medical teams to test for diseases in remote locations with the equivalent of a smartphone, running programs that would otherwise need a high-powered workstation or an internet connection to the cloud. It can add processing to small, low-power devices on the Internet of Things, add an extra layer of cyber protection to mobile devices, and allow swarms of small drones to be controlled from a tablet. Combining machine learning and other AI techniques with the world’s most powerful supercomputer will open up a lot of possibilities.

Aurora’s mix of AI and HPC is also another indication of how supercomputing, for decades the final word on computing prowess, is adapting to incorporate new technologies. The National Aeronautics and Space Administration and Google are holding an HPC/quantum computing showdown this year, ostensibly to determine “quantum supremacy,” but also to find out what the two computing approaches, which operate very differently, can learn from each other.
