
Jun 5, 2021

Supercomputing technology has indelibly changed how we approach complex issues in our world, from weather forecasting and climate modeling to protecting our nation from cyberattacks. All of the world's most capable supercomputers now run on Linux, and with the 30th anniversary of the creation of Linux fast approaching this summer, it's an important moment to consider how the US can strengthen its advanced cyberinfrastructure and invest in the next generation of supercomputers.
Although supercomputers were once a rarity, these high-performance machines now have a ubiquitous presence in our lives, whether or not we're aware of it. Everything from the design of water bottles to accelerating vaccine research for COVID-19 is made possible by the phenomenal capabilities of supercomputers. The ability of these machines to model and solve complex problems has become an essential backbone of global invention and innovation, providing economic benefits as well as enabling important scientific breakthroughs. Yet as future emergencies and problems become more unpredictable and more complex, the technology, and especially American supercomputers, must catch up to the global competition. To truly improve our national competitiveness, we must increase investment in strategic computing technologies and make significant efforts to democratize the use of supercomputers.
Revolutionary Leap Forward
Decades ago, the Linux supercomputing movement was a revolutionary leap forward from available computing technologies. I built the first Linux supercomputer, named Roadrunner, for about $400,000. Earlier attempts at clusters of Linux PCs, such as Beowulf, existed, but they lacked important system components that distinguish supercomputers from a pile of computers. While Beowulf clusters could solve some problems that divided neatly into independent tasks, the technology didn't yet achieve fast communication among processors, which was needed to support the large set of scientific applications that run on supercomputers. In contrast, Roadrunner would later become a node on the National Technology Grid, allowing researchers to access supercomputers for large-scale problem-solving from their desktops. The investment in developing Roadrunner quickly proved to be the catalyst for the Linux supercomputing movement, inspiring a new wave of supercomputers created for broader commercial use.
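To make that distinction concrete, here is a minimal sketch of what "fast communication among processors" looks like in a modern parallel program, written with the mpi4py bindings for MPI. It is an illustrative example of the programming model only, not a reconstruction of the software that Roadrunner or the Beowulf clusters actually ran.

    # Minimal sketch of tightly coupled inter-processor communication using
    # the mpi4py bindings for MPI. Illustration of the programming model only;
    # not the software stack of Roadrunner or early Beowulf clusters.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID within the parallel job
    size = comm.Get_size()   # total number of cooperating processes

    # Each process first works independently on its own slice of the problem,
    # the part a loosely coupled cluster also handles well.
    local_sum = sum(range(rank * 1_000_000, (rank + 1) * 1_000_000))

    # Then every process must exchange and combine partial results before any
    # of them can continue; fast collective communication like this is what
    # separates a supercomputer from a pile of independent computers.
    global_sum = comm.allreduce(local_sum, op=MPI.SUM)

    if rank == 0:
        print(f"{size} processes computed a global sum of {global_sum}")

Launched with, for example, mpiexec -n 4 python sum.py (the filename is a placeholder), the same script can scale from a laptop to thousands of processors.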
When Roadrunner went online, it was among the 100 fastest supercomputers in the world. Since then, the technology has only improved, and the global competition to build the top-ranked supercomputer has only intensified. Governments around the world have increased investment in state-of-the-art computing in order to compete with other countries. A symbolic representation of that global race, the Top500 list ranks the world's fastest and most powerful supercomputers and reveals which countries recognize the importance of a strong supercomputing infrastructure. While the technical capabilities of the ranked machines are certainly impressive on their own, make no mistake: they are indicators of the economic, military, and commercial capabilities of the countries represented. As the US Council on Competitiveness has said, "the country that wants to outcompete, must outcompute."
When it comes to performing complex scientific tasks, supercomputing technology is invaluable. Issues at the nexus of nature and civilization, such as the COVID-19 pandemic, will always be relevant to researchers and will always require cutting-edge tools. In a recent study, a team of researchers, including my colleagues at New Jersey Institute of Technology, built models that track the movement of COVID-19 particles in supermarkets; their simulations provide valuable information about how the virus spreads. Those simulations were made possible by the San Diego Supercomputer Center at the University of California San Diego. Investment drives innovation and even life-saving discoveries.
Democratization
If greater investment is the first step, the second is democratization: the problem-solving capabilities of supercomputers will only improve as more people gain access to and learn to use the technologies. Women and other underrepresented groups in STEM fields currently have limited access to the power of supercomputing, and the high-performance computing field is losing out on important perspectives as a result.
A significant barrier to democratization is one of practicality: working with massive amounts of data, such as tens of terabytes, usually requires knowledge of and access to high-performance computers. But thanks to an award from the National Science Foundation, my research team is developing new algorithms and software that allow for easier access to high-performance computing. The research project will focus on extending Arkouda, an open-source code library that is used by data scientists at the Department of Defense, and it will start to bridge the gap between ordinary people and high-performance computing technology. When we remove barriers to use and allow more people to interact with these technologies, we can utilize the full capabilities of supercomputers.
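As a rough sketch of what that lower barrier can look like, the snippet below uses Arkouda's Python client, which exposes a NumPy-like interface while the heavy computation runs on a separate parallel server. The server address and the synthetic data here are assumptions made for illustration, not details of the NSF-funded project.

    # Rough sketch of analyzing server-resident data through Arkouda's
    # NumPy-like Python client. The host/port and the synthetic arrays are
    # placeholders for illustration, not details of the actual project.
    import arkouda as ak

    # Connect to a running Arkouda server, which may live on a workstation,
    # a campus cluster, or a supercomputer.
    ak.connect(server="localhost", port=5555)

    # Create large arrays on the server; the Python session holds only
    # lightweight handles, so the data can exceed the analyst's own machine.
    stations = ak.randint(0, 50, 10**9)
    readings = ak.randint(0, 1000, 10**9)

    # Familiar array operations execute in parallel on the server.
    print("mean reading:", readings.sum() / readings.size)

    # Group-by aggregation over the full dataset, a common data-science task.
    by_station = ak.GroupBy(stations)
    station_ids, totals = by_station.sum(readings)
    print("totals for the first five stations:", totals[:5])

    ak.disconnect()

Because the same script works whether the server runs on a laptop or across many nodes of a cluster, an analyst does not have to rewrite their workflow to move from exploratory work to terabyte-scale runs.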
Increasing investment in supercomputers and expanding their user base help drive innovation and improvement forward in academia, government, and the private sector. If we can't get advanced supercomputers into the hands of more people, the US will fall behind globally in solving some of tomorrow's most pressing problems.
David A. Bader is a Distinguished Professor in the Department of Computer Science in the Ying Wu College of Computing and Director of the Institute for Data Science at New Jersey Institute of Technology. He is a Fellow of the IEEE, AAAS, and SIAM.