High-performance computing (HPC) software is becoming increasingly complex, quickly outpacing the capabilities of existing software management tools. To support scientific applications, system administrators and developers frequently build, install, and support different configurations of math and physics libraries and other software on the same HPC system. Those applications are later rebuilt to fix bugs and to support new operating system versions, compilers, message passing interface (MPI) versions, and other dependency libraries. Forcing all application teams to use a single, standard software stack is infeasible, but managing many software configurations and versions for all users on a system is a time-consuming task for supercomputing staff.
Existing tools can automate portions of this process, but they either cannot manage installation of multiple versions and configurations, or they require numerous configuration files for each software version, leading to organizational and maintenance issues. Enter Spack, a flexible, Python-based package management tool designed by LLNL computer scientists. Spack operates on a wide variety of HPC platforms and environments, and it allows any number of builds to peacefully coexist on the same system. Because each installation is linked directly to its exact dependencies, software installed by Spack runs correctly no matter what environment it is launched from, and file management is streamlined.
Spack maintains a single package file for many different builds of the same package, and its simple spec syntax lets users specify versions and configuration options concisely. Each package's dependency graph (the set of packages it relies on to function) defines a unique configuration, and Spack installs each configuration in its own directory, enabling many configurations of the same package to coexist on the same machine. Spack uses RPATH linking so that each package knows where to find its dependencies at run time.
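The idea of one directory per configuration can be sketched in a few lines of Python. The function names and the layout of the install prefix below are hypothetical, not Spack's actual implementation: the point is that hashing a package together with its entire dependency graph yields a distinct path for every distinct configuration.

```python
import hashlib

def spec_hash(name, version, options, deps):
    """Hypothetical sketch: derive a short hash identifying one concrete
    configuration, including all of its dependencies (recursively)."""
    parts = [name, version, ",".join(sorted(options))]
    # A change anywhere in the dependency graph changes the hash.
    parts += [spec_hash(*d) for d in sorted(deps)]
    return hashlib.sha1("|".join(parts).encode()).hexdigest()[:8]

def install_prefix(root, name, version, options, deps):
    # Each concrete configuration gets its own directory, so many
    # builds of the same package can coexist on one machine.
    return f"{root}/{name}-{version}-{spec_hash(name, version, options, deps)}"

# Two builds that differ only in a dependency's version land in
# different directories and never collide.
a = install_prefix("/opt/spack", "mpileaks", "1.0", ("+debug",),
                   (("mpich", "3.0.4", (), ()),))
b = install_prefix("/opt/spack", "mpileaks", "1.0", ("+debug",),
                   (("mpich", "3.1", (), ()),))
```

In Spack's actual spec syntax, a user would express such a request concisely on the command line, for example `spack install mpileaks@1.0 ^mpich@3.0.4` to pin both the package version and its MPI dependency.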
By ensuring that only one configuration of each library appears in a given dependency graph, Spack guarantees that the interface between program modules (the application binary interface, or ABI) remains consistent. Users do not need to know the structure of the dependency graph, only the names of the dependencies. In addition, dependencies may be optional, and a process called concretization fills in any configuration details the user leaves unspecified, based on user and site preferences.
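Concretization can be illustrated with a minimal sketch. The dictionary of site preferences and the `concretize` function below are illustrative assumptions, not Spack's real data structures; the essential behavior is that explicit user choices always win, and preferences fill every remaining gap so the final spec is fully concrete.

```python
# Hypothetical site-wide defaults, consulted whenever the user
# leaves a detail of the spec unspecified.
SITE_PREFERENCES = {
    "compiler": "gcc@4.9.2",
    "mpi": "mvapich2@2.1",
    "build_type": "Release",
}

def concretize(abstract_spec, preferences=SITE_PREFERENCES):
    """Return a fully concrete spec: user choices take precedence,
    and site/user preferences fill in everything left unspecified."""
    concrete = dict(preferences)
    concrete.update({k: v for k, v in abstract_spec.items() if v is not None})
    return concrete

# The user pins only the MPI implementation; the compiler and
# build type are filled in from the site preferences.
spec = concretize({"compiler": None, "mpi": "openmpi@1.10"})
```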
Spack was designed for large supercomputing centers, where many users and application teams share common software installations on clusters with exotic architectures and rely on libraries that lack a standard ABI. Spack can ensure that all packages in a build use the same compiler, and the team is currently working on ensuring ABI compatibility when compilers are mixed. Spack can also handle interfaces whose implementations are ABI-incompatible with one another, such as MPI.
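Because MPI implementations are not ABI-compatible with one another, a package cannot simply link against "whichever MPI is around." The sketch below, with an assumed provider table and function name rather than Spack's actual internals, shows the underlying idea: a package depends on an abstract interface name such as "mpi," and a single concrete implementation is chosen for the whole dependency graph so every package in the build agrees.

```python
# Hypothetical table mapping abstract interface names to concrete
# implementations, in site-preferred order.
PROVIDERS = {"mpi": ["mvapich2", "openmpi", "mpich"]}

def resolve_virtual(dep_name, requested=None, providers=PROVIDERS):
    """Pick one concrete implementation for an abstract dependency,
    so the whole dependency graph shares a single, consistent choice."""
    if dep_name not in providers:
        return dep_name            # already a concrete package
    if requested in providers[dep_name]:
        return requested           # user chose a specific implementation
    return providers[dep_name][0]  # fall back to the site's preferred one
```

A package that asks for "mpi" gets the site default unless the user requests a specific implementation, and a dependency that is already concrete passes through unchanged.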
Various Livermore code teams are now making use of Spack. For instance, Spack was used to successfully automate software builds for ARES, a large Livermore radiation hydrodynamics code. At Livermore, ARES runs on commodity Linux clusters and on Blue Gene/Q. With Spack, the team was able to test more configurations more efficiently than ever before. The tool has enabled them to detect and fix compiler incompatibilities and to complete more testing, which has helped both ARES and LLNL library developers build more robust software. Because other LLNL code teams use many of the same libraries as ARES, they have also begun creating an internal repository of Spack build recipes, which will make packaging the next code much easier.
Spack is open-source software, and a rapidly growing community of contributors and users has helped to grow its popularity. LLNL staff have collaborated with the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory to support Spack on Cray supercomputers, and Spack is now used to deploy software at NERSC. Spack is also used at Argonne National Laboratory near Chicago and at École Polytechnique Fédérale de Lausanne in Switzerland, and it is gaining traction at other universities and national laboratories.
You can join the Spack community by visiting its repository: Spack on GitHub
For more information: