We partner with other LLNL programs to develop and apply computer science expertise, providing timely, cost-effective computing solutions through innovative software technologies and collaborative application development. Much of this locally developed software is available for download.
LLNL’s Center for Applied Scientific Computing (CASC) develops world-class software for scientific computing, with a focus on scaling to the world’s largest machines. CASC software includes math libraries (hypre, SUNDIALS, XBraid), language tools (ROSE, Babel), partial differential equation (PDE) frameworks (Overture, SAMRAI, MFEM), and other tools and benchmarks.
Proxy applications are small, self-contained applications that represent the HPC workloads we care about while remaining simple enough to understand and to use for trying out new ideas. LLNL is one of several DOE labs building proxy apps, such as LULESH, UMT, and Mulard, as a way to open a two-way channel with vendors and researchers for evaluating trade-offs in future architectures and software development methods. Next-generation simulation algorithms are also being explored in LLNL-developed research codes, such as BLAST.
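Part of the appeal is that a proxy app can be built and run in minutes. A hedged sketch using LULESH (build targets and command-line flags vary between releases; `-s` and `-i` are assumed here to set problem size and iteration count, so check the app's own help output):

```shell
# Clone and build the LULESH proxy app (serial build shown;
# the Makefile and its options may differ between releases).
git clone https://github.com/LLNL/LULESH.git
cd LULESH
make

# Run a small problem: -s sets the edge size of the cube mesh
# and -i caps the iteration count (assumed flags -- verify
# against the release you built).
./lulesh2.0 -s 30 -i 100
```

A run at this scale is small enough to profile on a workstation, which is exactly the point: the same kernel can then be re-examined on a prototype architecture or under a new programming model.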
LLNL deployed a Linux cluster in 2002 that peaked at #3 on the TOP500 list and has been committed to supporting the Linux ecosystem at the high end of commodity computing ever since. Administrators of Linux clusters will find an array of robust tools developed at LLNL for platform management (FreeIPMI, pdsh, Whatsup), authentication (Munge), and I/O analysis (LMT, IO Watchdog).
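As a flavor of these tools, pdsh fans a single command out across many nodes in parallel; a minimal sketch (the hostnames are hypothetical, and the cluster's authentication, e.g. via Munge, must already be configured):

```shell
# Run 'uptime' on four nodes in parallel; pdsh expands the
# bracketed hostlist notation node[01-04] itself.
pdsh -w node[01-04] uptime

# Fold per-node output into groups of identical lines using
# dshbak, the consolidation filter that ships with pdsh.
pdsh -w node[01-04] uname -r | dshbak -c
```

The `dshbak -c` form is handy for spotting the one node whose kernel version or state differs from the rest.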
Open source software (OSS) has had a huge impact on innovation in HPC by leveraging the collective expertise of the HPC ecosystem and making the cost of entry to HPC affordable. LLNL has developed and deployed OSS across the entire software stack from operating systems (CHAOS/TOSS) to resource management (SLURM) to performance analysis (Open|SpeedShop, mpiP) to scalable debugging (STAT) to data compression (zfp).
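To make the resource-management layer concrete, work on a SLURM-managed cluster typically starts with a small batch script handed to `sbatch`; a minimal sketch (the node counts, time limit, and application name are placeholders for whatever a given cluster and code require):

```shell
#!/bin/bash
# Request 2 nodes with 16 tasks per node for 30 minutes.
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --time=00:30:00
#SBATCH --job-name=demo

# Launch the parallel application under SLURM's srun launcher,
# which starts one task per allocated slot.
srun ./my_app
```

Submitting with `sbatch job.sh` queues the job; `squeue -u $USER` shows its state while it waits for and holds the allocation.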
Turning raw data into understandable information is a fundamental requirement for scientific computing. LLNL and Livermore Computing develop and support an array of visualization options, including tools developed in-house (VisIt, GLVis), commercial products (IDL, EnSight), and open standards and software that we support and promote (OpenGL, ParaView).
Livermore Computing supports several tools aimed at maximizing the efficiency of the HPC software developer. A robust deployment process ensures that the best products are available by default while giving the developer flexibility of choice. These include compilers (Intel, GNU, XL, PGI), debuggers (TotalView), memory checkers (Memcheck/Valgrind), and profiling and tracing tools (TAU, Open|SpeedShop, Vampir, mpiP).
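As an illustration of the memory-checking workflow, Valgrind's Memcheck tool wraps an unmodified binary, so no special build step is required beyond debug symbols; the compiler invocation and program name below are placeholders:

```shell
# Build with debug symbols (-g) and no optimization (-O0) so
# Memcheck can report accurate file and line information.
mpicc -g -O0 -o my_app my_app.c

# Memcheck is Valgrind's default tool; --leak-check=full reports
# each leak with a stack trace, and --track-origins=yes traces
# uses of uninitialized values back to their allocation.
valgrind --leak-check=full --track-origins=yes ./my_app
```

The cost is a substantial slowdown, so this is usually run on a small test problem rather than a production job.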
File I/O at extreme scales is a particular challenge for both application developers and system architects. The Scalable I/O project is a collection of publications, talks, and proxy applications for studying and understanding scalable I/O behaviors.
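One trade-off such studies examine is file-per-process versus single-shared-file output. A toy sketch in plain shell (the rank count and filenames are illustrative; a real study would use MPI-IO or one of the project's proxy apps):

```shell
#!/bin/sh
# Simulate 4 "ranks" each writing its own output file -- the
# file-per-process pattern: simple and contention-free, but
# N files per dump stress the filesystem metadata server at scale.
for rank in 0 1 2 3; do
    printf 'data from rank %d\n' "$rank" > "out.$rank"
done

# The shared-file pattern: all ranks' data lands in one file.
# Fewer files to manage, but real codes must coordinate offsets
# so writers do not collide.
cat out.0 out.1 out.2 out.3 > out.shared
wc -l out.shared
```

Neither pattern wins at every scale, which is why the project pairs proxy apps with measurement rather than prescribing a single answer.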