Machine Catalog
We constantly redefine technology's boundaries.

Our hardware enables discovery-level science. The world-class, balanced environment we’ve created at LLNL has helped usher in a new era of computing and will inspire the next generation of technical innovation and scientific discovery.

Livermore Computing (LC) delivers multiple petaFLOP/s of compute power, massive shared parallel file systems, powerful data analysis platforms, and archival storage capable of holding hundreds of petabytes of data. We support a diverse and dynamic range of computing architectures—Blue Gene/Q; Linux clusters that rank among the largest on the planet (Zin, the biggest, delivers 980 teraFLOP/s); visualization, high-memory, and big data machines (such as Apache Hadoop servers); and both homogeneous and heterogeneous core architectures. We continually explore new architectures and technologies.

Our advanced technology systems, such as Sequoia, run large-scale multiphysics codes in a production environment. Our advanced architecture systems explore technologies at scale, with the intent that, once matured, those technologies can serve as the basis for designing future production resources.

We also specialize in deploying commodity technology systems, such as the Tri-Lab Linux Capacity Clusters. These systems leverage industry advances and open-source software standards, allowing us to build, field, and integrate Linux clusters of various sizes.

LLNL has deployed dozens of very large Linux clusters since 2001. These clusters have commodity nodes, provide a common programming model, and implement similar cluster architectures. Hardware components are carefully selected for performance, usability, manageability, and reliability, and are then integrated and supported using a strategy that evolved from practical experience.
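That common programming model is, in practice, typically MPI. The sketch below is an illustration, not LC documentation: it shows a minimal parallel program using the mpi4py binding, and both the module and the mpirun launch command are assumptions that vary by site.

    # Minimal MPI sketch, assuming the mpi4py binding is installed.
    # Launch with, e.g.: mpirun -n 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD      # communicator spanning all ranks in the job
    rank = comm.Get_rank()     # this process's ID within the job
    size = comm.Get_size()     # total number of parallel processes

    # Each rank reports its host name; rank 0 gathers and prints a summary.
    names = comm.gather(MPI.Get_processor_name(), root=0)
    if rank == 0:
        print(f"{size} ranks running on nodes: {sorted(set(names))}")

The same program runs unchanged whether the job spans two nodes or two thousand, which is what makes a common programming model valuable across clusters of different sizes.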

We also deploy specialized visualization clusters and archival storage, and we support and contribute to the development of the open-source Lustre parallel file system. Lustre is mounted across multiple compute clusters and delivers high-performance, global access to data.

For configuration details of the more than 20 production compute platforms supported by LC, see our systems summary. Descriptions of a few of our computers are included below.

Computers

Vulcan

Vulcan is one of the largest, most capable computational resources available in the U.S. for industrial collaborators. This prodigious unclassified supercomputer is shared by the M&IC and ASC programs, with 1.5 of its 5 petaFLOP/s directly supporting NNSA’s mission to enhance U.S. economic competitiveness. Partners access Vulcan, whether for open, publishable science or business-sensitive and proprietary research, through the High Performance Computing Innovation Center (HPCIC).

HPCIC partners gain access both to a percentage of Vulcan for a period of time and to the entire associated infrastructure of the world-class LC HPC environment, including subject-matter experts, network access, file systems, archival storage, and help-line support.

In addition to supporting HPCIC efforts, Vulcan is used for NNSA research programs, academic alliances, and LLNL institutional science and technology efforts.

Vulcan, a 24-rack IBM Blue Gene/Q system based on the POWER architecture, was acquired as part of the same contract that brought Sequoia to Livermore.

Sequoia

Ranked among the world’s most powerful supercomputers, Sequoia supports two missions: quantifying the uncertainties in numerical simulations of nuclear weapons performance and performing the advanced weapons science calculations needed to develop accurate physics-based models for weapons codes. Sequoia is primarily water-cooled and significantly more energy efficient than comparable systems, which is essential to controlling operating costs.
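To make the first mission concrete: a common way to quantify uncertainty is to run a simulation many times over sampled inputs and summarize the spread of the outputs. The sketch below is a deliberately simplified Monte Carlo illustration; the simulate function is a hypothetical stand-in, not an actual weapons code or Sequoia’s workflow.

    # Simplified Monte Carlo uncertainty quantification sketch (illustrative only;
    # production UQ ensembles are vastly larger and run real multiphysics codes).
    import random
    import statistics

    def simulate(param: float) -> float:
        # Hypothetical stand-in for an expensive physics simulation.
        return param ** 2 + 0.1 * param

    # Sample the uncertain input from an assumed distribution, then run the ensemble.
    samples = [random.gauss(1.0, 0.05) for _ in range(10_000)]
    outputs = [simulate(p) for p in samples]

    # The spread of the outputs is the uncertainty induced by the uncertain input.
    print(f"mean output: {statistics.mean(outputs):.4f}")
    print(f"std dev:     {statistics.stdev(outputs):.4f}")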

Sequoia has held the No. 1 spot on the Graph500 list since November 2012, indicating that it remains the world’s fastest system at processing extremely large (petabyte- and exabyte-scale) data sets. The Graph500 benchmark measures how quickly a system can search through a large data set—an important indicator of a system’s usefulness as computer scientists increasingly use supercomputers to analyze massive data-intensive workloads in addition to executing traditional modeling and simulation tasks. Sequoia traversed 15,363 billion edges per second.
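For a concrete sense of the metric: Graph500 results are reported in traversed edges per second (TEPS), measured during a breadth-first search of an enormous graph. The toy sketch below illustrates the idea only; the official benchmark uses far larger graphs and carefully specified counting rules.

    # Toy illustration of the Graph500 idea: time a breadth-first search and
    # report traversed edges per second (TEPS). Not the official benchmark code.
    import time
    from collections import deque

    def bfs_teps(adj, root):
        visited = {root}
        queue = deque([root])
        edges = 0
        start = time.perf_counter()
        while queue:
            node = queue.popleft()
            for neighbor in adj[node]:
                edges += 1                    # count every inspected edge
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        return edges / (time.perf_counter() - start)

    # Tiny example graph; real Graph500 runs traverse graphs with trillions of edges.
    graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    print(f"{bfs_teps(graph, root=0):.0f} TEPS")

In Graph500’s units, Sequoia’s 15,363 billion edges per second corresponds to 15,363 GTEPS.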

Cab

Acquired as part of the Tri-Lab Linux Capacity Cluster 2 (TLCC2) procurement, Cab can be accessed by users in the HPCIC. Cab is a large capacity resource shared by the M&IC and ASC programs for running small to moderately sized parallel jobs.

Sierra

Sierra is a workhorse for solving computationally intensive problems, hosting both LLNL Grand Challenge-scale and mission-related projects. Used by the M&IC program, Sierra is sited in the Collaboration Zone and is tuned for running large parallel jobs.