We house some of the world’s most powerful supercomputers.

The computing and simulation infrastructure at LLNL spans multiple buildings with large computer rooms. The largest facility was designed specifically to house the landmark Purple and Blue Gene/L systems and their successors, which currently include Sequoia, Vulcan, and Zin, among others. This computer facility provides 48,000 ft² of floor space and 30 MW of power for systems and peripherals, plus additional power for the associated machine-cooling system. That’s more than one acre of floor space devoted to computing. One of the most modern HPC facilities in the world, it’s maintained by a world-class engineering and facilities staff in collaboration with a 24/7 operations staff.

Our infrastructure exemplifies innovative engineering solutions for electrical, mechanical, and structural design. Our engineers designed and implemented state-of-the-art solutions to prepare the facility for Sequoia, which had unprecedented requirements for electrical power distribution, mechanical cooling infrastructure, and structural support for extremely heavy racks. We also incorporate flexibility into our facility planning. For instance, our data center supporting collaborative work with industry and academia on the Livermore Valley Open Campus is modular, enabling us to quickly change out or add computing resources as technologies and business needs change.

We’re committed to achieving energy efficiency and full life-cycle sustainability in our facilities. Since 2010, two LLNL computer buildings have been certified as “green” facilities under the Leadership in Energy and Environmental Design (LEED) rating system. Sequoia debuted in 2012 at the top of the Green500, TOP500, and Graph 500 lists, making it one of the most energy-efficient supercomputers on the planet.

As we plan the next generation of innovative computing and simulation facilities, we emphasize function over form, reduce energy intensity, and optimize the efficiency of HPC systems. For example, we recently stood up a modular, sustainable new facility for unclassified machines.