Inventing new technology with academia and industry
Co-design describes a computer system design process in which hardware and software partners work collectively, influenced by the ultimate users of the system, to ensure that technology trade-offs are evaluated in the design of an end product. LLNL has a long history of successful, revolutionary co-design-like relationships, dating back to the earliest days of supercomputing with first-of-a-kind machines. Most of those machines were rated among the fastest in the world at the time and were developed through a process very similar to the co-design processes proposed for the exascale era. In the 1970s and ’80s, LLNL worked closely with Control Data Corporation and Cray Research to design machines, using Laboratory applications or proxies (such as the Livermore Loops) to influence the vendor solutions and produce a more usable machine. In the ASCI era (1996 to the present), we have worked closely with IBM as it has delivered five generations of supercomputers, all heavily influenced by co-design—ASCI Blue, ASCI White, ASC Purple, BlueGene/L, and Sequoia.
Co-design projects advance the state of the art in a cost-effective manner, benefiting both end users, such as national laboratories, and the computing industry, which can expand the market with proven, easy-to-deploy business solutions. Through co-design, each partner has access to resources vastly greater than any one partner could afford alone. This allows the partners to push boundaries and focus on the critical aspects of high reliability and scalability required for next-generation computing. Collaboration also promotes long-term relationships between Laboratory researchers and the industrial partners, fostering continuity in high performance computing (HPC) technology development.
The term co-design refers to a computer system design process in which scientific problem requirements influence architecture design, and technology constraints inform the formulation and design of algorithms and software. Major computational science research and development centers must be formally engaged across hardware, system software, numerical methods, algorithms, and applications to ensure that emerging computer architectures are well suited to target applications. The co-design methodology combines the expertise of vendors, hardware architects, system software developers, domain scientists, computer scientists, and applied mathematicians, who work together to make informed decisions about features and trade-offs in the design of the hardware, software, and underlying algorithms.
For those with extreme-scale computing requirements, co-design is widely considered a necessary step toward a usable system. Extreme on-node concurrency, heterogeneous architectures, deep non-uniform memory hierarchies, resilience, power, performance portability, and programmer productivity are challenges facing the larger HPC community, not just exascale systems, because many of these issues will trickle down—or already have—to the commodity server over the next decade.
On the Web
For more information and examples of how co-design is partnering with American industry:
With the LLNL-managed FastForward program, technical experts from seven national labs are working with five companies to accelerate the R&D of critical technologies for extreme-scale computing. FastForward is funded by DOE’s Office of Science and NNSA.
One co-design example in LLNL’s history is the Hyperion project. In 2008, LLNL teamed with 10 computing industry leaders, with each one playing a vital role. Dell, Intel, QLogic, Mellanox, and Super Micro Computer built the processors and nodes and helped integrate the input/output system. QLogic, Cisco Systems, and Mellanox built the InfiniBand and Ethernet network components. DataDirect Networks, Sun Microsystems, and LSI created the storage hardware, while Red Hat was responsible for Linux testing and various system administration duties.