Co-design
Inventing new technology with academia and industry

Co-design describes a computer system design process in which hardware and software partners work collectively, influenced by the ultimate users of the system, to ensure that technology trade-offs are evaluated in the design of the end product. LLNL has a long history of successful, revolutionary co-design-like relationships, dating back to the earliest days of supercomputing with first-of-a-kind machines. Most of those machines were rated among the fastest in the world at the time and were developed through a process very similar to the co-design processes proposed for the exascale era. In the 1970s and '80s, LLNL worked closely with Control Data Corp. (CDC) and Cray Research to design machines, using Laboratory applications or proxies (such as the Livermore Loops) to influence the vendor solutions and produce a more usable machine. In the ASCI era, from 1996 to the present, we have worked closely with IBM as they have delivered five generations of supercomputers, all heavily influenced by co-design: ASCI Blue, ASCI White, ASC Purple, BlueGene/L, and Sequoia.
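To make the proxy idea concrete, the sketch below follows the pattern of Kernel 1 of the Livermore Loops, the "hydro fragment." The array length and coefficient values are illustrative placeholders, not the official benchmark parameters.

    /* A sketch of Livermore Loops Kernel 1 (the "hydro fragment").
       Problem size and constants are illustrative, not the official
       benchmark settings. */
    #include <stdio.h>

    #define N 1001  /* illustrative problem size */

    int main(void) {
        static double x[N], y[N], z[N + 11];
        const double q = 0.5, r = 2.0, t = 1.5;  /* illustrative constants */

        /* Fill the inputs with arbitrary data. */
        for (int k = 0; k < N + 11; k++) z[k] = 0.01 * k;
        for (int k = 0; k < N; k++)      y[k] = 0.02 * k;

        /* The kernel: a streaming multiply-add pattern that exercises
           memory bandwidth and floating-point pipelining. */
        for (int k = 0; k < N; k++)
            x[k] = q + y[k] * (r * z[k + 10] + t * z[k + 11]);

        printf("x[%d] = %f\n", N - 1, x[N - 1]);
        return 0;
    }

Because each such kernel isolates a single memory-access and arithmetic pattern drawn from Laboratory workloads, a vendor could tune its memory system and pipelines against the proxy without needing the full applications.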

Co-design projects advance the state of the art in a cost-effective manner, benefiting both end users, such as the national security labs, and the computing industry, which can expand the market with proven, easy-to-deploy business solutions. Through co-design, each partner has access to resources vastly greater than any one partner could afford alone. This allows the partners to push boundaries and focus on the critical aspects of high reliability and scalability required for petascale computing. Collaboration also promotes long-term relationships between Laboratory researchers and the industrial partners, fostering continuity in HPC technology development.

The term co-design refers to a computer system design process in which scientific problem requirements influence architecture design, and technology constraints inform the formulation and design of algorithms and software. Major ongoing research and development centers of computational science need to be formally engaged in the design of the hardware, software, numerical methods, algorithms, and applications to ensure that emerging computer architectures are well suited to target applications. The co-design methodology combines the expertise of vendors, hardware architects, system software developers, domain scientists, computer scientists, and applied mathematicians, who work together to make informed decisions about features and trade-offs in the design of the hardware, software, and underlying algorithms.

For those with extreme-scale computing requirements, co-design is widely considered a necessary step toward a usable system. Extreme on-node concurrency, heterogeneous architectures, deep non-uniform memory hierarchies, resilience, power, performance portability, and programmer productivity are challenges facing the larger HPC community, not just exascale systems; many of them will trickle down to the commodity server over the next decade, or already have.
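As one illustration of the performance-portability challenge named above, the sketch below writes a kernel once against a small dispatch macro so the same loop body can run on different execution back ends. The FORALL macro and the USE_OPENMP switch are hypothetical illustrations, not a real library interface; production portability layers provide far richer abstractions.

    /* A minimal sketch of writing a kernel once and selecting a back end
       at compile time. FORALL and USE_OPENMP are hypothetical names for
       illustration, not a real library API. */
    #include <stdio.h>
    #include <stddef.h>

    #if defined(USE_OPENMP)
      /* Parallel back end: compile with -fopenmp -DUSE_OPENMP. */
      #define FORALL(i, n) _Pragma("omp parallel for") \
                           for (size_t i = 0; i < (size_t)(n); i++)
    #else
      /* Sequential fallback back end. */
      #define FORALL(i, n) for (size_t i = 0; i < (size_t)(n); i++)
    #endif

    int main(void) {
        enum { N = 1000 };
        static double a[N], b[N];

        for (size_t i = 0; i < N; i++) b[i] = (double)i;

        /* The kernel body is written once; adding a new back end (e.g., a
           GPU target) should not require rewriting this loop. */
        FORALL(i, N) {
            a[i] = 2.0 * b[i] + 1.0;
        }

        printf("a[%d] = %f\n", N - 1, a[N - 1]);
        return 0;
    }

The design point, under these assumptions, is that performance portability is largely a software-structure problem: when kernels are written once against an abstraction layer, retargeting to a new architecture becomes a back-end change rather than an application rewrite.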