Shock hydrodynamic simulations, used to understand the behavior of materials at very high pressures, provide essential information for Lawrence Livermore stockpile stewardship work and complement and support laser experiments performed at Livermore’s National Ignition Facility (NIF). However, high compression rates and hydrodynamic instabilities make accurate modeling challenging.
“Implosions at NIF generate enormous pressure,” explains computational mathematician Tzanio Kolev. “This produces shock waves generating and interacting with each other, and dealing with these discontinuities computationally is difficult. Typically these are modeled by an Arbitrary Lagrangian–Eulerian (ALE) approach, where during the Lagrangian phase the mesh evolves with the simulation. One of the challenges with this method is that, if we’re not careful, the mesh will intersect and tangle. Also, we have multiple materials to represent. Modeling the physical discontinuities at shock fronts and material interfaces is challenging mathematically.”
Achieving higher-quality simulations of these problems requires the development of more advanced numerical algorithms. To that end, a team of computer scientists, mathematicians, and physicists from the Computation and Weapons and Complex Integration (WCI) organizations, led by Kolev and WCI’s Rob Rieben, has created a new high-order ALE framework as an alternative to the standard low-order ALE solution methods. The framework is based on a high-order finite element approach, which uses simple element equations over many small curved geometric subdomains to approximate a more complex equation over a large domain. “Higher-order elements have more vertices, or control points, positioned around them, allowing us to curve the element boundaries and the geometry inside,” says Kolev. “This helps us more accurately follow the material flow.”
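The curving Kolev describes can be sketched in a few lines. The following is an illustrative example (not BLAST code) of a quadratic Lagrange edge in two dimensions: three control points define the element boundary, and moving the middle point off the chord bows the edge, letting the mesh follow curved material flow.

```python
# Hypothetical sketch: a quadratic (order-2) Lagrange edge in 2D.
# p0 and p2 are endpoint control points; p1 is the midpoint control point.
# Offsetting p1 from the straight line between p0 and p2 curves the boundary.

def quadratic_edge(p0, p1, p2, t):
    """Evaluate the quadratic Lagrange curve at reference coordinate t in [0, 1]."""
    # Quadratic Lagrange shape functions associated with nodes t = 0, 0.5, 1
    n0 = 2.0 * (t - 0.5) * (t - 1.0)
    n1 = -4.0 * t * (t - 1.0)
    n2 = 2.0 * t * (t - 0.5)
    return (n0 * p0[0] + n1 * p1[0] + n2 * p2[0],
            n0 * p0[1] + n1 * p1[1] + n2 * p2[1])

# Midpoint on the chord: the edge stays straight.
straight = quadratic_edge((0.0, 0.0), (0.5, 0.0), (1.0, 0.0), 0.25)
# Midpoint lifted off the chord: the same three points now describe a curved edge.
curved = quadratic_edge((0.0, 0.0), (0.5, 0.2), (1.0, 0.0), 0.25)
print(straight, curved)
```

A linear (low-order) element has only the two endpoints, so its boundary is forced to be straight; the extra control point is what buys the geometric flexibility.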
The new numerical methods were implemented in the Livermore-developed BLAST research code. Over the past few years, the team has successfully demonstrated that the high-order ALE framework—and the BLAST code overall—can produce efficient, accurate, and robust simulations of a variety of challenging shock hydrodynamics problems, but they have also identified opportunities for improvement. In 2014, the research expanded to include performance analysis and improvement, led by Ian Karlin, and a Laboratory Directed Research and Development (LDRD) program strategic initiative (SI), led by WCI’s Bert Still. Under the new SI, the team has incorporated a new high-order Discontinuous Galerkin (DG) remapping algorithm into the ALE framework. This enables BLAST to take larger time steps and to treat mesh elements containing multiple materials more accurately.
Sometimes mesh elements cannot conform to a function as well as desired, particularly for functions with steep gradients. For those portions of the calculation, the new DG algorithm “stops time” and institutes a remap phase, during which the function stays the same while the mesh evolves. Once the function has been transferred to a more appropriate mesh, the calculation continues from the point where it left off. Notes Kolev, “With high-order ALE, we can push the Lagrangian phase much farther than with low-order ALE codes, but finite elements thin out and time steps become small eventually. With our remap approach, we can run with much larger time steps in a way that preserves accuracy.”
When the mesh changes, a single element can end up containing multiple materials, which creates mathematical difficulties. The team has solved the problem by representing each material as a high-order function for purposes of remapping. “With our newest work, we are able to capture a mix of materials at a very detailed level,” notes Kolev. Overall, the remapping algorithm has demonstrated excellent parallel scalability, geometric flexibility, and accuracy on a variety of model problems and single- and multiple-material ALE simulations. One of the most demanding calculations to date was a three-dimensional BLAST simulation involving three materials performed on 16,000 cores of the Vulcan supercomputer.
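A toy illustration of why a per-material function beats a single number per element (the setup and names here are hypothetical, not BLAST’s implementation): if a material’s indicator is sampled at several points inside an element rather than reduced to one volume fraction, the position of an interface cutting through the element can be recovered at sub-element resolution.

```python
# Hypothetical sketch: a material indicator sampled at points inside one element.

def element_fraction(samples):
    """Single-number summary: the average of the point samples."""
    return sum(samples) / len(samples)

def interface_location(xs, samples, cut=0.5):
    """Estimate where the sampled indicator crosses `cut`, by linear interpolation."""
    for (x0, s0), (x1, s1) in zip(zip(xs, samples), zip(xs[1:], samples[1:])):
        if (s0 - cut) * (s1 - cut) <= 0 and s0 != s1:
            return x0 + (cut - s0) * (x1 - x0) / (s1 - s0)
    return None  # no crossing: element is filled by one material

# Four sample points inside one element; material A fills roughly the left 60%.
xs = [0.125, 0.375, 0.625, 0.875]
frac_A = [1.0, 1.0, 0.4, 0.0]
print(element_fraction(frac_A))        # coarse per-element volume fraction
print(interface_location(xs, frac_A))  # sub-element estimate of the interface
```

The single average throws away where the interface sits; the sampled function keeps that information, which is what a high-order representation of each material provides during remap.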
The team has now begun to apply high-order solution methods to other types of physics, beginning with radiation diffusion. “We’re gradually extending it to more and more pieces of what a realistic multiphysics simulation would require,” says Kolev.
By resolving some long-standing numerical challenges in shock hydrodynamics simulation, the algorithms developed through this research support the Laboratory’s national and energy security missions, and they also benefit research on parallel performance and next-generation computer architectures. Higher-order methods have greater FLOP/byte ratios, meaning that more time is spent on floating-point operations relative to memory transfer, an important characteristic of numerical algorithms for extreme-scale computing.
“On top of their mathematical benefits, higher order lets us increase the arithmetic intensity,” Kolev explains. “We can dial in how much we do with the data within each processor, each element, and each integration point. In fact, higher-order methods can often run in the same amount of time as lower order due to the increased computational efficiency.” With LDRD funding, Computation and WCI researchers are characterizing and optimizing the performance of BLAST’s high-order methods on different high-performance computing systems.
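A back-of-envelope model makes the FLOP/byte trend concrete. The following is an illustrative sketch (not measured BLAST numbers, and it ignores refinements such as sum factorization): if an element operator is applied matrix-free, so that only the element’s input and output vectors move through memory while the dense operator action is computed on the fly, arithmetic intensity grows with polynomial order.

```python
# Rough model of arithmetic intensity for one matrix-free element application:
# flops of a dense ndof x ndof matrix-vector product, divided by the bytes of
# the element's input and output vectors (double precision).

def element_intensity(order, dim=3):
    """Estimated FLOP/byte for one element of the given polynomial order."""
    ndof = (order + 1) ** dim      # nodal degrees of freedom per element
    flops = 2 * ndof * ndof        # multiplies and adds of a dense matvec
    bytes_moved = 2 * ndof * 8     # read input vector + write output vector
    return flops / bytes_moved

for p in (1, 2, 4):
    print(f"order {p}: {element_intensity(p):.1f} FLOP/byte")
```

Under this model the intensity scales with the per-element degree-of-freedom count, which is why raising the order gives the researchers a “dial” for how much arithmetic is performed per byte of data moved.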
High-order methods remain an area of opportunity for the researchers. Notes Kolev, “At higher orders, unexpected things can happen. A lot of questions people thought were settled suddenly became very interesting.”