
Research & Development
Promoting innovation through cutting-edge R&D

Our responsibility for enabling science goes beyond helping to solve today’s problems; we must be prepared to solve tomorrow’s known and anticipated challenges through research and advanced development. We conduct collaborative scientific investigations that require the power of high performance computers and the efficiency of modern computational methods. Our research, some of which is described here, focuses on issues that will enable the next generation of computing applications for LLNL and our partners.

Computational Sciences/Simulation

Our research explores a number of different scientific simulation fields, most of which have particular significance to LLNL programs (e.g., high-energy-density physics). Other scientific simulation work showcases the Lab’s high performance computing capabilities in collaborative efforts with scientists at other institutions.


Computational Biology

Computational Fluid Dynamics

Computational Seismology

Materials Science/Molecular Dynamics

Plasma Physics

Power Grid

Transport Methods

Ultra-short Pulse Laser Propagation

Cyber Security

We’re conducting research and developing game-changing tools to meet the nation’s top priorities to enhance security in a highly interconnected world. In particular, our work is focused on developing a distributed approach to real-time situational awareness; advancing predictive and scalable simulations to design and analyze complex systems, such as space system protection and large-scale cyber defense, where full-scale experiments are not possible; and creating analytic methodologies and data management and fusion tools that are needed for the next generations of intelligence applications.

Reverse Engineering

Network Mapping

Secure Coding

Analysis of Industrial Control Systems

Network System Design and Engineering

Modeling and Simulation

Cyber Data Analytics

Education and Outreach

Data Analytics and Management

Data Analytics and Management is the branch of computer science that is concerned with extracting usable information from data. At LLNL, we’re working with data in many forms: text, images, videos, semantic graphs, and more. This data may be “at rest” in files or databases, or “in motion” as it streams in from sensors or other live sources. Our informatics research aims to gain insight from data that is very large, geographically distributed, complex, fast moving, or some combination of these characteristics. Applications for this work span a wide range of LLNL missions, including energy security and efficiency, biosecurity, computer security, and climate change.
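Much of this work hinges on processing data "in motion" incrementally, without ever holding the full stream in memory. As a generic illustration (not a specific LLNL tool), the sketch below uses Welford's online algorithm to maintain running statistics of a stream one value at a time:

```python
# Minimal sketch: running statistics over streaming data with Welford's
# online algorithm, so individual values never need to be stored.

class RunningStats:
    """Incrementally tracks count, mean, and variance of a stream."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        """Sample variance of everything seen so far."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

# Hypothetical sensor readings arriving one at a time.
stats = RunningStats()
for reading in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(reading)

print(stats.mean)      # 5.0
print(stats.variance)  # 4.571... (sample variance, 32/7)
```

Because each update costs O(1) time and memory, the same pattern applies whether the stream is a single sensor or a large distributed collector.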

Machine Learning

Analysis of Large Graphs

Network Analysis and Mapping

Data Analytics for Facilities

Text Analysis

Extreme Computing

Hardware Architecture

We are determining how to build future generations of supercomputers. We are actively exploring issues such as possible uses of persistent memory (non-volatile random access memory, or NVRAM) and methods to reduce power consumption or increase reliability while maintaining (or even reducing) cost and maintaining (or improving) performance. We also interact closely with industry through local initiatives and programs such as FastForward. Throughout these activities, we combine unique research capabilities with our proven track record of building and deploying reliable and productive large-scale systems.

Hardware Testbeds

I/O, Networking, and Storage

Disk- and tape-delivered I/O bandwidths are being rapidly outpaced by capacity increases, which means valuable processor time is being wasted while waiting for data delivery. For extreme-scale machines to be productive, bandwidth challenges throughout the entire I/O stack must be addressed. We’re working on techniques and technologies that leverage node-local or near-node storage, refactor parallel file systems, and evolve tertiary storage software to enable efficient extreme-scale computing environments.
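One common pattern behind node-local storage is burst-buffer-style staging: a checkpoint is written quickly to local NVRAM or SSD, then drained to the slower parallel file system in the background while computation resumes. The sketch below is a toy illustration of that idea, not LLNL software; the paths are hypothetical and a Python thread stands in for a real transfer agent:

```python
# Illustrative sketch of I/O staging: write to fast node-local storage,
# then drain to the parallel file system asynchronously.

import shutil
import tempfile
import threading
from pathlib import Path

def write_checkpoint(data: bytes, local_dir: Path, pfs_dir: Path) -> threading.Thread:
    """Write to node-local storage, then copy to the parallel FS in the background."""
    local_dir.mkdir(parents=True, exist_ok=True)
    pfs_dir.mkdir(parents=True, exist_ok=True)
    local_file = local_dir / "ckpt.bin"
    local_file.write_bytes(data)  # fast: node-local tier

    def drain():
        shutil.copy2(local_file, pfs_dir / "ckpt.bin")  # slow: shared disk

    t = threading.Thread(target=drain)
    t.start()  # computation can continue while the copy runs
    return t

# Usage: temporary directories stand in for the two storage tiers.
with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)
    t = write_checkpoint(b"simulation state", tmp / "local", tmp / "pfs")
    t.join()  # wait only when the drained copy is actually needed
    print((tmp / "pfs" / "ckpt.bin").read_bytes())  # b'simulation state'
```

The design point is that the application blocks only for the fast local write; the expensive transfer overlaps with useful work.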

Data Management and Movement

Non-volatile Storage

Mesh Management

We’re developing fast and scalable algorithms for solving partial differential equations that dynamically adjust the computation mesh in order to improve accuracy and make the best use of computational resources. We research new methods for block-structured adaptive mesh refinement and high-order unstructured curvilinear mesh optimization, targeting applications with moving and deforming meshes.  Our algorithms can be used to accurately represent the moving and deforming geometry as well as to resolve internally moving features such as material interfaces, shocks, and reaction fronts.
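The flag-and-refine step at the heart of adaptive mesh refinement can be illustrated in one dimension: cells where the solution gradient exceeds a threshold are split, concentrating resolution near sharp features. This is a toy sketch of the general idea, not our production AMR codes, which operate on patches across multiple levels:

```python
# Minimal 1D sketch of AMR's "flag and refine" step: split any cell whose
# estimated solution gradient exceeds a threshold.

def refine(cells, values, threshold):
    """cells: list of (left, right) intervals; values: cell-centered data."""
    new_cells = []
    for i, (lo, hi) in enumerate(cells):
        # Estimate the local gradient from neighboring cell averages.
        left = values[max(i - 1, 0)]
        right = values[min(i + 1, len(values) - 1)]
        grad = abs(right - left) / (hi - lo)
        if grad > threshold:
            mid = 0.5 * (lo + hi)
            new_cells += [(lo, mid), (mid, hi)]  # refine: split the cell
        else:
            new_cells.append((lo, hi))           # keep the coarse cell
    return new_cells

# A step-like profile: only the cells adjacent to the jump get refined.
cells = [(i / 10, (i + 1) / 10) for i in range(10)]
values = [0.0] * 5 + [1.0] * 5
print(len(refine(cells, values, threshold=1.0)))  # 12: two cells near the jump split
```

In a real solver this step repeats each time the mesh is adapted, with coarsening, level nesting, and data transfer between levels handled alongside refinement.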

Unstructured Finite Elements

Numerical PDEs/High-Order Discretization Modeling

We’re developing next-generation numerical methods to enable more accurate and efficient simulations of physical phenomena such as wave propagation, turbulent incompressible and high-speed reacting flows, shock hydrodynamics, fluid–structure interactions, and kinetic simulation. Our application-driven research is focused on designing, analyzing, and implementing new high-order finite difference, finite volume, and finite element discretization algorithms, with an emphasis on increased robustness, parallel scalability, and better utilization of modern computer architectures.
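To make "high-order" concrete, the textbook comparison below contrasts second- and fourth-order central finite difference approximations of a derivative; at the same grid spacing, the higher-order stencil is dramatically more accurate. This is a generic illustration, not code from our projects:

```python
# Comparing discretization orders: a fourth-order central difference is far
# more accurate than the second-order formula at the same grid spacing h.

import math

def d1_second_order(f, x, h):
    """Second-order central difference: error shrinks like h^2."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_fourth_order(f, x, h):
    """Fourth-order central difference: error shrinks like h^4."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

h = 0.1
exact = math.cos(1.0)  # d/dx sin(x) at x = 1
err2 = abs(d1_second_order(math.sin, 1.0, h) - exact)
err4 = abs(d1_fourth_order(math.sin, 1.0, h) - exact)
print(err2)  # ~9e-4
print(err4)  # ~2e-6
```

The same trade-off drives high-order finite volume and finite element research: more accuracy per grid point, at the cost of wider stencils and more work per cell, which often maps well onto modern architectures.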

Parallel Software Development Tools

We’re working on a new generation of tools to help our users with exascale machine bottlenecks. Our research emphasizes performance analysis and code correctness and aims to address these main challenges: seamless integration with programming models, scalability, automatic analysis, detection of inefficient resource usage, and tool modularity.

Debugging and Correctness Tools

Middleware for Parallel Performance Tools

Job Scheduling & Resource Management

Tuning at Runtime

Programming Models and Languages

Programming models and languages are essential for expressing our computational problems in ways that take best advantage of the massive capability of current and future computers at LLNL. Our research efforts extend and improve existing programming models, such as OpenMP and MPI. Using tools like the ROSE compiler technology and Babel, we’re researching new ways to transform, analyze, optimize, combine, and interoperate languages.

Software Quality Assurance

Our SQA experts provide guidance on SQA policy for the entire LLNL community and ensure compliance with DOE and NNSA orders and policies. We emphasize what has to be done, not how to do it. By taking a risk-based graded approach to LLNL’s unique and varied software projects, we’re able to maximize each project’s effectiveness and assist in identification and mitigation of project risks. Our team consults throughout the DOE complex, and we provide scientists and researchers with simple, user-friendly tools, templates, checklists, classes, and other related resources.

Improvements of Large Scientific Software

Solvers

We’re developing algorithms and software to enable the scalable solution of equations central to large-scale science simulations. Our research involves developing new mathematics and computing techniques, with a major focus on methods (e.g., multilevel methods) suitable for the next generation of extreme-scale supercomputers.
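The multilevel idea can be sketched with a two-grid cycle for the 1D Poisson equation -u'' = f: a few smoothing sweeps damp high-frequency error on the fine grid, and a coarse-grid correction removes the smooth remainder that smoothing handles poorly. This is a toy illustration (injection restriction, an approximate Jacobi coarse solve), not production solver code:

```python
# Two-grid sketch of multigrid for -u'' = f on [0,1] with zero boundaries.

import math

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2*u[i] - u[i-1] - u[i+1]) / h**2
    return r

def jacobi(u, f, h, sweeps, w=2/3):
    """Weighted Jacobi: an effective smoother of high-frequency error."""
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = (1 - w)*u[i] + w * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
        u = new
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                    # pre-smooth
    r = residual(u, f, h)
    rc = [r[2*i] for i in range((len(u) + 1) // 2)]  # restrict by injection
    ec = jacobi([0.0] * len(rc), rc, 2*h, sweeps=50) # approximate coarse solve
    e = [0.0] * len(u)                               # prolong: linear interpolation
    for i in range(len(ec)):
        e[2*i] = ec[i]
    for i in range(1, len(u) - 1, 2):
        e[i] = 0.5 * (e[i-1] + e[i+1])
    u = [ui + ei for ui, ei in zip(u, e)]            # coarse-grid correction
    return jacobi(u, f, h, sweeps=3)                 # post-smooth

# Manufactured problem with known solution u(x) = sin(pi x).
n, h = 33, 1/32
x = [i * h for i in range(n)]
f = [math.pi**2 * math.sin(math.pi * xi) for xi in x]
u = [0.0] * n
for _ in range(10):
    u = two_grid(u, f, h)
err = max(abs(ui - math.sin(math.pi * xi)) for ui, xi in zip(u, x))
print(err)  # small: near the discretization error after 10 cycles
```

Real multigrid recurses this cycle over many levels, which is what makes the cost scale linearly with problem size.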

Multigrid and Multilevel Solvers

Nonlinear Solvers

Time Integration

Optimization Methods

System Software

We’re creating an LLNL commodity cluster system software environment based on Linux and open-source software. We use the Red Hat Enterprise Linux distribution, stripping out the modules we don’t need and adding and modifying components as required. Working in open source allows for important HPC customizations and builds in-house expertise. Having in-house software developers is necessary to quickly resolve problems (especially at scale) on our cutting-edge hardware without having to wait for vendors. The environment includes Linux kernel modifications, cluster management tools, monitoring and failure detection, resource management, authentication and access control, and parallel file system software (detailed elsewhere). These clusters provide users with a production solution capable of running MPI jobs at scale.

Cluster Management Tools

Resource Management

User Productivity Tools

Uncertainty Quantification

We’re developing techniques to quantify numerical error in multiphysics simulations. Understanding approximation error is an important component of a broader UQ strategy, and we’re investigating both adjoint and forward propagation methods. We’re also developing mathematical and statistical techniques to quantify the different types of uncertainty (aleatory, epistemic, and model form) present in multiphysics simulation models. These non-intrusive techniques include parameter screening, global sensitivity analysis, response surface analysis, and Bayesian inference; many of them have been incorporated into an open-source software package called PSUADE. In addition, we’re investigating hybrid UQ methodologies that blend the more rigorous and efficient intrusive UQ methods with non-intrusive and semi-intrusive methods at the physics-module level. This flexible methodology facilitates a plug-and-play concept for in-situ UQ and sensitivity analysis that will be useful for high-fidelity stochastic multiphysics simulations. We’re also researching and developing stochastic data assimilation methods to quantify uncertainties associated with high-dimensional stochastic source inversion; these methods are useful in applications such as seismic and power grid analysis. Here we’re exploring efficient nonlinear and non-Gaussian methods such as kernel principal component analysis and adjoint-based Bayesian inference.
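Non-intrusive forward propagation treats the simulation as a black box: sample the uncertain inputs from their distributions, run the model on each sample, and estimate statistics of the output quantity of interest from the ensemble. The sketch below uses a hypothetical one-line heat-conduction model as a stand-in for a real simulation code:

```python
# Minimal sketch of non-intrusive forward UQ by Monte Carlo sampling.

import random
import statistics

def model(k):
    """Black-box stand-in for a simulation: steady temperature drop across a
    slab with thermal conductivity k (hypothetical quantity of interest)."""
    flux, thickness = 100.0, 0.1
    return flux * thickness / k  # Fourier's law: dT = q * L / k

# Treat conductivity as uncertain, k ~ N(5.0, 0.5), and propagate samples.
random.seed(0)
samples = [model(random.gauss(5.0, 0.5)) for _ in range(10_000)]

print(statistics.mean(samples))   # mean temperature drop, ≈ 2.0
print(statistics.stdev(samples))  # output spread induced by uncertainty in k
```

Because the model is only ever called, never modified, the same wrapper works around any simulation code; the cost is the number of model runs, which is what drives research into more sample-efficient and intrusive alternatives.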

Error Estimation

Non-intrusive UQ Methods

Hybrid and Semi-intrusive UQ Models

UQ Software

Stochastic Data Assimilation

Visualization and Scientific Data Analysis

We provide tools and technology that enable scientists and engineers to understand and exploit their data, whether it comes from large-scale simulations or extreme-scale sensor technologies. We created and continue to develop VisIt, a full-featured, cutting-edge visualization and analysis application that scales to tens of thousands of cores and is capable of analyzing and visualizing extreme-scale simulations. In addition, we have a world-class research group that is advancing the state of the art in extreme-scale data streaming, uncertainty visualization and analysis, topological and feature-based analysis, and high performance video processing and analysis for multi-gigapixel sensors.

Compression Techniques

Feature Detection/Extraction

Image Processing

Multiresolution Algorithms

Scientific Data Management

Scientific Visualization

Streaming Data Analysis

Video Processing