Livermore Computing: Development Environment Group

Photo: Development Environment Group in front of the Sierra supercomputer

We meet the needs of today's code developers

The Development Environment Group (DEG) strives to provide a stable, usable, leading-edge parallel application development environment that significantly increases the productivity of LLNL application developers. We do this by enabling better scalable performance and by enhancing the reliability of LLNL applications.

DEG partners with its application development user community to identify user requirements and evaluate tool effectiveness. Through collaborations with vendors and other third-party software developers, DEG provides a complete environment in the most cost-effective way possible, meeting the needs of today's code developers while steering their code development to exploit emerging technologies.

DEG, part of Livermore Computing, is currently involved in the following projects and activities:

  • Compilers—Compilers for Fortran 90/95, Fortran 77, ANSI C, and C++; details about the compilers currently installed on Livermore Computing platforms are available.
  • Debuggers—See Supported Software and Computing Tools for the available debugging tools, their locations, the machines on which they run, and available documentation (if any).
  • Languages—The primary standardized languages used for scientific computing are Fortran, C, and C++. The international body responsible for standardization in information technology is ISO/IEC (the International Organization for Standardization/International Electrotechnical Commission); the US member body is INCITS (the InterNational Committee for Information Technology Standards). Within ISO/IEC, JTC 1/SC 22 manages programming languages, their environments, and system software interfaces.
  • Parallel tools—Tools are provided on most platforms so that programmers can take advantage of the parallel nature of the machines. MPI is available on all platforms; a minimal MPI example is sketched after this list. See Supported Software and Computing Tools for available parallel tools, their locations, the machines on which they run, and available documentation (if any).
  • Performance analysis tools—Various performance analysis tools provide information about memory use, hardware counter data, system resource use, and communication. The tools vary in ease of use and in how much they perturb the application. Several provide a GUI for visualization and data reporting. Data are typically examined postmortem, although some tools can also report at run time; a short hardware counter sketch follows this list. See Supported Software and Computing Tools for available performance analysis tools, their locations, the machines on which they run, and available documentation (if any).
  • Scalable I/O—Providing high-performance parallel file system and I/O library support for all major platforms at LLNL; working closely with end users on all parallel I/O issues; performing tests using locally developed tools; and collaborating with platform partners, academic researchers, and vendors to address ASC high-performance I/O needs.
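
The following is a minimal sketch of an MPI program in C, included only to illustrate the parallel environment described above. The compiler wrapper (mpicc) and launcher (srun) shown in the comments are typical of Linux clusters but are assumptions here; the exact commands vary by platform and resource manager, so consult the platform documentation.

    /* hello_mpi.c - minimal MPI sketch (not an LC-specific program).
     * Typical build and run on a Linux cluster (commands vary by platform):
     *   mpicc -O2 hello_mpi.c -o hello_mpi
     *   srun -n 4 ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this task's rank          */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of tasks     */

        printf("Hello from task %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut down the MPI runtime */
        return 0;
    }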
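
The hardware counter data mentioned under performance analysis tools is commonly accessed through PAPI. The sketch below assumes the PAPI library and the PAPI_TOT_CYC preset are available on the target machine (preset availability varies by hardware); it simply counts the cycles spent in a small loop.

    /* papi_cycles.c - hedged PAPI sketch; link with -lpapi (paths vary by system). */
    #include <papi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int event_set = PAPI_NULL;
        long long cycles = 0;
        double x = 0.0;

        /* Initialize PAPI and build an event set with one preset counter. */
        if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
            fprintf(stderr, "PAPI initialization failed\n");
            return EXIT_FAILURE;
        }
        PAPI_create_eventset(&event_set);
        PAPI_add_event(event_set, PAPI_TOT_CYC); /* total CPU cycles preset */

        PAPI_start(event_set);
        for (int i = 0; i < 1000000; i++)        /* region being measured   */
            x += (double)i * 0.5;
        PAPI_stop(event_set, &cycles);

        printf("result=%g  total cycles=%lld\n", x, cycles);
        return 0;
    }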

Team

Name (e-mail @llnl.gov): Assignment/Interests

Scott Futral (futral): DEG group leader; general environment support, Run/Proxy, findentry, flint
Dong Ahn (dahn): TotalView support and development projects (including the scalability project and BGQ port); scalable debugging tools (including STAT), techniques, and infrastructure (including LaunchMON and Fast Global File Status); hardware performance counters (e.g., PAPI); massively parallel loading (SPINDLE); next-generation resource manager
Blaise Barney (blaiseb): HPC training/workshops, MPI, OpenMP, Pthreads, TotalView, ASC Alliances
Greg Becker (becker33): Spack, package management for HPC, tools productization
Chris Chambreau (chcham): Memory, profiling, and MPI tracing tool support, including mpiP, TAU, Vampir/VampirTrace, and memP
Bor Chan, CASC (chan1): Benchmarking, performance analysis
Chris Earl (earl2): C++ and OpenMP standards support, LLVM compilers and tools
Todd Gamblin, CASC (gamblin2): Performance measurement and analysis; distributed clustering; scalable in-situ analysis techniques; load balance for AMR; run-time systems; collaborative development tools (adept.llnl.gov, Confluence, JIRA, Greenhopper, source hosting, code review, build & test, etc.); CMake and other build systems
Alfredo Gimenez (gimenez1): Data analysis and engineering R&D, facility/application monitoring, hardware performance counters
Elsa Gonsiorowski (gonsie): Application file I/O and parallel file systems support
John Gyllenhaal (gyllen): Valgrind support, linker and POE tricks, compilers, Tool Gear, Qt, DPCL, C/C++
Ian Karlin (karlin1): Performance analysis and optimization, benchmarking, LULESH point of contact, Shocx LDRD CS lead
Greg Lee (lee218): Parallel tool development, Stack Trace Analysis Tool (STAT), Python support, Intel software support (compilers, Inspector, VTune Amplifier, and Pin), math library support (MKL, ACML, PETSc, and FFTW)
Matt LeGendre (legendre1): Performance analysis tool support, tool component support
Edgar Leon (leon): MPI libraries, exascale architectures, process/thread affinity
Marty McFadden (mcfadden8): Umpire, umap, OMPD, msr-safe, and SPINDLE support
Kathryn Mohror, CASC (mohror1): Scalable fault tolerant computing, performance measurement and analysis, scalable I/O systems
Adam Moody (moody20): MPI and communication performance for Linux, MPI compiler scripts, Dotkit
Ramesh Pankajakshan (pankajakshan1): Porting and optimization of field simulation codes to general-purpose GPUs using CUDA, RAJA, and MPI; performance analysis and benchmarking
David Poliakoff (poliakoff1): C++ template metaprogramming, performance tools, parallel programming abstractions (RAJA), application code support
Barry Rountree, CASC (rountree4): Statistical and algorithmic debugging, power-aware supercomputing
Danielle Sikich (sikich1): mpiFileUtils, UnifyCR, data management tools, distributed file systems

Local Vendor Support

Max Katz (katz12): NVIDIA support expert
Roy Musselman (musselman4): IBM application analyst, BGQ compiler and MPI support