SUNDIALS: SUite of Nonlinear and DIfferential/ALgebraic Equation Solvers

ARKode

ARKode is a solver library that provides adaptive-step time integration of the initial value problem for stiff, nonstiff, and multi-rate systems of ordinary differential equations (ODEs) given in linearly implicit form M y’ = fE(t,y) + fI(t,y), where M is a given nonsingular matrix (possibly time dependent). The right-hand side function is partitioned into two components: fE(t,y), containing the “slow” time-scale components to be integrated explicitly, and fI(t,y), containing the “fast” time-scale components to be integrated implicitly. The methods used in ARKode are adaptive-step additive Runge-Kutta methods, defined by combining two complementary Runge-Kutta methods: one explicit (ERK) and one diagonally implicit (DIRK). Only the components in fI(t,y) must be solved implicitly, allowing for splittings tuned for use with optimal implicit solvers. ARKode is packaged with a wide array of built-in methods, including adaptive explicit methods of orders 2-6, adaptive implicit methods of orders 2-5, and adaptive implicit-explicit (IMEX) methods of orders 3-5.

The nonlinear systems arising within the implicit integrators are solved approximately at each integration step using a modified Newton method, an inexact Newton method, or an accelerated fixed-point solver. For the Newton-based methods and the serial or threaded NVECTOR modules in SUNDIALS, ARKode provides both direct (dense, band, or sparse) and preconditioned Krylov iterative (GMRES, BiCGStab, TFQMR, FGMRES, PCG) linear solvers. When used with one of the distributed parallel NVECTOR modules, including PETSc and hypre vectors, or a user-provided vector data structure, only the Krylov solvers are available, although a user may supply their own linear solver for any data structure if desired. For the serial vector structure, there is a banded preconditioner module called ARKBANDPRE for use with the Krylov solvers, while for the distributed-memory parallel structure there is a preconditioner module called ARKBBDPRE which provides a band-block-diagonal preconditioner.
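
For orientation, here is a minimal sketch of driving ARKode from C, assuming the v2.x-era interface and the serial NVECTOR module; the scalar test problem y' = -y, the tolerances, and the output time are illustrative assumptions, and error checking is omitted for brevity:

    #include <stdio.h>
    #include <arkode/arkode.h>            /* ARKodeCreate, ARKodeInit, ARKode */
    #include <nvector/nvector_serial.h>   /* serial N_Vector */

    /* explicit right-hand side fE for the assumed test problem y' = -y */
    static int fe(realtype t, N_Vector y, N_Vector ydot, void *user_data)
    {
      NV_Ith_S(ydot, 0) = -NV_Ith_S(y, 0);
      return 0;                           /* 0 signals success */
    }

    int main(void)
    {
      realtype t = 0.0, tout = 1.0;
      N_Vector y = N_VNew_Serial(1);
      NV_Ith_S(y, 0) = 1.0;               /* initial condition y(0) = 1 */

      void *arkode_mem = ARKodeCreate();
      ARKodeInit(arkode_mem, fe, NULL, 0.0, y);    /* fI = NULL: purely explicit */
      ARKodeSStolerances(arkode_mem, 1.0e-6, 1.0e-10);

      ARKode(arkode_mem, tout, y, &t, ARK_NORMAL); /* advance to tout */
      printf("y(%g) = %g\n", (double) t, (double) NV_Ith_S(y, 0));

      ARKodeFree(&arkode_mem);
      N_VDestroy(y);
      return 0;
    }

Supplying both fE and fI instead selects an IMEX method, with the implicit component handled by the nonlinear and linear solvers described above.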

For use with Fortran applications, a set of Fortran/C interface routines, called FARKode, is also supplied. These are written in C, but assume that the user’s calling program and all user-supplied routines are written in Fortran.

See the Software page for downloads and documentation.

ARKode Release History

What’s new in v.2.2.1?

  • Fixed a bug in the CUDA NVector where the N_VInvTest operation could write beyond the allocated vector data
  • Fixed the library installation path for multiarch systems. This fix changes the default library installation path from CMAKE_INSTALL_PREFIX/lib to CMAKE_INSTALL_PREFIX/CMAKE_INSTALL_LIBDIR. CMAKE_INSTALL_LIBDIR is set automatically, but is available as a CMake option that can be modified.

What’s new in v.3.0.0-dev.2?

Version 3.0.0-dev.2 is a third step toward the full 3.0.0 release, which should be complete by the end of 2018. This development release includes all changes from v.2.2.0 in addition to those listed below. The 3.0.0 release will include a full redesign of our nonlinear solver interfaces, allowing for encapsulation of the nonlinear solvers and ease in interfacing with outside nonlinear solver packages; streamlined linear solver interfaces; a restructuring of the ARKode package to allow for more time-stepping options; and the addition of a two-rate explicit/explicit integrator.

    • New features and/or enhancements

    ARKode, CVODES, and IDAS have been updated to use the SUNNONLINSOL nonlinear solver API.

    The direct and iterative linear solver interfaces in ARKode, CVODE, IDA, and KINSOL have been merged into a single unified linear solver interface that supports any valid SUNLINSOL module. The unified interface is very similar to the previous DLS and SPILS interfaces. To minimize challenges in user migration, the previous DLS and SPILS routines for CVODE, IDA, and KINSOL may still be used; these will be deprecated in future releases, so we recommend that users migrate to the unified interface soon. Additionally, we note that Fortran users will need to enlarge their iout array of optional integer outputs and update the indices that they query for certain linear-solver-related statistics.

    The names of all constructor routines for SUNDIALS-provided SUNLinSol implementations have been updated to follow the naming convention SUNLinSol_*, where * is the name of the linear solver (e.g., Dense, KLU, SPGMR, PCG). Solver-specific “set” routine names have been similarly standardized. To minimize challenges in user migration to the new names, the previous routine names may still be used; these will be deprecated in future releases, so we recommend that users migrate to the new names soon (see the sketch below).

    The ARKode library has been entirely rewritten to support a modular approach to one-step methods, which should allow rapid research and development of novel integration methods without affecting existing solver functionality.
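
    As an illustrative sketch of the renaming, using the dense solver as the example (y is a template N_Vector and A a template SUNMatrix from an existing setup):

        #include <sunmatrix/sunmatrix_dense.h>
        #include <sunlinsol/sunlinsol_dense.h>

        SUNMatrix A = SUNDenseMatrix(N, N);
        /* previous constructor name, still accepted but deprecated:
           SUNLinearSolver LS = SUNDenseLinearSolver(y, A);           */
        SUNLinearSolver LS = SUNLinSol_Dense(y, A);   /* new SUNLinSol_* name */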

    ARKode’s dense output infrastructure has been improved to support higher-degree Hermite polynomial interpolants (up to degree 5) over the last successful time step.
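
    Continuing the earlier C sketch, dense output can then be sampled anywhere within the last step. The routine names below (ARKodeSetDenseOrder, ARKodeGetDky) are the pre-restructuring ones and may change in the 3.0.0 redesign; t_interp and ytmp are assumed to be a time inside the last step and a previously created N_Vector:

        ARKodeSetDenseOrder(arkode_mem, 5);          /* request the degree-5 interpolant */
        /* ... after a successful ARKode() step ... */
        ARKodeGetDky(arkode_mem, t_interp, 0, ytmp); /* k = 0: interpolated solution y(t_interp) */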

What’s new in v.2.2.0?

    • Added hybrid MPI/CUDA and MPI/RAJA vectors to allow use of more than one MPI rank when using a GPU system.  The vectors assume one GPU device per MPI rank.

    • Changed the name of the RAJA NVector library to libsundials_nveccudaraja.lib from libsundials_nvecraja.lib to better reflect that we only support CUDA as a backend for RAJA currently.

    • Increased CMake minimum version to 3.1.3

    • Several changes were made to the build system.

      • If MPI is enabled and MPI compiler wrappers are not set, the build system will check if CMAKE_<language>_COMPILER can compile MPI programs before trying to locate and use an MPI installation.

      • The native CMake FindMPI module is now used to locate an MPI installation.

      • The options for setting MPI compiler wrappers and the executable for running MPI programs have been updated to align with those in the native CMake FindMPI module. This update included changing MPI_MPICC to MPI_C_COMPILER, MPI_MPICXX to MPI_CXX_COMPILER, combining MPI_MPIF77 and MPI_MPIF90 to MPI_Fortran_COMPILER, and changing MPI_RUN_COMMAND to MPIEXEC.

      • When a Fortran name-mangling scheme is needed (e.g., LAPACK_ENABLE is ON) the build system will infer the scheme from the Fortran compiler. If a Fortran compiler is not available or the inferred or default scheme needs to be overridden, the advanced options SUNDIALS_F77_FUNC_CASE and SUNDIALS_F77_FUNC_UNDERSCORES can be used to manually set the name-mangling scheme and bypass trying to infer the scheme.

      • Additionally, parts of the main CMakeLists.txt file were moved to new files in the src and example directories to make the CMake configuration file structure more modular.

What’s new in v.3.0.0-dev.1?

    No changes were made in this package for this development release beyond those listed for v.2.1.2.

What’s new in v.2.1.2?

    • Updated the minimum required version of CMake to 2.8.12 and enabled using rpath by default to locate shared libraries on OSX.
    • Fixed a Windows-specific problem where ‘sunindextype’ was not correctly defined when using 64-bit integers for the SUNDIALS index type. On Windows ‘sunindextype’ is now defined as the MSVC basic type ‘__int64’.

    • Added sparse SUNMatrix “Reallocate” routine to allow specification of the nonzero storage.

    • Updated the KLU SUNLinearSolver module to set constants for the two reinitialization types, and fixed a bug in the full reinitialization approach where the sparse SUNMatrix pointer would go out of scope on some architectures.

    • Updated the “ScaleAdd” and “ScaleAddI” implementations in the sparse SUNMatrix module to handle more efficiently the case where the target matrix contains sufficient storage for the sum but has the wrong sparsity pattern. The sum now occurs in-place, by performing the sum backwards in the existing storage. However, it is still more efficient if the user-supplied Jacobian routine allocates storage for the sum ‘I + gamma*J’ manually (with zero entries if needed).

    • Changed the LICENSE install path to ‘instdir/include/sundials’.

What’s new in v.3.0.0-dev?

    Version 3.0.0-dev is a first step toward the next major release, which should be complete by the end of 2018. This development release includes all changes from v.2.1.1 in addition to those listed below. The release will include a full redesign of our nonlinear solver interfaces, allowing for encapsulation of the nonlinear solvers and ease in interfacing with outside nonlinear solver packages.

    • New features and/or enhancements
      • Three fused vector operations and seven vector array operations have been added to the NVECTOR API. These optional operations are intended to increase data reuse in vector operations, reduce parallel communication on distributed memory systems, and lower the number of kernel launches on systems with accelerators. The new operations are N_VLinearCombination, N_VScaleAddMulti, N_VDotProdMulti, N_VLinearSumVectorArray, N_VScaleVectorArray, N_VConstVectorArray, N_VWrmsNormVectorArray, N_VWrmsNormMaskVectorArray, N_VScaleAddMultiVectorArray, and N_VLinearCombinationVectorArray. If any of these operations are defined as NULL in an NVECTOR implementation, the NVECTOR interface will automatically call standard NVECTOR operations as necessary. Details on the new operations can be found in the user guide chapter on the NVECTOR API; a short usage sketch follows this list.
      • Several changes were made to the build system.
        • If MPI is enabled and MPI compiler wrappers are not set, the build system will check if CMAKE_<language>_COMPILER can compile MPI programs before trying to locate and use an MPI installation. The native CMake FindMPI module is now used to locate an MPI installation.
        • The options for setting MPI compiler wrappers and the executable for running MPI programs have been updated to align with those in the native CMake FindMPI module. This included changing MPI_MPICC to MPI_C_COMPILER, MPI_MPICXX to MPI_CXX_COMPILER, combining MPI_MPIF77 and MPI_MPIF90 to MPI_Fortran_COMPILER, and changing MPI_RUN_COMMAND to MPIEXEC.
        • When a Fortran name-mangling scheme is needed (e.g., LAPACK_ENABLE is ON) the build system will infer the scheme from the Fortran compiler. If a Fortran compiler is not available or the inferred or default scheme needs to be overridden, the advanced options SUNDIALS_F77_FUNC_CASE and SUNDIALS_F77_FUNC_UNDERSCORES can be used to manually set the name-mangling scheme and bypass trying to infer the scheme.
        • Parts of the main CMakeLists.txt file were moved to new files in the src and example directories to make the CMake configuration file structure more modular.
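
    As a usage sketch of one of the fused operations under the development-release API, the following computes z = 2*x0 - x1 + 0.5*x2 in a single call with serial vectors; the vector length and coefficients are illustrative:

        #include <sundials/sundials_types.h>   /* realtype */
        #include <nvector/nvector_serial.h>

        N_Vector x0 = N_VNew_Serial(100), x1 = N_VNew_Serial(100);
        N_Vector x2 = N_VNew_Serial(100), z  = N_VNew_Serial(100);
        N_VConst(1.0, x0);  N_VConst(2.0, x1);  N_VConst(4.0, x2);

        realtype c[3] = {2.0, -1.0, 0.5};
        N_Vector X[3] = {x0, x1, x2};
        int retval = N_VLinearCombination(3, c, X, z);   /* returns 0 on success */

    If a vector implementation does not supply this operation (i.e., it is NULL), the interface falls back to equivalent standard operations, per the note above.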

What’s new in v.2.1.1?

    • Fixed a potential memory leak in the SPGMR and SPFGMR linear solvers: if “Initialize” was called multiple times then the solver memory was reallocated (without being freed).
    • Fixed C++11 compiler errors/warnings about incompatible use of string literals.

    • Updated KLU SUNLinearSolver module to use a typedef for the precision-specific solve function to be used (to avoid compiler warnings).

    • Added missing typecasts for some (void*) pointers (again, to avoid compiler warnings).

    • Bugfix in sunmatrix_sparse.c where we had used ‘int’ instead of ‘sunindextype’ in one location.

    • Added missing #include <stdio.h> in NVECTOR and SUNMATRIX header files.

    • Fixed an indexing bug in the CUDA NVECTOR implementation of N_VWrmsNormMask and revised the RAJA NVECTOR implementation of N_VWrmsNormMask to work with mask arrays using values other than zero or one. Replaced doubles with realtypes in the RAJA vector test functions.

    • Fixed compilation issue with GCC 7.3.0 and Fortran programs that do not require a SUNMatrix or SUNLinearSolver module (e.g. iterative linear solvers, explicit methods in ARKode, functional iteration in CVODE, etc.).

    • In ARKode:

      • Fixed a minor bug in the ARKReInit routine, where a flag was incorrectly set to indicate that the problem had been resized (instead of just re-initialized).

      • Added missing prototype for ARKSpilsGetNumMTSetups.

What’s new in v.2.1.0?

    • New features and/or enhancements
      • Added NVECTOR print functions that write vector data to a specified file (e.g., N_VPrintFile_Serial).
      • Added ‘make test’ and ‘make test_install’ options to the build system for testing SUNDIALS after building with ‘make’ and installing with ‘make install’ respectively.
      • Added a “Changes in …” section (covering the latest version) to the introduction of all User Guides.

What’s new in v.2.0.0?

    • New features and/or enhancements
      • New linear solver API and interfaces for all SUNDIALS packages and linear solvers. The goal of this redesign was to provide more encapsulation, ease interfacing of custom linear solvers, and improve interoperability with external linear solver libraries (a sketch using the new objects appears after this section’s list).
        • Added generic SUNMATRIX module with three provided implementations: dense, banded, and sparse.  These implementations replicate previous SUNDIALS Dls and Sls matrix structures in a single object-oriented API.
        • Added example problems demonstrating use of generic SUNMATRIX modules.
        • Added generic SUNLINEARSOLVER module with eleven provided implementations: dense, banded, LAPACK dense, LAPACK band, KLU, SuperLU_MT, SPGMR, SPBCGS, SPTFQMR, SPFGMR, and PCG.  These implementations replicate previous SUNDIALS generic linear solvers in a single object-oriented API.
        • Added example problems demonstrating use of generic SUNLINEARSOLVER modules.
        • Expanded package-provided direct linear solver (Dls) interfaces and scaled, preconditioned, iterative linear solver (Spils) interfaces to utilize generic SUNMATRIX and SUNLINEARSOLVER objects.
        • Removed package-specific, linear solver-specific, solver modules (e.g. CVDENSE, KINBAND, IDAKLU, ARKSPGMR) since their functionality is entirely replicated by the generic Dls/Spils interfaces and SUNLINEARSOLVER/SUNMATRIX modules.  The exception is CVDIAG, a diagonal approximate Jacobian solver available to CVODE and CVODES.
        • Converted all SUNDIALS example problems to utilize new generic SUNMATRIX and SUNLINEARSOLVER objects, along with updated Dls and Spils linear solver interfaces.
        • Added Spils interface routines to ARKode, CVODE, CVODES, IDA and IDAS to allow specification of a user-provided “JTSetup” routine. This change supports users who wish to set up data structures for the user-provided Jacobian-times-vector (“JTimes”) routine, so that the cost of one JTSetup call per Newton iteration can be amortized across multiple JTimes calls.
      • Two new NVECTOR modules added: for CUDA and RAJA support for GPU systems.  These vectors are supplied to provide very basic support for running on GPU architectures.  Users are advised that these vectors both move all data to the GPU device upon construction, and speedup will only be realized if the user also conducts the right-hand-side function evaluation on the device. In addition, these vectors assume the problem fits on one GPU. For further information about RAJA, users are referred to the web site, https://software.llnl.gov/RAJA/.
      • Addition of sunindextype option for 32- or 64-bit integer data index types within all SUNDIALS structures.
        • sunindextype is defined as int64_t or int32_t when those portable types are supported by the machine, and as long long int or int otherwise.
        • The Fortran interfaces continue to use long int for indices, except for the sparse matrix interface, which now uses the new sunindextype.
        • Includes interfaces to PETSc, hypre, SuperLU_MT, and KLU with either 64-bit or 32-bit capabilities, depending on how the user configures SUNDIALS.
      • Temporary vectors were removed from preconditioner setup and solve routines for all packages.  It is assumed that all necessary data for user-provided preconditioner operations will be allocated and stored in user-provided data structures.
      • The file include/sundials_fconfig.h was added.  This file contains SUNDIALS type information for use in Fortran programs. 
      • Added support for many xSDK-compliant build system keys.
        • The xSDK is a movement in scientific software to provide a foundation for the rapid and efficient production of high-quality, sustainable extreme-scale scientific applications. 
        • More information can be found at https://xsdk.info.
      • Added functions SUNDIALSGetVersion and SUNDIALSGetVersionNumber to get SUNDIALS release version information at runtime.

      • To avoid potential namespace conflicts, the macros defining booleantype values TRUE and FALSE have been changed to SUNTRUE and SUNFALSE respectively.

      • In build system:
        • Added separate BLAS_ENABLE and BLAS_LIBRARIES CMake variables.
        • Additional error checking during CMake configuration.
        • Fixed minor CMake bugs.
        • Renamed CMake options to enable/disable examples for greater clarity and added option to enable/disable Fortran 77 examples:
          • Changed EXAMPLES_ENABLE to EXAMPLES_ENABLE_C.
          • Changed CXX_ENABLE to EXAMPLES_ENABLE_CXX.
          • Changed F90_ENABLE to EXAMPLES_ENABLE_F90.
          • Added EXAMPLES_ENABLE_F77 option.
      • Added comments to arkode_butcher.c regarding which methods should have coefficients accurate enough for use in quad precision.
      • Corrections and additions to all User Guides.
    • Bug fixes
      • Fixed a bug in arkode_butcher.c in use of RCONST.
      • Fixed a bug in the arkInitialSetup utility routine, in the order of operations when setting up and using mass matrices, to ensure the mass matrix vector product is set up before the “msetup” routine is called.
      • Fixed ARKode printf-related compiler warnings when building SUNDIALS with extended precision.
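
    To make the new objects concrete, here is a minimal sketch of constructing a dense SUNMATRIX and SUNLINEARSOLVER and attaching them to ARKode through the expanded Dls interface; N, y, and arkode_mem come from an existing problem setup and are assumptions of the sketch:

        #include <arkode/arkode_direct.h>        /* ARKDlsSetLinearSolver */
        #include <sunmatrix/sunmatrix_dense.h>   /* SUNDenseMatrix */
        #include <sunlinsol/sunlinsol_dense.h>   /* SUNDenseLinearSolver */

        SUNMatrix A = SUNDenseMatrix(N, N);               /* template Jacobian matrix */
        SUNLinearSolver LS = SUNDenseLinearSolver(y, A);  /* dense linear solver object */
        int flag = ARKDlsSetLinearSolver(arkode_mem, LS, A);  /* attach both to ARKode */

    An analogous Spils path attaches a matrix-free iterative solver (e.g., one built with SUNSPGMR) in place of the matrix-based pair.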

What’s new in v.1.1.0?

    • New features and/or enhancements
      • Two new NVECTOR modules added: for Hypre ParVector and PETSc.
      • In vector API, added new required function, N_VGetVectorID.
      • Upgrades to sparse solver interfaces; now support CSR matrix type with KLU solver.
      • Example codes were changed from using NV_DATA macro to using N_VGetArrayPointer_* when using the native vectors shipped with SUNDIALS.
      • Implicit predictor algorithms were updated: methods 2 and 3 were improved, a new predictor approach was added, and the default choice was modified.
      • Revised handling of integer codes for specifying built-in Butcher tables: a global numbering system is still used, but methods now have #defined names to simplify the user interface.
      • Maximum number of Butcher table stages was increased from 8 to 15 to accommodate very high order methods, and an 8th-order adaptive ERK method was added.
      • Added support for the explicit and implicit methods in an additive Runge-Kutta method to utilize different stage times, solution and embedding coefficients, to support new SSP-ARK methods.
      • Extended FARKODE interface to include a routine to set scalar/array-valued residual tolerances, to support Fortran applications with non-identity mass-matrices.
      • Updated to return integers from linear solver and preconditioner ‘free’ functions.
    •  Bug fixes
      • Fix in initialization of linear solver performance counters.
      • Method and embedding coefficients for the Billington and TRBDF2 Runge-Kutta methods were swapped.
      • Fix for user specification of absolute tolerance array along with vector Resize() functionality.
      • Fix for user-supplied Butcher tables without embeddings (if fixed time steps or manual adaptivity are employed).
      • Multiple documentation updates.
      • Added missing ARKSpilsGetNumMtimesEvals() function.

What’s new in v.1.0.0?

    • Initial release