
Features and Requirements

  • PDAF is implemented in Fortran90 with some features from Fortran 2003. The standard interface also supports models that are written in other languages like C or C++.
  • The parallelization uses the MPI (Message Passing Interface) standard. In addition, the localized filters use OpenMP parallelization with features of OpenMP 4.
  • The core routines are fully independent of the model code. They can be compiled separately and can be used as a library.

Simplifying the implementation

PDAF simplifies the implementation of data assimilation systems based on existing model code in the following ways:

  1. PDAF provides fully implemented, parallelized, and optimized ensemble-based algorithms for data assimilation. Currently, these are ensemble-based Kalman filters like the LETKF, LESTKF, and EnKF methods. In addition, nonlinear filters are provided.
  2. PDAF is attached to the model source code by minimal changes to the code, which we call 'online mode'. These changes only concern the general part of the code, but not the numerics of the model. In addition, a small set of routines is required that are specific to the model or the observations to be assimilated. These routines can be implemented like routines of the model (see the code sketch after this list).
  3. PDAF is called through a well-defined standard interface. This allows one, for example, to switch between the LETKF, LESTKF, and LSEIK methods without additional coding.
  4. PDAF provides parallelization support for the data assimilation system. If your numerical model is already parallelized, PDAF enables the data assimilation system to run several model tasks in parallel within a single executable. However, PDAF can also be used without parallelization, for example to test small systems.
  5. PDAF does not require that your model can be called as a subroutine. Rather, PDAF is added to the model, and the resulting data assimilation program can be executed in essentially the same way as the model program without data assimilation.
  6. PDAF also offers an offline mode. This is for the case that you do not want to, or even cannot, modify your model source code at all. In the offline mode, PDAF is compiled separately from the model, together with the supporting routines to handle the observations. Then the model and the assimilation step are executed as separate programs. While this strategy is possible, we do not recommend it, because it is computationally less efficient.
  7. Starting with PDAF 1.13, the PDAF release also provides bindings to couple PDAF with selected real models. For a start, we provide the model binding for the MITgcm ocean circulation model.
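
To illustrate the online mode, the sketch below shows where the PDAF-related calls are added around a model's time-stepping loop. This is only a sketch: the routine names init_parallel_pdaf, init_pdaf, assimilate_pdaf, and finalize_pdaf follow the naming used in the PDAF templates, while initialize_model and integrate_model stand for the existing model code. The empty stand-in routines after 'contains' are only included so that the sketch compiles on its own.

    program model_with_pdaf
      implicit none
      integer :: step
      integer, parameter :: nsteps = 100     ! number of time steps (placeholder value)

      ! Added for PDAF: set up the communicators for the parallel ensemble
      call init_parallel_pdaf()
      ! Existing model initialization (unchanged)
      call initialize_model()
      ! Added for PDAF: select the filter, set the ensemble size, initialize the ensemble
      call init_pdaf()

      ! Existing time-stepping loop of the model
      do step = 1, nsteps
        call integrate_model(step)           ! model numerics (unchanged)
        call assimilate_pdaf()               ! added: hand the state to PDAF; analysis at observation times
      end do

      ! Added for PDAF: timing and memory summary, clean-up
      call finalize_pdaf()

    contains

      ! Empty stand-ins so the sketch compiles; in a real setup these are the
      ! model's own routines and the PDAF call-through routines of the templates.
      subroutine init_parallel_pdaf()
      end subroutine init_parallel_pdaf
      subroutine initialize_model()
      end subroutine initialize_model
      subroutine init_pdaf()
      end subroutine init_pdaf
      subroutine integrate_model(istep)
        integer, intent(in) :: istep
      end subroutine integrate_model
      subroutine assimilate_pdaf()
      end subroutine assimilate_pdaf
      subroutine finalize_pdaf()
      end subroutine finalize_pdaf

    end program model_with_pdaf

In the offline mode described in item 6, no such calls are added to the model; instead, a separate assimilation program performs the analysis step between model runs.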

Filter algorithms

PDAF provides the following algorithms for data assimilation. All filters are fully implemented, optimized, and parallelized, and they are selected through the same interface (a sketch of this selection is shown at the end of this section).

Local filters:

  • LETKF (Hunt et al., 2007)
  • LESTKF (Local Error Subspace Transform Kalman Filter, Nerger et al., 2012, see publications)
  • LEnKF (classical EnKF with covariance localization, added in version 1.12)
  • LNETF (localized Nonlinear Ensemble Transform Filter by Toedter and Ahrens (2015), added in version 1.12)
  • LSEIK (Nerger et al., 2006)

Global filters:

  • ESTKF (Error Subspace Transform Kalman Filter, Nerger et al., 2012, see publications)
  • ETKF (The implementation follows Hunt et al. (2007) but without localization, which is available in the LETKF implementation)
  • EnKF (The classical formulation with perturbed observations by Evensen (1994), Burgers et al. (1998))
  • SEEK (The original formulation by Pham et al. (1998))
  • SEIK (Pham et al. (1998a, 2001), the implemented variant is described in more detail by Nerger et al. (2005))
  • NETF (Nonlinear Ensemble Transform Filter by Toedter and Ahrens (2015), added in version 1.12)
  • PF (Particle filter with resampling, added in version 1.14)

Starting from version 1.9 of PDAF, smoother extensions are provided for the following algorithms:

  • ESTKF & LESTKF
  • ETKF & LETKF
  • EnKF
  • NETF (added in version 1.12)
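
All of these methods are selected through a single filter-type parameter that is passed through PDAF's standard interface, so switching the method requires no additional coding. The self-contained sketch below only illustrates this selection mechanism; the parameter names and numeric codes are placeholders chosen for this example, not PDAF's actual values.

    program choose_filter
      implicit none
      ! Hypothetical codes, used only in this sketch; the actual numeric values
      ! are documented with PDAF's initialization routine.
      integer, parameter :: code_lseik = 3, code_letkf = 5, code_lestkf = 7
      integer :: filtertype

      ! Changing this single value (e.g. read from a namelist or the command
      ! line) switches the assimilation method; the calling code stays the same.
      filtertype = code_lestkf

      select case (filtertype)
      case (code_lseik)
        print *, 'Running the LSEIK filter'
      case (code_letkf)
        print *, 'Running the LETKF filter'
      case (code_lestkf)
        print *, 'Running the LESTKF filter'
      case default
        print *, 'Other filter, code ', filtertype
      end select

      ! In an actual assimilation program, filtertype is passed to PDAF's
      ! initialization call together with the ensemble size and further options.
    end program choose_filter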

Requirements

  • Compiler
    To compile PDAF, a Fortran compiler is required that supports Fortran 2003. PDAF has been tested with a variety of compilers, e.g. gfortran, ifort, xlf, pgf90, and cce.
  • BLAS and LAPACK
    PDAF uses the BLAS and LAPACK libraries. On Linux, these libraries are usually available as packages. Commercial compilers usually provide the functions in optimized libraries (like MKL or ESSL).
  • MPI (optional)
    If the assimilation program is to be executed with parallelization, an MPI library (e.g. OpenMPI) is required. The assimilation program can also be compiled and run without parallelization. For this case, PDAF provides functions that mimic MPI operations for a single process (a sketch of this idea follows after this list).
  • make
    PDAF provides Makefile definitions for different compilers and operating systems.
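
The following sketch only illustrates the idea behind the single-process stand-ins mentioned above; the routine name and interface are illustrative and are not PDAF's actual routines. Without MPI, a global reduction over one process reduces to a simple copy of the local data.

    program single_process_mpi_sketch
      implicit none
      real :: local(3), global(3)

      local = (/ 1.0, 2.0, 3.0 /)
      call allreduce_sum(local, global, 3)
      print *, 'global sum with one process:', global

    contains

      ! Stand-in for an MPI reduction when running without MPI: with a single
      ! process the global sum equals the local values, so the routine only
      ! copies the data. (Illustrative only; not one of PDAF's actual routines.)
      subroutine allreduce_sum(sendbuf, recvbuf, n)
        integer, intent(in)  :: n
        real,    intent(in)  :: sendbuf(n)
        real,    intent(out) :: recvbuf(n)
        recvbuf = sendbuf
      end subroutine allreduce_sum

    end program single_process_mpi_sketch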

Test machines

PDAF has been tested on various machines with different compilers and MPI libraries. Current test machines include:

  • Linux Desktop machine, Ubuntu, ifort compiler
  • Linux Desktop machine, Ubuntu, gfortran, OpenMPI
  • Notebook Apple MacBook, Mac OS X, gfortran, OpenMPI
  • Cray XC30 and XC40, Cray compiler CCE, MPICH
  • Cray CS400, ifort, IMPI
  • NEC SX-ACE, sxf90 compiler (rev 530), sxmpi
  • Windows 10 with Cygwin, gfortran, OpenMPI

Past test machines also included:

  • IBM p575 with Power6 processors, AIX6.1, XLF compiler 12.1, ESSL library, POE parallel environment
  • IBM BladeCenter with Power6 processors, AIX5.3, XLF compilers 10.1 to 13.1, ESSL library, POE parallel environment
  • SGI Altix UltraViolet, SLES 11 operating system, ifort compiler, SGI MPT

Test cases

The regular tests use a rather small configuration with a simulated model. This model is also included in the test suite of the downloadable PDAF package. In addition, the scalability of PDAF was examined with a real implementation using the finite element ocean model (FEOM; Danilov et al., A finite-element ocean model: Principles and evaluation. Ocean Modelling 6 (2004) 125-150). In these tests, up to 4800 processor cores of a supercomputer were used (see Nerger and Hiller (2013)).

To examine PDAF's behavior with large-scale cases, experiments with the simulated model have been performed. So far, the largest case had a state dimension of 3.686·10^11, and an observation vector of size 7.373·10^9 was assimilated. For these experiments, the computations used 24576 processor cores. In this case, the distributed ensemble array occupied about 2.86 GBytes of memory on each core.

Models connected to PDAF

  • ADCIRC: finite-element ocean circulation model
  • AWI-CM: coupled atmosphere-ocean model; model binding included since PDAF V1.15
  • BSHcmod: operational BSH circulation model, see Losa et al., 2012, 2014
  • FESOM: Finite Element Sea-ice Ocean Model, see e.g. Nerger et al., 2006, Janjic et al., 2011, Androsov et al., 2019; model binding available as part of AWI-CM, included since PDAF V1.15
  • HBM: Hiromb-Boos Model, see Nerger et al., 2016
  • Lorenz-96: the low-dimensional chaotic test model for data assimilation (also known as Lorenz-40 or Lorenz-95); included in the PDAF release
  • Lorenz-63: the 3-variable chaotic system by Lorenz (1963); included in the PDAF release
  • MITgcm: ocean circulation model, see e.g. Yang et al., 2014-2016; model binding included since PDAF V1.13
  • MPI-ESM: the MPI Earth System Model, see Brune et al., 2015
  • NEMO: ocean circulation model, see e.g. Tödter et al., 2016
  • NOBM: NASA Ocean Biogeochemical Model, see Nerger and Gregg, 2007, 2008
  • OMCT: Ocean Model for Circulation and Tides, see Saynisch and Thomas, 2012, Irrgang et al., 2017
  • Parody: dynamo model, see Fournier et al., 2013
  • SCHISM: the SCHISM modeling system by Zhang et al., see schism.wiki
  • TerrSysMP: coupled atmosphere-land surface-subsurface model; model binding available, see Kurtz et al., 2016, Baatz et al., 2017