
Features and Requirements

  • PDAF is implemented in Fortran90 with some features from Fortran 2003. The standard interface also supports models that are written in other languages like C or C++. The combination with Python is also possible (a generic sketch of such cross-language coupling is shown after this list).
  • The parallelization uses the MPI (Message Passing Interface) standard. In addition, the localized filters use OpenMP parallelization with features of OpenMP 4.
  • The core routines are fully independent of the model code. They can be compiled separately and can be used as a library.
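
As a generic illustration of how a model written in C or C++ can exchange a state vector with Fortran code, a routine can be exposed through ISO_C_BINDING. This is only a sketch of the general mechanism; the routine name and interface below are illustrative and are not part of PDAF's actual interface:

    ! Generic Fortran/C interoperability sketch. The name 'handle_state'
    ! is illustrative only and is not a PDAF routine.
    subroutine handle_state(dim_state, state) bind(C, name="handle_state")
      use iso_c_binding, only: c_int, c_double
      implicit none
      integer(c_int), value         :: dim_state
      real(c_double), intent(inout) :: state(dim_state)

      ! A C or C++ model passes its state as a plain double array;
      ! a wrapper like this would forward it to the Fortran
      ! assimilation code.
    end subroutine handle_state

On the C side, such a routine appears as an ordinary function taking an integer and a pointer to double, so no Fortran-specific data structures are exposed to the model.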

Simplifying the implementation

PDAF simplifies the implementation of data assimilation systems based on existing model code in the following ways:

  1. PDAF provides fully implemented, parallelized, and optimized ensemble-based algorithms for data assimilation. Currently, these are ensemble-based Kalman filters like the LETKF, LESTKF, and EnKF methods, as well as nonlinear filters. Starting from PDAF V2.0, 3D-variational methods are also provided.
  2. PDAF provides two variants to build a data assimilation system:
    1. PDAF can be attached to the model source code by minimal changes to the code, which we call online mode. These changes only concern the general part of the code, but not the numerics of the model. In addition, a small set of routines is required that are specific to the model or the observations to be assimilated. These routines can be implemented like routines of the model (a schematic sketch of this call structure is shown after this list).
    2. PDAF also offers an offline mode. This is for the case that you do not want to (or cannot) modify your model source code at all. In the offline mode, PDAF is compiled separately from the model, together with the supporting routines to handle the observations. Then, the model and the assimilation step are executed separately. This approach is simpler to implement than the online mode, but it is computationally less efficient.
  3. PDAF is called through a well-defined standard interface. This allows one, for example, to switch between the LETKF, LESTKF, and LSEIK methods without additional coding.
  4. PDAF provides parallelization support for the data assimilation system. If your numerical model is already parallelized, PDAF enables the data assimilation system to run several model tasks in parallel within a single executable (see the communicator sketch after this list). However, PDAF can also be used without parallelization, for example to test small systems.
  5. PDAF does not require that your model can be called as a subroutine. Rather, PDAF is added to the model, and the resulting data assimilation system can be executed much like the original model program without data assimilation.
  6. The PDAF release also provides bindings to couple PDAF with selected real models. Such model bindings are available, e.g., for the MITgcm and NEMO ocean circulation models, for the AWI Climate Model (AWI-CM, a coupled model consisting of ECHAM (atmosphere) and FESOM (ocean)), and for the Weather Research and Forecasting (WRF) model. See the list of models that were already coupled to PDAF for an overview.
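
To illustrate the online mode described in point 2 above, the following schematic Fortran program shows where the added calls typically sit relative to an existing model time loop. The routine names are placeholders for user-written wrapper routines and do not reproduce the literal PDAF calling sequence:

    program model_with_da
      implicit none
      integer :: step
      integer, parameter :: nsteps = 10   ! illustrative number of time steps

      call initialize_model()   ! existing model initialization (placeholder)
      call init_pdaf()          ! added: set up the assimilation (ensemble,
                                ! filter choice, communicators)
      do step = 1, nsteps
        call integrate_model(step)   ! existing model time stepping (placeholder)
        call assimilate_pdaf(step)   ! added: hands the state to the assimilation
                                     ! layer; an analysis is computed when due
      end do

    contains

      subroutine initialize_model()
        print *, 'model initialization (placeholder)'
      end subroutine initialize_model

      subroutine integrate_model(step)
        integer, intent(in) :: step
        print *, 'model time step', step
      end subroutine integrate_model

      subroutine init_pdaf()
        print *, 'set up assimilation (placeholder)'
      end subroutine init_pdaf

      subroutine assimilate_pdaf(step)
        integer, intent(in) :: step
        print *, 'check for analysis step at step', step
      end subroutine assimilate_pdaf

    end program model_with_da

The numerics of the model remain untouched; only the general program structure gains the two added calls.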
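
Regarding point 4, running several model tasks within a single executable relies on splitting MPI_COMM_WORLD into one communicator per model task. The following generic sketch shows the idea with plain MPI calls; the variable names and the splitting rule are illustrative and do not reproduce PDAF's actual parallelization routine:

    program split_model_tasks
      use mpi
      implicit none
      integer :: ierr, rank, nprocs, task_id, comm_model, rank_model
      integer, parameter :: n_modeltasks = 4   ! illustrative number of model tasks

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

      ! Assign each process to one of n_modeltasks model tasks
      task_id = (rank * n_modeltasks) / nprocs

      ! Processes with the same task_id share one model communicator,
      ! in which one ensemble member is integrated
      call MPI_Comm_split(MPI_COMM_WORLD, task_id, rank, comm_model, ierr)
      call MPI_Comm_rank(comm_model, rank_model, ierr)

      print *, 'world rank', rank, '-> model task', task_id, ', task rank', rank_model

      call MPI_Finalize(ierr)
    end program split_model_tasks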

Data Assimilation Methods

PDAF provides the following methods for data assimilation. All assimilation methods are fully implemented, optimized, and parallelized. In addition, all ensemble-based methods offer an Ensemble-OI mode in which only a single ensemble state needs to be integrated.

Ensemble filters and smoothers

Local ensemble filters:

  • LETKF (Hunt et al., 2007)
  • LESTKF (Local Error Subspace Transform Kalman Filter, Nerger et al., 2012, see publications)
  • LEnKF (classical EnKF with covariance localization)
  • LNETF (localized Nonlinear Ensemble Transform Filter by Toedter and Ahrens (2015))
  • LSEIK (Nerger et al., 2006)
  • LKNETF (Local Kalman-nonlinear Ensemble Transform Filter, Nerger, 2022, see publications, added in PDAF V2.1)

Global ensemble filters:

  • ESTKF (Error Subspace Transform Kalman Filter, Nerger et al., 2012, see publications)
  • ETKF (The implementation follows Hunt et al. (2007) but without localization, which is available in the LETKF implementation)
  • EnKF (The classical formulation with perturbed observations by Evensen (1994), Burgers et al. (1998))
  • SEEK (The original formulation by Pham et al. (1998))
  • SEIK (Pham et al. (1998a, 2001), the implemented variant is described in more detail by Nerger et al. (2005))
  • NETF (Nonlinear Ensemble Transform Filter by Toedter and Ahrens (2015))
  • PF (Particle filter with resampling)

Smoother algorithms are provided for the following methods:

  • ESTKF & LESTKF
  • ETKF & LETKF
  • EnKF
  • NETF & LNETF
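
The ensemble transform filters listed above share a common structure: the analysis ensemble is obtained as a weighted combination of the forecast members, with the weights computed in the low-dimensional ensemble space. As an illustration (in standard notation, without inflation and localization; this is a sketch, not PDAF's internal formulation), the ETKF analysis step of Hunt et al. (2007) can be written in LaTeX notation as

    \tilde{\mathbf{P}} = \left[ (N_e - 1)\,\mathbf{I}
        + (\mathbf{H}\mathbf{X}'_f)^{\mathsf T}\,\mathbf{R}^{-1}\,(\mathbf{H}\mathbf{X}'_f) \right]^{-1}
    \bar{\mathbf{w}} = \tilde{\mathbf{P}}\,(\mathbf{H}\mathbf{X}'_f)^{\mathsf T}\,\mathbf{R}^{-1}
        \left( \mathbf{y} - \mathbf{H}\bar{\mathbf{x}}_f \right)
    \mathbf{W} = \left[ (N_e - 1)\,\tilde{\mathbf{P}} \right]^{1/2}
    \mathbf{x}^a_i = \bar{\mathbf{x}}_f + \mathbf{X}'_f \left( \bar{\mathbf{w}} + \mathbf{W}_{:,i} \right),
        \qquad i = 1, \dots, N_e

where \bar{\mathbf{x}}_f is the forecast ensemble mean, \mathbf{X}'_f the matrix of forecast ensemble perturbations, \mathbf{H} the (linearized) observation operator, \mathbf{R} the observation error covariance matrix, \mathbf{y} the observation vector, and N_e the ensemble size. The localized variants (LETKF, LESTKF, LSEIK, ...) apply such an update separately for each local analysis domain, with the observation influence weighted by distance.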

3D variational methods

Starting from Version 2.0 of PDAF, 3D variational methods are also provided. The 3D-Var methods are implemented in incremental form using a control vector transformation (following the review by R. Bannister, Q. J. Roy. Meteorol. Soc., 2017) in three different variants:

  • 3D-Var - 3D-Var with parameterized covariance matrix
  • 3DEnVar - 3D-Var using the ensemble covariance matrix. The ensemble perturbations are updated with either the LESTKF or the ESTKF filter.
  • Hyb3DVar - Hybrid 3D-Var using a combination of a parameterized and the ensemble covariance matrix. The ensemble perturbations are updated with either the LESTKF or the ESTKF filter.
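
In the incremental formulation with control vector transformation, the analysis increment is expressed through a transform matrix \mathbf{V} that implicitly represents the background covariance, \mathbf{B} \approx \mathbf{V}\mathbf{V}^{\mathsf T}. As a sketch in standard notation (following, e.g., Bannister, 2017; not PDAF's internal variable names), the minimized cost function reads

    \delta\mathbf{x} = \mathbf{V}\,\mathbf{v}, \qquad
    J(\mathbf{v}) = \tfrac{1}{2}\,\mathbf{v}^{\mathsf T}\mathbf{v}
        + \tfrac{1}{2}\,\left( \mathbf{d} - \mathbf{H}\mathbf{V}\mathbf{v} \right)^{\mathsf T}
          \mathbf{R}^{-1} \left( \mathbf{d} - \mathbf{H}\mathbf{V}\mathbf{v} \right),
    \qquad \mathbf{d} = \mathbf{y} - H(\mathbf{x}_b)

and the analysis is \mathbf{x}_a = \mathbf{x}_b + \mathbf{V}\mathbf{v}^{\ast}, with \mathbf{v}^{\ast} the minimizer of J. In the parameterized variant \mathbf{V} is prescribed, in 3DEnVar it is built from the ensemble perturbations, and in the hybrid variant it combines both parts.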

Requirements

  • Compiler
    To compile PDAF, a Fortran compiler is required which supports Fortran 2003. PDAF has been tested with a variety of compilers like gfortran, ifort, nfort.
  • BLAS and LAPACK
    The BLAS and LAPACK libraries are used by PDAF. For Linux there are usually packages that provide these libraries. With commercial compilers the functions are usually provided by optimized libraries (like MKL, ESSL).
  • MPI
    An MPI library is required (e.g. OpenMPI).
  • make
    PDAF provides Makefile definitions for different compilers and operating systems.
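
To check that a suitable compiler, MPI library, and BLAS/LAPACK installation are available before building PDAF, one can compile and run a small test program like the following generic sketch (not part of PDAF):

    program check_toolchain
      use mpi
      implicit none
      integer :: ierr, rank
      real(8) :: a(2,2), b(2,2), c(2,2)

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

      a = reshape([1.0d0, 0.0d0, 0.0d0, 1.0d0], [2, 2])   ! identity matrix
      b = reshape([1.0d0, 2.0d0, 3.0d0, 4.0d0], [2, 2])
      c = 0.0d0

      ! BLAS matrix-matrix product: C = A*B (should reproduce B)
      call dgemm('N', 'N', 2, 2, 2, 1.0d0, a, 2, b, 2, 0.0d0, c, 2)

      if (rank == 0) print *, 'dgemm check, C = ', c

      call MPI_Finalize(ierr)
    end program check_toolchain

A typical compile line could be, for example, mpif90 check_toolchain.f90 -llapack -lblas (or linking MKL/ESSL instead); the exact command and flags depend on the system.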

Test machines

PDAF has been tested on various machines with different compilers and MPI libraries. Current test machines include:

  • Linux Desktop computer, Ubuntu, gfortran, OpenMPI
  • Notebook Apple MacBook, MacOS, gfortran, OpenMPI
  • Atos cluster 'Lise' at HLRN (Intel Cascade Lake processors), ifort, IMPI and OpenMPI
  • Windows with Cygwin, gfortran, OpenMPI
  • NEC SX-Aurora vector computer, nfort, NEC MPI

Past test machines also included

  • NEC SX-ACE, sxf90 compiler (rev 530), sxmpi
  • Cray CS400, Cray compiler, IMPI
  • Cray CS400, ifort, IMPI
  • Cray XC30 and XC40, Cray compiler CCE, MPICH
  • SGI Altix UltraViolet, SLES 11 operating system, ifort compiler, SGI MPT
  • IBM p575 with Power6 processors, AIX6.1, XLF compiler 12.1, ESSL library, POE parallel environment
  • IBM BladeCenter with Power6 processors, AIX5.3, XLF compilers 10.1 to 13.1, ESSL library, POE parallel environment

Test cases

The regular tests use a rather small configuration with a simulated model. This model is provided in the PDAF tutorial code of the release. In addition, the scalability of PDAF was examined with a real implementation with the finite-element sea ice-ocean model (FESOM). In these tests, up to 4800 processor cores of a supercomputer were used (see Nerger and Hiller (2013)). In Nerger et al., GMD (2020), the scalability was assessed up to 12144 processor cores for the coupled atmosphere-ocean model AWI-CM (Sidorenko et al., 2015). In addition, Kurtz et al., GMD (2016), assessed the parallel performance up to 32768 processor cores for the TerrSysMP terrestrial model system.

To examine PDAF's behavior with large-scale cases, experiments with the simulated model have been performed. The largest case so far had a state dimension of 8.64·10^11; an observation vector of size 1.73·10^10 was assimilated. For these experiments, the computations used 57600 processor cores. In this case, the dimensions were limited by the available memory of the compute nodes. Using an ensemble of 25 states, the distributed ensemble array occupied about 2.9 GBytes of memory on each core (about 165 TBytes in total).
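
Assuming 8-byte (double precision) state entries, which is an assumption not stated above, the quoted memory figures follow from simple arithmetic:

    25 \times 8.64\cdot 10^{11} \times 8\,\mathrm{bytes}
        \approx 1.7\cdot 10^{14}\,\mathrm{bytes} \approx 165\,\mathrm{TBytes},
    \qquad
    \frac{1.7\cdot 10^{14}\,\mathrm{bytes}}{57600\ \mathrm{cores}}
        \approx 3\cdot 10^{9}\,\mathrm{bytes} \approx 2.9\,\mathrm{GBytes\ per\ core}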
