Changes between Version 88 and Version 89 of FeaturesofPdaf


Timestamp: May 30, 2025, 5:33:27 PM
Author: lnerger

Legend:
  Unmodified lines are shown with no prefix.
  Added lines are prefixed with +.
  Removed lines are prefixed with -.
  Modified lines appear as a removed line followed by its replacement.
  • FeaturesofPdaf (v88 → v89)
  [[PageOutline]]

- * PDAF is implemented in Fortran90 with some features from Fortran 2003. The standard interface also supports models that are written in other languages like C or C++. Also the combination is Python is possible.
- * The parallelization uses the MPI (Message Passing Interface) standard. The localized filters use, in addition, OpenMP-parallelization with features of OpenMP-4.
+ * PDAF is implemented using Fortran 2003. The standard interface also supports models that are written in other languages like C or C++. The combination with Python is also possible, and pyPDAF provides a Python interface for PDAF.
+ * The parallelization uses the MPI (Message Passing Interface) standard. The localized filters use, in addition, OpenMP parallelization with features of OpenMP-4.
  * The core routines are fully independent of the model code. They can be compiled separately and can be used as a library.

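To illustrate the two-level parallelization described above, here is a minimal schematic in Fortran: MPI runs ensemble integrations concurrently, and an OpenMP-parallelized loop stands in for the loop over local analysis domains of the localized filters. All names and the loop bound are illustrative; this is not PDAF code.

{{{
! Schematic of the two-level parallelization pattern (illustrative
! only, not PDAF code): MPI tasks run ensemble members concurrently,
! OpenMP threads share the loop over local analysis domains.
program parallel_sketch
  use mpi
  implicit none
  integer :: ierr, rank, n_tasks, idomain

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, n_tasks, ierr)

  ! Each MPI task would integrate one ensemble member here.

  ! At analysis time, localized filters loop over local analysis
  ! domains; this kind of loop is OpenMP-parallelized.
  !$OMP PARALLEL DO
  do idomain = 1, 1000   ! stand-in for the number of local domains
     ! ... perform the local analysis for domain idomain ...
  end do
  !$OMP END PARALLEL DO

  call MPI_Finalize(ierr)
end program parallel_sketch
}}}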
…
  === Ensemble filters and smoothers ===

- Local ensemble filters:
-  * LETKF (Hunt et al., 2007)
-  * LESTKF (Local Error Subspace Transform Kalman Filter, Nerger et al., 2012, [PublicationsandPresentations see publications])
-  * LEnKF (classical EnKF with covariance localization)
-  * LNETF (localized Nonlinear Ensemble Transform Filter by Toedter and Ahrens (2015))
-  * LSEIK (Nerger et al., 2006)
-  * LKNETF (Local Kalman-nonlinear Ensemble Transform Filter, Nerger, 2022, [PublicationsandPresentations see publications], added in PDAF V2.1)
+ **Local ensemble filters:**
+  * **LETKF** (Hunt et al., 2007)
+  * **LESTKF** (Local Error Subspace Transform Kalman Filter, Nerger et al., 2012, [PublicationsandPresentations see publications])
+  * **LEnKF** (classical EnKF with perturbed observations by Evensen (1994), Burgers et al. (1998), with covariance localization)
+  * **LNETF** (localized Nonlinear Ensemble Transform Filter by Toedter and Ahrens (2015))
+  * **LSEIK** (Nerger et al., 2006)
+  * **LKNETF** (Local Kalman-nonlinear Ensemble Transform Filter, Nerger, 2022, [PublicationsandPresentations see publications], added in PDAF V2.1)
+  * **EnSRF** (Ensemble square-root filter using serial observation processing and covariance localization; J. Whitaker and T. Hamill, Mon. Wea. Rev., 2002)
+  * **EAKF** (Ensemble Adjustment Filter using serial observation processing and covariance localization; J. Anderson, Mon. Wea. Rev., 2003)

- Global ensemble filters:
-  * ESTKF (Error Subspace Transform Kalman Filter, Nerger et al., 2012, [PublicationsandPresentations see publications])
-  * ETKF (The implementation follows Hunt et al. (2007) but without localization, which is available in the LETKF implementation)
-  * EnKF (The classical formulation with perturbed observations by Evensen (1994), Burgers et al. (1998))
-  * SEEK (The original formulation by Pham et al. (1998))
-  * SEIK (Pham et al. (1998a, 2001), the implemented variant is described in more detail by Nerger et al. (2005))
-  * NETF (Nonlinear Ensemble Transform Filter by Toedter and Ahrens (2015))
-  * PF (Particle filter with resampling)
+ **Global ensemble filters:**
+  * **ESTKF** (Error Subspace Transform Kalman Filter, Nerger et al., 2012, [PublicationsandPresentations see publications])
+  * **ETKF** (The implementation follows Hunt et al. (2007) but without localization, which is available in the LETKF implementation)
+  * **EnKF** (The classical formulation with perturbed observations by Evensen (1994), Burgers et al. (1998))
+  * **SEIK** (Pham et al. (1998a, 2001), the implemented variant is described in more detail by Nerger et al. (2005))
+  * **NETF** (Nonlinear Ensemble Transform Filter by Toedter and Ahrens (2015))
+  * **PF** (Particle filter with resampling, see, e.g., Vetra-Carvalho et al. 2018)

- Smoother algorithms are provided for the following algorithms
+ **Smoother algorithms** are provided for the following filters:
   * ESTKF & LESTKF
   * ETKF & LETKF
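Several of the filters above (LEnKF, EnSRF, EAKF) rely on covariance localization. As background, the following is a minimal sketch of one widely used localization weight, the fifth-order piecewise rational function of Gaspari and Cohn (1999), which tapers observation influence smoothly to zero at twice the length scale. It is a generic illustration of the technique; the function and argument names are illustrative, not PDAF's own routines.

{{{
! Gaspari-Cohn (1999) fifth-order localization function with compact
! support 2*c. Generic sketch of the technique, not a PDAF routine.
pure function gaspari_cohn(dist, c) result(w)
  implicit none
  real(8), intent(in) :: dist   ! distance between grid point and observation
  real(8), intent(in) :: c      ! localization length scale (support is 2*c)
  real(8) :: w, z

  z = dist / c
  if (z <= 1.0d0) then
     w = 1.0d0 - z**2*(5.0d0/3.0d0) + z**3*(5.0d0/8.0d0) &
         + z**4*0.5d0 - z**5*0.25d0
  else if (z <= 2.0d0) then
     w = 4.0d0 - 5.0d0*z + z**2*(5.0d0/3.0d0) + z**3*(5.0d0/8.0d0) &
         - z**4*0.5d0 + z**5/12.0d0 - 2.0d0/(3.0d0*z)
  else
     w = 0.0d0   ! no influence beyond 2*c
  end if
end function gaspari_cohn
}}}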
…
  === 3D variational methods ===

- Starting from Version 2.0 of PDAF, 3D variational methods are also provided. The 3D-Var methods are implemented in incremental form using a control vector transformation (following the review by R. Bannister, Q. J. Roy. Meteorol. Soc., 2017) in three different variants:
-  * 3D-Var - 3D-Var with parameterized covariance matrix
-  * 3DEnVar - 3D-Var using ensemble covariance matrix. The ensemble perturbations are updated with either the LESTKF and ESTKF filters
-  * Hyb3DVar - Hybrid 3D-Var using a combination of parameterized and ensemble covariance matrix. The ensemble perturbations are updated with either the LESTKF and ESTKF filters
+ 3D variational methods are provided in three different variants.
+ The 3D-Var methods are implemented in incremental form using a control vector transformation (following the review by R. Bannister, Q. J. Roy. Meteorol. Soc., 2017):
+  * **3D-Var** - 3D-Var with parameterized covariance matrix
+  * **3DEnVar** - 3D-Var using ensemble covariance matrix. The ensemble perturbations are updated with either the LESTKF or the ESTKF filter
+  * **Hyb3DVar** - Hybrid 3D-Var using a combination of parameterized and ensemble covariance matrices. The ensemble perturbations are updated with either the LESTKF or the ESTKF filter

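As background for the incremental formulation named above, here is a sketch of the cost function with control vector transform in the general form reviewed by Bannister (2017); the symbols are generic textbook notation, not PDAF variable names:

{{{
J(\mathbf{v}) = \tfrac{1}{2}\,\mathbf{v}^{\mathsf{T}}\mathbf{v}
  + \tfrac{1}{2}\,\bigl(\mathbf{d}-\mathbf{H}\mathbf{V}\mathbf{v}\bigr)^{\mathsf{T}}
    \mathbf{R}^{-1}\bigl(\mathbf{d}-\mathbf{H}\mathbf{V}\mathbf{v}\bigr),
\qquad
\delta\mathbf{x}=\mathbf{V}\mathbf{v},\quad
\mathbf{d}=\mathbf{y}-H(\mathbf{x}_b),\quad
\mathbf{V}\mathbf{V}^{\mathsf{T}}\approx\mathbf{B}
}}}

Minimizing J over the control vector v avoids forming the background covariance B explicitly: in the parameterized variant V represents a prescribed covariance model, in 3DEnVar it is built from ensemble perturbations, and Hyb3DVar combines both.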
  == Requirements ==
…
  In addition, the scalability of PDAF was examined in a real implementation with the finite-element sea-ice ocean model (FESOM). In these tests, up to 4800 processor cores of a supercomputer were used (see [PublicationsandPresentations Nerger and Hiller (2013)]). In [PublicationsandPresentations Nerger et al., GMD (2020)], the scalability was assessed up to 12144 processor cores for the coupled atmosphere-ocean model AWI-CM (Sidorenko et al., 2015). Also, [PublicationsandPresentations Kurtz et al., GMD, (2016)] assessed the parallel performance up to 32768 processor cores for the TerrSysMP terrestrial model system.

- To examine PDAF's behavior with large-scale cases, experiments with the simulated model have been performed. By now the biggest case had a state dimension of 8.64^.^10^11^. An observation vector of size 1.73^.^10^10^ was assimilated. For these experiments, the computations used 57600 processor cores. In this case, the dimensions were limited by the available memory of the compute nodes. Using an ensemble of 25 states, the distributed ensemble array occupied about 2.9 GBytes of memory for each core (about 165 TBytes in total).
+ To examine PDAF's behavior with large-scale cases, experiments with a simulated model have been performed. By now, the biggest case had a state dimension of 8.64^.^10^11^. An observation vector of size 1.73^.^10^10^ was assimilated. For these experiments, the computations used 57600 processor cores. In this case, the dimensions were limited by the available memory of the compute nodes. Using an ensemble of 25 states, the distributed ensemble array occupied about 2.9 GBytes of memory for each core (about 165 TBytes in total).
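A rough consistency check of these figures, assuming 8-byte double-precision reals (the byte size is an assumption, not stated on this page): 8.64^.^10^11^ state elements × 25 members × 8 bytes ≈ 1.73^.^10^14^ bytes, or about 3.0^.^10^9^ bytes per core when distributed over 57600 cores. This agrees with the quoted ~2.9 GBytes per core and ~165 TBytes in total up to rounding and decimal-versus-binary units.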