1. PDAF provides two variants to build a data assimilation system:
1. PDAF can be attached to the model source code by minimal changes to the code, which we call ``online mode``. These changes concern only the general part of the code, not the numerics of the model. In addition, a small set of routines is required that are specific to the model or to the observations to be assimilated. These routines can be implemented like routines of the model (see the sketch after this list).
1. PDAF also offers an ``offline mode``. This is for the case that you do not want to (or cannot) modify your model source code at all. In the offline mode, PDAF is compiled separately from the model together with the supporting routines to handle the observations. Then, the model and the assimilation step are executed separately. This approach is simpler to implement than the ``online mode``, but it is computationally less efficient.
1. PDAF does not require that your model can be called as a subroutine. Rather, PDAF is added to the model, and the resulting data assimilation system can be executed pretty much like the model program would be without data assimilation.
1. The PDAF release also provides bindings to couple PDAF with selected real models. Such model bindings are, e.g., available for the MITgcm and NEMO ocean circulation models, for the AWI Climate Model (AWI-CM, a coupled model consisting of ECHAM (atmosphere) and FESOM (ocean)), and for the Weather Research and Forecasting (WRF) model. See the [wiki:ModelsConnectedToPDAF list of models that were already coupled to PDAF] for an overview.
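The following minimal sketch illustrates the ``online mode``. The routines ending in `_pdaf` stand for the small set of user-supplied interface routines mentioned above; their names and the empty stub bodies are only illustrative (in a real setup these routines call the PDAF library), and the model's own numerical routines remain unchanged.

{{{
#!fortran
! Minimal sketch of a model main program extended for the online mode.
! The *_pdaf routines are placeholders for the user-supplied interface
! routines; the empty bodies below only make the sketch compilable.
PROGRAM model_with_pdaf
   IMPLICIT NONE
   INTEGER :: step

   CALL init_parallel_pdaf()   ! added: set up communicators for the ensemble
   CALL initialize_model()     ! unchanged model initialization
   CALL init_pdaf()            ! added: initialize PDAF and the ensemble

   DO step = 1, 10
      CALL model_time_step()   ! unchanged model numerics
      CALL assimilate_pdaf()   ! added: analysis step whenever observations are due
   END DO

   CALL finalize_pdaf()        ! added: clean up PDAF

CONTAINS

   ! Empty stand-ins for the model routines and the PDAF interface routines
   SUBROUTINE init_parallel_pdaf()
   END SUBROUTINE init_parallel_pdaf

   SUBROUTINE initialize_model()
   END SUBROUTINE initialize_model

   SUBROUTINE init_pdaf()
   END SUBROUTINE init_pdaf

   SUBROUTINE model_time_step()
   END SUBROUTINE model_time_step

   SUBROUTINE assimilate_pdaf()
   END SUBROUTINE assimilate_pdaf

   SUBROUTINE finalize_pdaf()
   END SUBROUTINE finalize_pdaf

END PROGRAM model_with_pdaf
}}}

In contrast, the ``offline mode`` requires no such changes to the model code: the assimilation program containing PDAF is built and run separately, typically exchanging the ensemble states with the model through files.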
* '''Compiler'''[[BR]]To compile PDAF, a Fortran compiler is required that supports Fortran 2003. PDAF has been tested with a variety of compilers, e.g. gfortran, ifort, and nfort.
* '''BLAS''' and '''LAPACK'''[[BR]]The BLAS and LAPACK libraries are used by PDAF. For Linux there are usually packages that provide these libraries. With commercial compilers the functions are usually provided by optimized libraries (like MKL, ESSL).
* '''MPI'''[[BR]]An MPI library is required (e.g. OpenMPI). A small program to check the complete tool chain is sketched below.
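To check that a suitable compiler, MPI library, and BLAS/LAPACK installation are available before building PDAF, one can compile and run a small test program like the sketch below. The file name and the compile command in the comment are only examples and need to be adapted to your system.

{{{
#!fortran
! Small check of the tool chain required by PDAF: initialize MPI and solve a
! symmetric 2x2 eigenvalue problem with LAPACK's DSYEV.
! Example compile command (adapt wrapper and library names to your system):
!    mpif90 check_env.F90 -llapack -lblas
PROGRAM check_env
   USE mpi
   IMPLICIT NONE
   INTEGER :: ierr, rank, nprocs, info
   REAL(8) :: a(2,2), w(2), work(10)

   CALL MPI_Init(ierr)
   CALL MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
   CALL MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

   ! Symmetric test matrix with eigenvalues 1 and 3
   a = RESHAPE([2.0d0, 1.0d0, 1.0d0, 2.0d0], [2, 2])
   CALL DSYEV('N', 'U', 2, a, 2, w, work, 10, info)

   IF (rank == 0) WRITE (*,*) 'MPI tasks:', nprocs, ' LAPACK info:', info, &
        ' eigenvalues:', w

   CALL MPI_Finalize(ierr)
END PROGRAM check_env
}}}

If the program reports the expected number of MPI tasks, ``info = 0``, and the eigenvalues 1 and 3, the tool chain is ready for building PDAF.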
* Linux Desktop computer, Ubuntu, gfortran, OpenMPI
* Notebook Apple !MacBook, MacOS, gfortran, OpenMPI
* Atos cluster 'Lise' at HLRN (Intel Cascade Lake processors), ifort, IMPI and OpenMPI
* Windows with Cygwin, gfortran, OpenMPI
* NEC SX-Aurora vector computer, nfort, NEC MPI
The regular tests use a rather small configuration with a simulated model. This model is provided in the PDAF tutorial code of the release.
In addition, the scalability of PDAF was examined with a real implementation using the finite-element sea-ice ocean model (FESOM). In these tests, up to 4800 processor cores of a supercomputer were used (see [PublicationsandPresentations Nerger and Hiller (2013)]). In [PublicationsandPresentations Nerger et al., GMD (2020)], the scalability was assessed up to 12144 processor cores for the coupled atmosphere-ocean model AWI-CM (Sidorenko et al., 2015). Also, [PublicationsandPresentations Kurtz et al., GMD (2016)] assessed the parallel performance up to 32768 processor cores for the TerrSysMP terrestrial model system.