= Adapting a model's parallelization for PDAF2 =

{{{
#!html
<div class="wiki-toc">
<h4>Implementation Guide for PDAF2</h4>
<ol><li><a href="ImplementationGuide_PDAF23">Main page</a></li>
<li>Adaptation of the parallelization</li>
<li><a href="InitPdaf_PDAF23">Initialization of PDAF</a></li>
<li><a href="ModifyModelforEnsembleIntegration_PDAF23">Modifications for ensemble integration</a></li>
<li><a href="ImplementationofAnalysisStep_PDAF23">Implementation of the analysis step</a></li>
<li><a href="AddingMemoryandTimingInformation_PDAF23">Memory and timing information</a></li>
</ol>
</div>
}}}

[[PageOutline(2-3,Contents of this page)]]

== Overview ==

Like many numerical models, PDAF uses the MPI standard for parallelization. In the description below, we assume that the model is parallelized using MPI.

PDAF supports a 2-level parallelization: First, the numerical model itself can be parallelized and executed on several processes. Second, several model tasks can be computed in parallel, i.e. a parallel ensemble integration can be performed. This 2-level parallelization has to be initialized before it can be used. The directory `templates/` contains the file `init_parallel_pdaf.F90`, which can be used as a template for this initialization. The required variables are defined in `mod_parallel.F90`, which is stored in the same directory and can also be used as a template. If the numerical model itself is parallelized, its parallelization has to be adapted to the 2-level parallelization of the data assimilation system that is generated by adding PDAF to the model. The necessary steps are described below.


== Three communicators ==

MPI uses so-called 'communicators' to define groups of parallel processes. These groups can then conveniently exchange information. In order to provide the 2-level parallelism for PDAF, three communicators need to be initialized that define the processes involved in the different tasks of the data assimilation system.
The required communicators are initialized in the routine `init_parallel_pdaf`. They are called
 * `COMM_model` - defines the groups of processes that are involved in the model integrations (one group for each model task)
 * `COMM_filter` - defines the group of processes that perform the filter analysis step
 * `COMM_couple` - defines the groups of processes that are involved when data are transferred between the model and the filter

The parallel region of an MPI-parallel program is initialized by calling `MPI_init`. This call also initializes the communicator `MPI_COMM_WORLD`, which is pre-defined by MPI to contain all processes of the MPI-parallel program. Often it is sufficient to conduct all parallel communication using only `MPI_COMM_WORLD`, and numerical models frequently use only this communicator to control all communication. However, as `MPI_COMM_WORLD` contains all processes of the program, this approach does not allow for parallel model tasks. In order to allow parallel model tasks, `MPI_COMM_WORLD` has to be replaced by an alternative communicator that is split for the model tasks. We will denote this communicator `COMM_model`. If a model code already uses a communicator distinct from `MPI_COMM_WORLD`, it should be possible to use that communicator.

[[Image(//pics/communicators_PDAFonline.png)]]
[[BR]]'''Figure 1:''' Example of a typical configuration of the communicators using a parallelized model. In this example there are 12 processes in total, which are distributed over 3 model tasks (COMM_model) so that 3 model states can be integrated at the same time. Each COMM_couple connects one process from each of the 3 model tasks. The filter is executed using COMM_filter, which uses the same processes as the first model task, i.e. COMM_model 1 (Figure credits: A. Corbin)

== Using COMM_model ==

Frequently the parallelization is initialized in the model by the lines:
{{{
      CALL MPI_Init(ierr)
      CALL MPI_Comm_Rank(MPI_COMM_WORLD, rank, ierr)
      CALL MPI_Comm_Size(MPI_COMM_WORLD, size, ierr)
}}}
(The call to `MPI_init` is mandatory, while the second and third lines are optional.) If the model itself is not parallelized, the MPI-initialization will not be present. Please see the section '[#Non-parallelmodels Non-parallel models]' below for this case.

Subsequently, one can define `COMM_model` by adding
{{{
      COMM_model = MPI_COMM_WORLD
}}}
In addition, the variable `COMM_model` has to be declared such that all routines using the communicator can access it. The parallelization variables of the model are frequently held in a Fortran module. In this case, it is easiest to add `COMM_model` as an integer variable there. (The example declares `COMM_model` and other parallelization-related variables in `mod_parallel.F90`.)
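
For orientation, the following is a minimal sketch of such a module. The module and variable names are only examples oriented at the template `mod_parallel.F90`; they should be adapted to the naming conventions of the model code.
{{{
MODULE mod_parallel_model

! Minimal sketch of a module holding the parallelization variables
! (example names; adapt them to the model's conventions)

  IMPLICIT NONE
  SAVE

  INTEGER :: COMM_model   ! Communicator for the model integration
  INTEGER :: rank         ! Rank of the process
  INTEGER :: size         ! Number of processes
  INTEGER :: ierr         ! Error flag for MPI calls

END MODULE mod_parallel_model
}}}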

Having defined the communicator `COMM_model`, the communicator `MPI_COMM_WORLD` has to be replaced by `COMM_model` in all routines that perform MPI communication, except in calls to `MPI_init`, `MPI_finalize`, and `MPI_abort`.
The changes described so far must not influence the execution of the model itself. Thus, after these changes, one should ensure that the model still compiles and runs correctly.
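
As an illustration of this replacement, a typical communication call would change as sketched below. The call shown and the variable names are hypothetical examples, not code from the templates.
{{{
      ! Before: sum over all processes of the program
      CALL MPI_Allreduce(local_sum, global_sum, 1, MPI_DOUBLE_PRECISION, &
           MPI_SUM, MPI_COMM_WORLD, ierr)

      ! After: sum only over the processes of the own model task
      CALL MPI_Allreduce(local_sum, global_sum, 1, MPI_DOUBLE_PRECISION, &
           MPI_SUM, COMM_model, ierr)
}}}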

== Initializing the communicators ==

Having replaced `MPI_COMM_WORLD` by `COMM_model` makes it possible to split the model integration into parallel model tasks. For this, the communicator `COMM_model` has to be redefined. This is performed by the routine `init_parallel_pdaf`, which is supplied with the PDAF package. The routine should be added to the model code, usually directly after the initialization of the parallelization described above.
The routine `init_parallel_pdaf` also defines the communicators `COMM_filter` and `COMM_couple` that were described above. The provided routine `init_parallel_pdaf` is a template implementation. Thus, it has to be adjusted for the model under consideration. In particular, one needs to ensure that the routine can access the variables `COMM_model` as well as `rank` and `size` (see the initialization example above; these variables might have different names in a model). If the model defines these variables in a module, a USE statement can be added to `init_parallel_pdaf`, as is already done for `mod_parallel`.

The routine `init_parallel_pdaf` splits the communicator `MPI_COMM_WORLD` and (re-)defines `COMM_model`. If multiple parallel model tasks are used, i.e. if `n_modeltasks` is set to a value above 1, `COMM_model` will actually be a set of communicators, one for each model task. In addition, the variables `npes_world` and `mype_world` are defined. If the model uses different names for these quantities, like `rank` and `size`, the model-specific variables should be re-initialized at the end of `init_parallel_pdaf`.
The routine defines several more variables that are declared and held in the module `mod_parallel`. It can be useful to use this module with the model code, as some of these variables are required when the initialization routine of PDAF (`PDAF_init`) is called.
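
The actual splitting of the communicators is done inside `init_parallel_pdaf`. The following simplified sketch only illustrates the idea of splitting `MPI_COMM_WORLD` into one model communicator per task with `MPI_Comm_split`; it is not the code of the template routine, and it assumes that the number of processes is a multiple of `n_modeltasks`.
{{{
      ! Simplified illustration of the communicator splitting
      CALL MPI_Comm_rank(MPI_COMM_WORLD, mype_world, ierr)
      CALL MPI_Comm_size(MPI_COMM_WORLD, npes_world, ierr)

      ! Assign each process to a model task (1 ... n_modeltasks);
      ! this assumes npes_world is a multiple of n_modeltasks
      task_id = mype_world / (npes_world / n_modeltasks) + 1

      ! Processes with the same task_id form one COMM_model
      CALL MPI_Comm_split(MPI_COMM_WORLD, task_id, mype_world, COMM_model, ierr)
}}}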

== Arguments of `init_parallel_pdaf` ==

The routine `init_parallel_pdaf` has two arguments:
{{{
SUBROUTINE init_parallel_pdaf(dim_ens, screen)
}}}
 * `dim_ens`: An integer defining the ensemble size. This allows PDAF to check the consistency of the ensemble size with the number of processes of the program. If the ensemble size is specified only after the call to `init_parallel_pdaf` (as in the example), it is recommended to set this argument to 0. In this case, no consistency check is performed.
 * `screen`: An integer defining whether information output is written to the screen (i.e. standard output). The following choices are available:
  * 0: quiet mode - no information is displayed.
  * 1: Display standard information about the configuration of the processes (recommended)
  * 2: Display detailed information for debugging
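
For example, a call consistent with this interface could look like the following; here the value 0 skips the consistency check of the ensemble size, and 1 selects the standard screen output.
{{{
      CALL init_parallel_pdaf(0, 1)
}}}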


== Compiling the extended program ==

This completes the adaptation of the parallelization. The compilation of the model has to be adjusted to include the added files holding the routine `init_parallel_pdaf` and the module `mod_parallel`. One can test the extension by running the compiled model. It should run as it did without these changes, because `mod_parallel` specifies by default that a single model task is executed (`n_modeltasks=1`). If `screen` is set to 1 in the call to `init_parallel_pdaf`, the standard output should include lines like
{{{
 Initialize communicators for assimilation with PDAF

                  PE configuration:
   world   filter     model        couple     filterPE
   rank     rank   task   rank   task   rank    T/F
  ----------------------------------------------------------
     0       0      1      0      1      0       T
     1       1      1      1      2      0       T
     2       2      1      2      3      0       T
     3       3      1      3      4      0       T
}}}
These lines show the configuration of the communicators. This example was executed using 4 processes and `n_modeltasks=1`. (In this case, the variables `npes_filter` and `npes_model` will have a value of 4.)


To test parallel model tasks, one has to set the variable `n_modeltasks` to a value larger than one. The model will then execute parallel model tasks. For `n_modeltasks=4` and running on a total of 4 processes, the output from `init_parallel_pdaf` will look like the following:
{{{
 Initialize communicators for assimilation with PDAF

                  PE configuration:
   world   filter     model        couple     filterPE
   rank     rank   task   rank   task   rank    T/F
  ----------------------------------------------------------
     0       0      1      0      1      0       T
     1              2      0      1      1       F
     2              3      0      1      2       F
     3              4      0      1      3       F

}}}
In this example only a single process will compute the filter analysis (`filterPE=.true.`). There are now 4 model tasks, each using a single process. Thus, both `npes_filter` and `npes_model` will be one.

Using multiple model tasks can result in the following effects:
 * The standard screen output of the model can be shown multiple times. This is due to the fact that often the process with `rank=0` performs the screen output. By splitting the communicator `COMM_model`, there will be as many processes with rank 0 as there are model tasks.
 * Each model task might write file output. This can lead to the case that several processes try to generate the same file or try to write into the same file. In the extreme case, this can result in a program crash. For this reason, it might be useful to restrict the file output to a single model task, as sketched below this list. This can be implemented using the variable `task_id`, which is initialized by `init_parallel_pdaf` and holds the index of the model task, ranging from 1 to `n_modeltasks`. (For the ensemble assimilation, it can be useful to switch off the regular file output of the model completely. As each model task holds only a single member of the ensemble, this output might not be useful. In this case, the file output for the state estimate and perhaps all ensemble members should be done in the pre/poststep routine of the assimilation system.)
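
A minimal sketch of such a restriction is shown below; `write_output` is a hypothetical placeholder for the model's own output routine.
{{{
      ! Let only the first model task write the regular model output
      IF (task_id == 1) THEN
         CALL write_output()
      END IF
}}}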


== Non-parallel models ==

If the numerical model is not parallelized (i.e. serial), there are two possibilities: The data assimilation system can be used without parallelization (serial), or parallel model tasks can be used in which each model task uses a single process. Both variants are described below.

=== Serial assimilation system ===

PDAF requires that an MPI library is present. Usually this is easy to satisfy since, for example, OpenMPI is available for many operating systems and can easily be installed as a package.

Even if the model itself does not use parallelization, the call to `init_parallel_pdaf` described above is still required. The routine will simply initialize the parallelization variables for a single-process case.

=== Adding parallelization to a serial model ===

In order to use parallel model tasks with a model that is not parallelized, the procedure is generally as described for the fully parallel case. However, one has to add the general initialization of MPI to the model code (or to `init_parallel_pdaf`). That is, the lines
{{{
      CALL MPI_Init(ierr)
      CALL MPI_Comm_Rank(MPI_COMM_WORLD, mype_world, ierr)
      CALL MPI_Comm_Size(MPI_COMM_WORLD, npes_world, ierr)
      COMM_model = MPI_COMM_WORLD
}}}
should be added together with the `USE` statement for `mod_parallel`. Subsequently, the call to `init_parallel_pdaf` has to be inserted at the beginning of the model code. At the end of the program one should insert
{{{
    CALL MPI_Barrier(MPI_COMM_WORLD, ierr)
    CALL MPI_Finalize(ierr)
}}}
The module `mod_parallel.F90` from the template directory provides subroutines for the initialization and finalization of MPI. Thus, if this module is used, there is no need to explicitly add the calls to the MPI functions; one can simply add
{{{
    CALL init_parallel()
}}}
at the beginning of the program. This has to be followed by
{{{
    CALL init_parallel_pdaf(dim_ens, screen)
}}}
to initialize the variables for the parallelization of PDAF. At the end of the program one should then insert
{{{
    CALL finalize_parallel()
}}}
in the source code.
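
Putting these pieces together, the overall structure of a formerly serial model code could look like the following sketch. The program name and the routine `integrate_model` are only placeholders for the model's own code.
{{{
PROGRAM model_pdaf

  USE mod_parallel, ONLY: init_parallel, finalize_parallel

  IMPLICIT NONE

  ! Initialize MPI and the basic parallelization variables
  CALL init_parallel()

  ! Initialize the communicators for PDAF
  ! (0: no consistency check of the ensemble size, 1: standard screen output)
  CALL init_parallel_pdaf(0, 1)

  ! ... model initialization and time stepping ...
  CALL integrate_model()   ! placeholder for the model's time stepping

  ! Finalize MPI
  CALL finalize_parallel()

END PROGRAM model_pdaf
}}}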

If the program is executed with these extensions using multiple model tasks, the issues discussed in '[#Compilingtheextendedprogram Compiling the extended program]' can occur. Thus, one has to take care of which processes will perform output to the screen or to files.