Changes between Version 3 and Version 4 of OfflineAdaptParallelization_PDAF3
- Timestamp: Apr 16, 2026, 4:56:19 PM
OfflineAdaptParallelization_PDAF3
[[PageOutline(2-3,Contents of this page)]]

|| This page describes the initialization of the parallelization for PDAF in PDAF V3.1. Implementations with PDAF V3.0 and before used a different scheme. See the [wiki:OfflineAdaptParallelization_PDAF23 page on init_parallel_pdaf in PDAF2.3] for information on the previous initialization scheme. ||

== Overview ==

…

== Background: Two MPI communicators ==

Like many numerical models, PDAF uses the MPI standard for the parallelization. PDAF requires for the compilation that an MPI library is available. In any case, it is necessary to execute the routine `init_parallel_pdaf` as described below.

MPI uses so-called 'communicators' to define sets of parallel processes. In the offline mode, the communicators of the PDAF parallelization are only used internally to PDAF. However, the communicators need to be initialized by PDAF and are partly also used in the user code. For this reason, we describe them here.

The routine `PDAF3_init_parallel`, which is called in `init_parallel_pdaf`, is used for both the online and offline coupled modes. Because of this, it returns two communicators. They define the groups of processes that are involved in different tasks of the data assimilation system.
These are
* `COMM_filter` - defines the processes that perform the filter analysis step
* `COMM_model` - defines the processes that are involved in the model integrations
For the offline mode, only `COMM_filter` is relevant, and `COMM_model` is here identical to `COMM_filter`. The tutorial source codes partly use both: `COMM_filter` and related variables are used in the call-back routines, while `COMM_model` is used in the main part of the code, in particular in `initialize.F90`, which defines the model grid dimensions.

The parallel region of an MPI-parallel program is initialized by calling `MPI_init` (which is called inside `PDAF3_init_parallel`). By calling `MPI_init`, the communicator `MPI_COMM_WORLD` is initialized. This communicator is pre-defined by MPI to contain all processes of the MPI-parallel program. In the offline mode, it would be sufficient to conduct all parallel communication using only `MPI_COMM_WORLD`. However, as PDAF internally uses the communicators listed above, they have to be initialized.
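As an illustration of this relationship (a minimal sketch, not the PDAF implementation itself, which does this inside `PDAF3_init_parallel`), communicators that span all processes can be derived from `MPI_COMM_WORLD` with standard MPI calls; the variable names `COMM_filter` and `COMM_model` follow the text above:
{{{
! Illustrative sketch only: in the offline mode both communicators
! contain all processes, so duplicating MPI_COMM_WORLD is sufficient.
PROGRAM comm_sketch
  USE mpi
  IMPLICIT NONE
  INTEGER :: COMM_filter, COMM_model   ! communicators as in the text
  INTEGER :: mype, npes, ierr

  CALL MPI_Init(ierr)

  CALL MPI_Comm_dup(MPI_COMM_WORLD, COMM_filter, ierr)
  COMM_model = COMM_filter             ! identical in the offline mode

  CALL MPI_Comm_rank(COMM_filter, mype, ierr)
  CALL MPI_Comm_size(COMM_filter, npes, ierr)
  IF (mype == 0) WRITE (*,*) 'Processes in COMM_filter:', npes

  CALL MPI_Finalize(ierr)
END PROGRAM comm_sketch
}}}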
However, in the offline mode they include the same processes as `MPI_COMM_WORLD`.

[[Image(//pics/communicators_PDAFoffline_V3.1.png)]]
[[BR]]'''Figure 1:''' Example of a typical configuration of the communicators for the offline coupled assimilation with parallelization. We have 4 processes in this example. The communicators COMM_model and COMM_filter are initialized. COMM_model and COMM_filter use all processes (as MPI_COMM_WORLD). Thus, the configuration of the communicators is simpler than in the case of the [wiki:OnlineAdaptParallelization_PDAF3 parallelization for the online coupling].


== Initializing the parallelization ==

The routine `init_parallel_pdaf`, which is supplied in `templates/offline` and `tutorial/offline_2D_parallel`, contains all necessary functionality for the initialization of the communicators.
The actual initialization of MPI and of the communicators is performed in the call to the routine `PDAF3_init_parallel` (for details see the [wiki:PDAF3_init_parallel documentation of PDAF3_init_parallel]).

`init_parallel_pdaf` is called at the beginning of the assimilation program. The provided file `templates/offline/init_parallel_pdaf.F90` is a template implementation. It should not be necessary to modify this file.

`init_parallel_pdaf` defines several more variables that are declared and held in the module `mod_parallel_pdaf`.

|| **Note:** In implementations done with PDAF V3.0 and before, the actual communicators were generated in `init_parallel_pdaf`. In PDAF V3.0, the parallelization variables are then provided to PDAF by a call to `PDAF3_set_parallel`. Implementations with PDAF V2.x usually do not include this call, but provide the variables to PDAF in the call to `PDAF_init`.
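To make the calling context concrete, the following is a hedged sketch of how the main program of an offline assimilation code typically uses the routine. Only `init_parallel_pdaf`, the module `mod_parallel_pdaf`, and the variable `mype_filter` are taken from this page; the surrounding program structure is an assumed outline, not the actual template code:
{{{
! Schematic main program of an offline assimilation code (assumed outline)
PROGRAM main_offline_sketch
  USE mod_parallel_pdaf, ONLY: mype_filter   ! rank within COMM_filter
  IMPLICIT NONE
  INTEGER :: screen = 1     ! 1: show the communicator configuration

  ! Initialize MPI and the communicators; calls PDAF3_init_parallel
  CALL init_parallel_pdaf(screen)

  ! ... initialize model information, call PDAF_init, perform analysis ...

  ! Use mype_filter to restrict screen output to a single process
  IF (mype_filter == 0) WRITE (*,*) 'Assimilation program finished'

  ! ... finalize the parallelization ...
END PROGRAM main_offline_sketch
}}}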
See the [wiki:OfflineAdaptParallelization_PDAF23 page on init_parallel_pdaf in PDAF2] for information on the previous initialization scheme. ||


== Arguments of `init_parallel_pdaf` ==

In the template implementation, the routine `init_parallel_pdaf` has one argument:
{{{
SUBROUTINE init_parallel_pdaf(screen)
}}}
* `screen`: An integer defining whether information output is displayed. The following choices are available:
  * 0: quiet mode - no information is displayed.

…

== Testing the assimilation program ==

One can compile the template code and run it, e.g., with `mpirun -np 4 PDAF_offline`.
If you set `screen=1` you will see close to the beginning of the output the lines
{{{
  PDAF MPI-initialization by PDAF

PDAF    *** Initialize MPI communicators for assimilation with PDAF ***
PDAF    Pconf  Process configuration:
PDAF    Pconf      world    assim      model       couple    assimPE
PDAF    Pconf       rank     rank   task  rank   task  rank    T/F
PDAF    Pconf    ------------------------------------------------------------
PDAF    Pconf          0        0      1     0      1     0     T
PDAF    Pconf          2        2      1     2      3     0     T
PDAF    Pconf          1        1      1     1      2     0     T
PDAF    Pconf          3        3      1     3      4     0     T
}}}
These lines show the configuration of the communicators. Here, 'world rank' is the value of `mype_world`, and 'assim rank' is the value of `mype_filter`. These ranks always start with 0.

In the tutorial code in `tutorial/offline_2D_parallel` we have set `screen=0` in `main_offline.F90`. Thus, the lines above will only be shown if you modify the value of `screen`. Generally, seeing the communicator setup is not really important for the offline mode.
