Changes between Version 22 and Version 23 of AdaptParallelization


Timestamp: Feb 13, 2012, 4:59:12 PM
Author: lnerger

Legend:

Unmodified: both the v22 and v23 line numbers are shown
Added: only the v23 (right) line number is shown
Removed: only the v22 (left) line number is shown
Modified: a removed v22 line is followed directly by its v23 replacement
  • AdaptParallelization

    v22 v23  
78  78  This completes the adaptation of the parallelization. The compilation of the model has to be adjusted for the added files holding the routine `init_parallel_pdaf` and the module `mod_parallel`. One can test the extension by running the compiled model. It should run as it did without these changes, because `mod_parallel` defines by default that a single model task is executed (`n_modeltasks=1`). If `screen` is set to 1 in the call to `init_parallel_pdaf`, the standard output should include lines like
79  79  {{{
80       PDAF: Initializing communicators
    80   Initialize communicators for assimilation with PDAF
81  81
82  82                    PE configuration:

89  89       3       3      1      3      4      0       T
90  90  }}}
91      These lines show the configuration of the communicators. This example was executed using 4 processes and `n_modeltasks=1`.
    91  These lines show the configuration of the communicators. This example was executed using 4 processes and `n_modeltasks=1`. (In this case, the variables `npes_filter` and `npes_model` will have a value of 4.)
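As a rough illustration of what `npes_model` and `npes_filter` describe, here is a minimal Fortran sketch that queries the size of the model communicator. It assumes, as in the text, that `COMM_model` is provided by `mod_parallel`; the routine itself is purely illustrative:
{{{
! Minimal sketch: query the number of processes in the model
! communicator after init_parallel_pdaf has split MPI_COMM_WORLD.
! Module and variable names follow the text (mod_parallel,
! COMM_model); the routine itself is only an illustration.
SUBROUTINE show_model_comm_size()
  USE mod_parallel, ONLY: COMM_model
  IMPLICIT NONE
  INCLUDE 'mpif.h'

  INTEGER :: nprocs, ierr

  ! With 4 processes and n_modeltasks=1 this prints 4, the value
  ! that init_parallel_pdaf stores in npes_model; npes_filter is
  ! obtained analogously from the filter communicator.
  CALL MPI_Comm_size(COMM_model, nprocs, ierr)
  WRITE (*, *) 'Processes in COMM_model:', nprocs
END SUBROUTINE show_model_comm_size
}}}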
92  92
93  93
94      To test parallel model tasks one has to set the variable `n_modeltasks` to a value larger than one. Now, the model will execute parallel model tasks. This can result in the following effects:
    94  To test parallel model tasks, one has to set the variable `n_modeltasks` to a value larger than one. Now, the model will execute parallel model tasks. For `n_modeltasks=4` and running on a total of 4 processes, the output from `init_parallel_pdaf` will look like the following:
    95  {{{
    96   Initialize communicators for assimilation with PDAF
    97
    98                    PE configuration:
    99     world   filter     model        couple     filterPE
    100    rank     rank   task   rank   task   rank    T/F
    101   ----------------------------------------------------------
    102      0       0      1      0      1      0       T
    103      1              2      0      1      1       F
    104      2              3      0      1      2       F
    105      3              4      0      1      3       F
    106
    107 }}}
    108 In this example only a single process will compute the filter analysis (`filterPE=.true.`). There are now 4 model tasks, each using a single process.
    109
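How `n_modeltasks` gets its value depends on the model code. A minimal sketch, assuming it is simply the variable declared in `mod_parallel` and is read from a hypothetical namelist file `pdaf.nml` before `init_parallel_pdaf` splits the communicators, could look like this:
{{{
! Minimal sketch, assuming n_modeltasks is the variable declared in
! mod_parallel. Reading the value from a namelist file (here
! hypothetically named pdaf.nml) allows changing the number of model
! tasks without recompiling; it must be set before init_parallel_pdaf
! is called.
SUBROUTINE read_config_pdaf()
  USE mod_parallel, ONLY: n_modeltasks
  IMPLICIT NONE

  INTEGER :: tasks, nmlunit
  NAMELIST /pdaf_nml/ tasks

  tasks = 1                            ! default: single model task
  OPEN (NEWUNIT=nmlunit, FILE='pdaf.nml', STATUS='old', ACTION='read')
  READ (nmlunit, NML=pdaf_nml)         ! e.g. &pdaf_nml tasks=4 /
  CLOSE (nmlunit)
  n_modeltasks = tasks
END SUBROUTINE read_config_pdaf
}}}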
    110 Using multiple model tasks can result in the following effects:
95  111  * The standard screen output of the model can be shown multiple times. This is due to the fact that often the process with `rank=0` performs screen output. By splitting the communicator `COMM_model`, there will be as many processes with rank 0 as there are model tasks.
96  112  * Each model task might write file output. This can lead to the case that several processes try to generate the same file or try to write into the same file. In the extreme case this can result in a program crash. For this reason, it might be useful to restrict the file output to a single model task (see the sketch after this list). This can be implemented using the variable `task_id`, which is initialized by `init_parallel_pdaf` and holds the index of the model task, ranging from 1 to `n_modeltasks`. (For the ensemble assimilation, it can be useful to switch off the regular file output of the model completely. As each model task holds only a single member of the ensemble, this output might not be useful. In this case, the file output for the state estimate and perhaps all ensemble members should be done in the pre/poststep routine of the assimilation system.)
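A minimal sketch of such a restriction, assuming `task_id` is available from `mod_parallel` as described above and using a hypothetical placeholder `write_model_output` for the model's own output routine:
{{{
! Minimal sketch: let only the first model task write the regular
! model output. task_id is initialized by init_parallel_pdaf and
! ranges from 1 to n_modeltasks; write_model_output stands in for
! the model's own (hypothetical) output routine.
SUBROUTINE output_if_first_task(step)
  USE mod_parallel, ONLY: task_id
  IMPLICIT NONE

  INTEGER, INTENT(in) :: step   ! current model time step

  IF (task_id == 1) THEN
     CALL write_model_output(step)
  END IF
END SUBROUTINE output_if_first_task
}}}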
    113
97  114
98  115  '''Remark:''' For the compilation with a real MPI library, one has to ensure that the header file (`mpif.h`) of the MPI library is used for both the model and PDAF. (Thus, in the include file for make, one might have to adjust the setting of `MPI_INC`.) The include directory `dummympi` specified in some of the include files will not be compatible with all MPI implementations.
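For illustration, the corresponding setting in the include file for make might look as follows; the include path is only an assumed example and depends on the actual MPI installation:
{{{
# Sketch of the make include setting: point MPI_INC at the include
# directory of the real MPI library instead of the dummympi directory.
# The path below is only an example for an assumed OpenMPI installation.
MPI_INC = -I/usr/lib/openmpi/include
}}}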