= Implement3DVarAnalysisPDAF3_3DEnVar =
The different 3D-Var methods in PDAF were explained on the [wiki:Implement3DVarAnalysisOverviewPDAF3 page providing the overview of the Analysis Step for 3D-Var Methods]. Depending on the type of 3D-Var, the background covariance matrix '''B''' is represented either in a parameterized form, by an ensemble, or by a combination of both. The 3D-Var methods that use an ensemble need to transform the ensemble perturbations using an ensemble Kalman filter. For this, PDAF uses the error-subspace transform filter ESTKF. There are two variants: the first uses the localized filter LESTKF, while the second uses the global filter ESTKF.

For the analysis step of 3D Ensemble Var we need different operations related to the observations. These operations are requested by PDAF through call-back routines that are supplied by the user and provided in the PDAF-OMI structure. The names of the routines that are provided by the user are specified in the call to the assimilation routines, as was explained on the [wiki:Implement3DVarAnalysisOverviewPDAF3 page providing the overview of the Analysis Step for 3D-Var Methods].

For completeness we discuss here all user-supplied routines that are specified as arguments. Thus, some of the user-supplied routines, which were explained on the page describing the modification of the model code for the ensemble integration, are repeated here.

== Assimilation Routines ==

The general aspects of the filter-specific (or solver-specific) routines for the 3D-Var analysis step have been described on the page [wiki:OnlineModifyModelforEnsembleIntegration_PDAF3 Modification of the model code for the ensemble integration]. Here, we list the full interface of the routine. Subsequently, the user-supplied routines specified in the call are explained.

…

This routine exists for backward compatibility. In implementations that were done before the release of PDAF V3.0, a 'put_state' routine was used for the ''flexible'' parallelization variant and for the offline mode. When the ''flexible'' implementation variant is chosen for the assimilation system, this routine allows one to port such implementations to the PDAF3 interface with minimal changes.

The interface of the routine is identical with that of `PDAF3_assimilate_en3dvar`, except that the user-supplied routines `distribute_state_pdaf` and `next_observation_pdaf` are missing.
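Since the put_state variant still gathers the forecast fields into the state vector through the user-supplied routine `collect_state_pdaf`, porting a pre-V3 implementation mainly means keeping this routine unchanged. As a reminder, here is a minimal sketch of such a routine; the module `mod_model` and its single 2D field `field(nx, ny)` are hypothetical placeholders for the actual model storage:

{{{
SUBROUTINE collect_state_pdaf(dim_p, state_p)

  USE mod_model, ONLY: nx, ny, field   ! Hypothetical module holding the model field

  IMPLICIT NONE

  INTEGER, INTENT(in) :: dim_p           ! Process-local state dimension
  REAL, INTENT(inout) :: state_p(dim_p)  ! Process-local state vector

  INTEGER :: j

  ! Fill the state vector column by column from the 2D model field
  DO j = 1, ny
     state_p(1 + (j-1)*nx : j*nx) = field(1:nx, j)
  END DO

END SUBROUTINE collect_state_pdaf
}}}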
The full interface of the routine is:

…

It has to apply the adjoint control vector transformation to a state vector and return the control vector. Usually this transformation is the multiplication with the transpose of the square root of the background error covariance matrix '''B'''. For the 3D Ensemble Var, this square root is usually expressed through the ensemble. More complex transformations, including the combination with a parameterized covariance matrix, are possible, and the routine provides the flexibility to implement any transformation.

If the state vector is decomposed in case of parallelization, one needs to take care that the application of the transformation is complete. This usually requires a communication with MPI_Allreduce to obtain a global sum.
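To make this concrete, here is a minimal sketch of an adjoint ensemble control vector transformation that completes the decomposed product with MPI_Allreduce. The argument list follows the documentation of the `cvt_adj_ens_pdaf` call-back (the full interface listing above is authoritative). The module `mod_parallel_pdaf` with the communicator `COMM_filter` follows the PDAF template codes; further, it is assumed for illustration that the control vector is not decomposed (its dimension equals the ensemble size), that the transformation is the plain multiplication with the scaled ensemble perturbations, and that the default `REAL` is compiled as double precision:

{{{
SUBROUTINE cvt_adj_ens_pdaf(iter, dim_p, dim_ens, dim_cv_ens_p, ens_p, Vcv_p, cv_p)

  USE mpi
  USE mod_parallel_pdaf, ONLY: COMM_filter   ! Communicator of the filter processes

  IMPLICIT NONE

  INTEGER, INTENT(in) :: iter                   ! Iteration of the optimization
  INTEGER, INTENT(in) :: dim_p                  ! Process-local state dimension
  INTEGER, INTENT(in) :: dim_ens                ! Ensemble size
  INTEGER, INTENT(in) :: dim_cv_ens_p           ! Dimension of the control vector
  REAL, INTENT(in)    :: ens_p(dim_p, dim_ens)  ! Process-local ensemble
  REAL, INTENT(in)    :: Vcv_p(dim_p)           ! Process-local input state vector
  REAL, INTENT(inout) :: cv_p(dim_cv_ens_p)     ! Control vector (here replicated on all processes)

  REAL :: mean_p(dim_p)          ! Process-local ensemble mean
  REAL :: cv_part(dim_cv_ens_p)  ! Process-local partial product
  REAL :: fact                   ! Scaling of the ensemble perturbations
  INTEGER :: member, MPIerr

  ! Adjoint transform cv = V^T v: the columns of V are the scaled ensemble
  ! perturbations. Each process holds only its part of the state vector,
  ! so the local product yields only a partial sum.
  fact = 1.0 / SQRT(REAL(dim_ens - 1))
  mean_p = SUM(ens_p, DIM=2) / REAL(dim_ens)

  DO member = 1, dim_ens
     cv_part(member) = fact * DOT_PRODUCT(ens_p(:, member) - mean_p, Vcv_p)
  END DO

  ! Complete the transformation: sum the partial products over all processes
  CALL MPI_Allreduce(cv_part, cv_p, dim_cv_ens_p, MPI_DOUBLE_PRECISION, &
       MPI_SUM, COMM_filter, MPIerr)

END SUBROUTINE cvt_adj_ens_pdaf
}}}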