MFC:Pre_process  v1.0
m_mpi_proxy Module Reference

This module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the role of the proxy is to combine basic MPI commands into more complex procedures so as to achieve the required pre-processing communication goals.

Functions/Subroutines

subroutine s_mpi_initialize ()
 The subroutine initializes the MPI environment and queries both the number of processors that will be available for the job and the rank of the local processor.
 
subroutine s_mpi_abort ()
 The subroutine terminates the MPI execution environment.
 
subroutine s_initialize_mpi_data (q_cons_vf)
 
subroutine s_mpi_barrier ()
 Halts all processes until all have reached the barrier.
 
subroutine s_mpi_bcast_user_inputs ()
 Since only the processor with rank 0 is in charge of reading and checking the consistency of the user-provided inputs, these are not available to the remaining processors. This subroutine is therefore in charge of broadcasting the required information.
 
subroutine s_mpi_decompose_computational_domain ()
 This subroutine takes care of efficiently distributing the computational domain among the available processors, as well as recomputing some of the global parameters so that they reflect the configuration of the sub-domain overseen by the local processor.
 
subroutine s_mpi_reduce_min (var_loc)
 The subroutine takes the input variable and determines its minimum value over the entire computational domain. The result is stored back into the input variable.
 
subroutine s_mpi_finalize ()
 Finalization of all MPI-related processes.
 

Variables

integer, private err_code
 
integer, private ierr
 Generic flags used to identify and report MPI errors.
 

Detailed Description

This module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the role of the proxy is to combine basic MPI commands into more complex procedures so as to achieve the required pre-processing communication goals.

Function/Subroutine Documentation

◆ s_initialize_mpi_data()

subroutine m_mpi_proxy::s_initialize_mpi_data ( type(scalar_field), dimension(sys_size), intent(in)  q_cons_vf)

Definition at line 101 of file m_mpi_proxy.f90.


◆ s_mpi_abort()

subroutine m_mpi_proxy::s_mpi_abort ( )

The subroutine terminates the MPI execution environment.

Definition at line 91 of file m_mpi_proxy.f90.
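
For reference, a minimal sketch of an abort wrapper of this kind, assuming the standard MPI Fortran bindings; the error code passed to MPI_ABORT is illustrative:

    subroutine s_mpi_abort_sketch()
        use mpi
        implicit none
        integer :: ierr
        ! Terminate the MPI execution environment on every processor and
        ! return a non-zero error code to the invoking environment
        call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)
    end subroutine s_mpi_abort_sketch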


◆ s_mpi_barrier()

subroutine m_mpi_proxy::s_mpi_barrier ( )

Halts all processes until all have reached barrier.

Definition at line 140 of file m_mpi_proxy.f90.
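
A minimal sketch of such a barrier wrapper, assuming the standard MPI Fortran bindings:

    subroutine s_mpi_barrier_sketch()
        use mpi
        implicit none
        integer :: ierr
        ! Block until every processor in the communicator has reached this call
        call MPI_BARRIER(MPI_COMM_WORLD, ierr)
    end subroutine s_mpi_barrier_sketch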


◆ s_mpi_bcast_user_inputs()

subroutine m_mpi_proxy::s_mpi_bcast_user_inputs ( )

Since only the processor with rank 0 is in charge of reading and checking the consistency of the user-provided inputs, these are not available to the remaining processors. This subroutine is therefore in charge of broadcasting the required information.

Definition at line 154 of file m_mpi_proxy.f90.
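
A minimal sketch of the broadcasting pattern described above, assuming the standard MPI Fortran bindings; the inputs m, n, p and dt are illustrative placeholders and do not enumerate the actual user inputs handled by m_mpi_proxy:

    subroutine s_mpi_bcast_user_inputs_sketch(m, n, p, dt)
        use mpi
        implicit none
        integer,         intent(inout) :: m, n, p  ! illustrative integer inputs
        real(kind(0d0)), intent(inout) :: dt       ! illustrative real input
        integer :: ierr
        ! Rank 0 has read the inputs; broadcast each of them to the
        ! remaining processors in the communicator
        call MPI_BCAST(m , 1, MPI_INTEGER         , 0, MPI_COMM_WORLD, ierr)
        call MPI_BCAST(n , 1, MPI_INTEGER         , 0, MPI_COMM_WORLD, ierr)
        call MPI_BCAST(p , 1, MPI_INTEGER         , 0, MPI_COMM_WORLD, ierr)
        call MPI_BCAST(dt, 1, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
    end subroutine s_mpi_bcast_user_inputs_sketch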


◆ s_mpi_decompose_computational_domain()

subroutine m_mpi_proxy::s_mpi_decompose_computational_domain ( )

This subroutine takes care of efficiently distributing the computational domain among the available processors, as well as recomputing some of the global parameters so that they reflect the configuration of the sub-domain overseen by the local processor.

Definition at line 412 of file m_mpi_proxy.f90.
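
The documentation above does not spell out the decomposition strategy; the sketch below shows one common approach, an even block split along a single coordinate direction, purely as an illustration of the idea and not as a description of the algorithm actually used here. All names are hypothetical:

    subroutine s_decompose_domain_sketch(m_glb, num_procs, proc_rank, m_loc, offset)
        implicit none
        integer, intent(in)  :: m_glb      ! global number of cells in one direction
        integer, intent(in)  :: num_procs  ! number of processors available
        integer, intent(in)  :: proc_rank  ! rank of the local processor
        integer, intent(out) :: m_loc      ! number of cells overseen locally
        integer, intent(out) :: offset     ! global index of the first local cell
        integer :: rem
        ! Split m_glb cells as evenly as possible; the first `rem` ranks
        ! receive one extra cell so that every cell is assigned exactly once
        m_loc = m_glb/num_procs
        rem   = mod(m_glb, num_procs)
        if (proc_rank < rem) m_loc = m_loc + 1
        offset = proc_rank*(m_glb/num_procs) + min(proc_rank, rem)
    end subroutine s_decompose_domain_sketch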


◆ s_mpi_finalize()

subroutine m_mpi_proxy::s_mpi_finalize ( )

Finalization of all MPI related processes.

Definition at line 859 of file m_mpi_proxy.f90.
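
A minimal sketch of such a finalization wrapper, assuming the standard MPI Fortran bindings:

    subroutine s_mpi_finalize_sketch()
        use mpi
        implicit none
        integer :: ierr
        ! Shut down the MPI execution environment
        call MPI_FINALIZE(ierr)
    end subroutine s_mpi_finalize_sketch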


◆ s_mpi_initialize()

subroutine m_mpi_proxy::s_mpi_initialize ( )

The subroutine initializes the MPI environment and queries both the number of processors that will be available for the job and the rank of the local processor.

Definition at line 63 of file m_mpi_proxy.f90.
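
A minimal sketch of the initialization described above, assuming the standard MPI Fortran bindings; the names num_procs and proc_rank are illustrative and need not match the module variables used by m_mpi_proxy:

    subroutine s_mpi_initialize_sketch(num_procs, proc_rank)
        use mpi
        implicit none
        integer, intent(out) :: num_procs, proc_rank
        integer :: ierr
        ! Start up the MPI environment
        call MPI_INIT(ierr)
        ! Query the number of processors that will be available for the job ...
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
        ! ... and the rank of the local processor
        call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)
    end subroutine s_mpi_initialize_sketch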


◆ s_mpi_reduce_min()

subroutine m_mpi_proxy::s_mpi_reduce_min ( real(kind(0d0)), intent(inout)  var_loc)

The subroutine takes the input variable and determines its minimum value over the entire computational domain. The result is stored back into the input variable.

Parameters
var_loc    On input, holds the local value to be reduced among all the processors in the communicator. On output, holds the minimum value, reduced amongst all of the local values.

Definition at line 834 of file m_mpi_proxy.f90.
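
A minimal sketch of the reduction described above, assuming the standard MPI Fortran bindings and that the reduced result should be available on every processor (hence MPI_ALLREDUCE rather than MPI_REDUCE):

    subroutine s_mpi_reduce_min_sketch(var_loc)
        use mpi
        implicit none
        real(kind(0d0)), intent(inout) :: var_loc
        real(kind(0d0)) :: var_glb
        integer :: ierr
        ! Determine the minimum of the local values across the communicator
        ! and make the result available to every processor
        call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_DOUBLE_PRECISION, &
                           MPI_MIN, MPI_COMM_WORLD, ierr)
        ! Store the reduced value back into the input variable
        var_loc = var_glb
    end subroutine s_mpi_reduce_min_sketch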


Variable Documentation

◆ err_code

integer, private m_mpi_proxy::err_code

Definition at line 53 of file m_mpi_proxy.f90.

◆ ierr

integer, private m_mpi_proxy::ierr

Generic flags used to identify and report MPI errors.

Definition at line 53 of file m_mpi_proxy.f90.