Quick Reference
- alignment
- data alignment
The requirement that data be stored at memory addresses that are multiples of some value, usually the size of the datatype. See also the Wikipedia article on data structure alignment.
- intracommunicator
An object managing a group of processes and a communication context. Processes in the communicator can perform message passing operations with each other, but are isolated from processes not in the communicator.
- intercommunicator
A communicator for group-to-group communication.
- origin process
In RMA, the process invoking MPI_Put and MPI_Get to access the window of another process (the target process).
- target process
In RMA, the process whose window is accessed by MPI_Put and MPI_Get invoked by another process (the origin process).
- window
- memory window
- remote memory window
Process-local memory allocated for RMA operations. It is of implementation-dependent type MPI_Win. Windows can be created with a variety of MPI functions.
- RMA
- remote memory access
- one-sided communication
Communication paradigm allowing processes to access memory on other processes (remote memory) without the latter’s explicit involvement.
- synchronization
The necessary coordination of remote memory accesses. It can be active or passive.
- typemap
Abstraction used to represent a datatype in MPI. It is an associative array (map) with datatypes, as understood by MPI, as keys and displacements, in bytes, as values. The displacements are computed relative to the start of the buffer the datatype describes.
\[\textrm{Typemap} = \{ \textrm{Datatype}_{0}: \textrm{Displacement}_{0}, \ldots, \textrm{Datatype}_{n-1}: \textrm{Displacement}_{n-1} \}\]
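As an illustration, consider a C struct holding an int and a double; its typemap pairs each member's MPI datatype with that member's byte displacement. A minimal sketch (the struct Pair is hypothetical):

#include <stddef.h>  /* offsetof */
#include <mpi.h>

/* A hypothetical struct and the typemap describing it:
 * Typemap = { MPI_INT: offsetof(Pair, i), MPI_DOUBLE: offsetof(Pair, d) }
 * i.e. typically { MPI_INT: 0, MPI_DOUBLE: 8 } on platforms that align
 * doubles to 8 bytes. */
typedef struct {
    int i;
    double d;
} Pair;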
Visual glossary
Todo
Explain the graphical conventions for MPI_Send, MPI_Recv, MPI_Get, MPI_Put, etc.
MPI functions
MPI_Comm_split
Split an existing communicator.
int MPI_Comm_split(MPI_Comm comm, int color, int key, MPI_Comm *newcomm)
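A minimal usage sketch, splitting MPI_COMM_WORLD into one sub-communicator of even-ranked and one of odd-ranked processes (the variable names are illustrative):

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Ranks with the same color end up in the same sub-communicator;
     * key orders the ranks within it. */
    MPI_Comm subcomm;
    MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &subcomm);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}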
MPI_Type_get_extent
Retrieve lower bound and extent of a type known to MPI.
int MPI_Type_get_extent(MPI_Datatype type, MPI_Aint *lb, MPI_Aint *extent)
MPI_Type_size
Retrieve the size of a type known to MPI.
int MPI_Type_size(MPI_Datatype type, int *size)
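A sketch querying both properties for MPI_DOUBLE; on most platforms this prints size 8 and extent 8, though the exact values are implementation-dependent:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int size;
    MPI_Aint lb, extent;
    MPI_Type_size(MPI_DOUBLE, &size);
    MPI_Type_get_extent(MPI_DOUBLE, &lb, &extent);

    printf("MPI_DOUBLE: size=%d, lb=%ld, extent=%ld\n",
           size, (long)lb, (long)extent);

    MPI_Finalize();
    return 0;
}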
MPI_Pack
Pack data into a message. The message is in contiguous memory.
int MPI_Pack(const void *inbuf, int incount, MPI_Datatype datatype, void *outbuf, int outsize, int *position, MPI_Comm comm)
MPI_Unpack
Unpack a message into data in contiguous memory.
int MPI_Unpack(const void *inbuf, int insize, int *position, void *outbuf, int outcount, MPI_Datatype datatype, MPI_Comm comm)
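A sketch that packs an int and a double into one buffer and unpacks them again in the same order; the 64-byte buffer size is an illustrative assumption (real code should size it with MPI_Pack_size):

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int i = 42;
    double d = 3.14;
    char buffer[64];   /* assumed large enough; use MPI_Pack_size to be safe */
    int position = 0;

    /* Pack both values back to back into the buffer */
    MPI_Pack(&i, 1, MPI_INT, buffer, sizeof(buffer), &position, MPI_COMM_WORLD);
    MPI_Pack(&d, 1, MPI_DOUBLE, buffer, sizeof(buffer), &position, MPI_COMM_WORLD);

    /* Unpack in the same order */
    int j;
    double e;
    position = 0;
    MPI_Unpack(buffer, sizeof(buffer), &position, &j, 1, MPI_INT, MPI_COMM_WORLD);
    MPI_Unpack(buffer, sizeof(buffer), &position, &e, 1, MPI_DOUBLE, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}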
MPI_Type_create_struct
Create a new MPI datatype given its typemap. This function replaces the deprecated MPI_Type_struct.
int MPI_Type_create_struct(int count, const int array_of_block_lengths[], const MPI_Aint array_of_displacements[], const MPI_Datatype array_of_types[], MPI_Datatype *newtype)
MPI_Type_commit
Publish a new type to the MPI runtime. You can only use a new type in MPI routines after calling this routine.
int MPI_Type_commit(MPI_Datatype *datatype)
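A sketch that builds an MPI datatype matching the hypothetical Pair struct from the typemap entry above, commits it, and frees it:

#include <stddef.h>  /* offsetof */
#include <mpi.h>

typedef struct {
    int i;
    double d;
} Pair;

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int blocklengths[2] = {1, 1};
    MPI_Aint displacements[2] = {offsetof(Pair, i), offsetof(Pair, d)};
    MPI_Datatype types[2] = {MPI_INT, MPI_DOUBLE};

    MPI_Datatype pair_type;
    MPI_Type_create_struct(2, blocklengths, displacements, types, &pair_type);
    MPI_Type_commit(&pair_type);   /* required before use in communication */

    /* ... use pair_type in sends and receives ... */

    MPI_Type_free(&pair_type);
    MPI_Finalize();
    return 0;
}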
MPI_Type_contiguous
Create a homogeneous collection of a given datatype. Elements are contiguous: elements \(n\) and \(n-1\) are separated by the extent of the old type.
int MPI_Type_contiguous(int count, MPI_Datatype oldtype, MPI_Datatype *newtype)
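A sketch creating a type equivalent to three contiguous doubles (the name vec3 is illustrative):

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* vec3: three doubles laid out back to back */
    MPI_Datatype vec3;
    MPI_Type_contiguous(3, MPI_DOUBLE, &vec3);
    MPI_Type_commit(&vec3);

    /* sending 1 element of vec3 is equivalent to sending 3 of MPI_DOUBLE */

    MPI_Type_free(&vec3);
    MPI_Finalize();
    return 0;
}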
MPI_Type_vector
Create a collection of count elements of oldtype separated by a stride that is an arbitrary multiple of the extent of the old type.
int MPI_Type_vector(int count, int blocklength, int stride, MPI_Datatype oldtype, MPI_Datatype *newtype)
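A sketch describing one column of a row-major 4x4 matrix of doubles: 4 blocks of 1 element each, with a stride of 4 elements between block starts:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* One column of a row-major 4x4 matrix of doubles:
     * count=4 blocks, blocklength=1 element, stride=4 elements */
    MPI_Datatype column;
    MPI_Type_vector(4, 1, 4, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}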
MPI_Type_indexed
Create a type with non-homogeneous separations between the elements. Each displacement is given as a multiple of the extent of the old type.
int MPI_Type_indexed(int count, const int array_of_blocklengths[], const int array_of_displacements[], MPI_Datatype oldtype, MPI_Datatype *newtype)
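A sketch picking two irregularly spaced blocks out of an array of doubles (block lengths and displacements are illustrative):

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Two blocks: 2 elements starting at offset 0, and 3 elements
     * starting at offset 5 (offsets in units of the old type's extent) */
    int blocklengths[2] = {2, 3};
    int displacements[2] = {0, 5};

    MPI_Datatype irregular;
    MPI_Type_indexed(2, blocklengths, displacements, MPI_DOUBLE, &irregular);
    MPI_Type_commit(&irregular);

    MPI_Type_free(&irregular);
    MPI_Finalize();
    return 0;
}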
MPI_Type_create_hvector
Create a collection of count elements of oldtype. The separation between elements in a hvector is expressed in bytes, rather than as a multiple of the extent.
int MPI_Type_create_hvector(int count, int blocklength, MPI_Aint stride, MPI_Datatype oldtype, MPI_Datatype *newtype)
MPI_Type_create_hindexed
Create a type with non-homogeneous separations between the elements expressed in bytes, rather than as multiples of the extent.
int MPI_Type_create_hindexed(int count, const int array_of_blocklengths[], const MPI_Aint array_of_displacements[], MPI_Datatype oldtype, MPI_Datatype *newtype)
MPI_Type_free
Free an MPI_Datatype object.
int MPI_Type_free(MPI_Datatype *type)
MPI_Get
Load data from a remote memory window.
int MPI_Get(void *origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Win win)
MPI_Put
Store data to a remote memory window.
int MPI_Put(const void *origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Win win)
MPI_Accumulate
Accumulate data into a target process through remote memory access.
int MPI_Accumulate(const void *origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Op op, MPI_Win win)
MPI_Win_create
Creates a window from already allocated memory.
int MPI_Win_create(void *base, MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, MPI_Win *win)
MPI_Win_allocate
Allocates memory and creates the window object.
int MPI_Win_allocate(MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, void *baseptr, MPI_Win *win)
MPI_Win_allocate_shared
Allocates memory shared among the processes in the communicator and creates the window object.
int MPI_Win_allocate_shared(MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, void *baseptr, MPI_Win *win)
MPI_Win_create_dynamic
Creates a window with no memory attached; the window-memory pairing is deferred to later calls of MPI_Win_attach.
int MPI_Win_create_dynamic(MPI_Info info, MPI_Comm comm, MPI_Win *win)
MPI_Win_fence
Synchronization routine in active target RMA. It opens and closes both an access and an exposure epoch.
int MPI_Win_fence(int assert, MPI_Win win)
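A sketch of active target synchronization with fences: every rank allocates a window of one int, and each rank writes its own rank number into the window of its right neighbor (names are illustrative):

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Allocate window memory and create the window in one call */
    int *win_buf;
    MPI_Win win;
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &win_buf, &win);
    *win_buf = -1;

    int right = (rank + 1) % size;

    MPI_Win_fence(0, win);                 /* open the epoch */
    MPI_Put(&rank, 1, MPI_INT, right, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);                 /* close the epoch: puts are now visible */

    /* win_buf now holds the rank of our left neighbor */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}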
MPI_Win_post
Synchronization routine in active target RMA. Starts an exposure epoch.
int MPI_Win_post(MPI_Group group, int assert, MPI_Win win)
MPI_Win_start
Synchronization routine in active target RMA. Starts an access epoch.
int MPI_Win_start(MPI_Group group, int assert, MPI_Win win)
MPI_Win_complete
Synchronization routine in active target RMA. Finishes an access epoch.
int MPI_Win_complete(MPI_Win win)
MPI_Win_wait
Synchronization routine in active target RMA. Finishes an exposure epoch.
int MPI_Win_wait(MPI_Win win)
MPI_Win_test
Synchronization routine in active target RMA. This is the non-blocking version of MPI_Win_wait and finishes an exposure epoch.
int MPI_Win_test(MPI_Win win, int *flag)
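A sketch of post-start-complete-wait synchronization between exactly two ranks: rank 0 exposes its window while rank 1 accesses it. The group arguments restrict synchronization to the named peer:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* run with exactly 2 ranks */

    int buf = rank;
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    MPI_Group world_group, peer_group;
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    int peer = 1 - rank;
    MPI_Group_incl(world_group, 1, &peer, &peer_group);

    if (rank == 0) {
        MPI_Win_post(peer_group, 0, win);   /* expose our window to rank 1 */
        MPI_Win_wait(win);                  /* block until rank 1 is done */
    } else {
        int value = 42;
        MPI_Win_start(peer_group, 0, win);  /* start access to rank 0 */
        MPI_Put(&value, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
        MPI_Win_complete(win);              /* finish the access epoch */
    }

    MPI_Group_free(&peer_group);
    MPI_Group_free(&world_group);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}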
MPI_Win_lock
Synchronization routine in passive target RMA. Locks a memory window.
int MPI_Win_lock(int lock_type, int rank, int assert, MPI_Win win)
MPI_Win_unlock
Synchronization routine in passive target RMA. Unlocks a memory window.
int MPI_Win_unlock(int rank, MPI_Win win)
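A sketch of passive target synchronization: rank 1 locks rank 0's window, puts a value, and unlocks, all without rank 0's explicit involvement:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* run with at least 2 ranks */

    int buf = 0;
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    if (rank == 1) {
        int value = 7;
        /* Exclusive lock on rank 0's window; rank 0 does not participate */
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
        MPI_Put(&value, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
        MPI_Win_unlock(0, win);   /* the put is complete on return */
    }

    MPI_Win_free(&win);   /* collective; also a synchronization point here */
    MPI_Finalize();
    return 0;
}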
MPI_Isend
Starts a non-blocking send.
int MPI_Isend(const void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
MPI_Irecv
Starts a non-blocking receive.
int MPI_Irecv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)
MPI_Wait
Returns when the operation is complete.
int MPI_Wait(MPI_Request *request, MPI_Status *status)
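A sketch of a non-blocking exchange between two ranks: each posts an MPI_Irecv and an MPI_Isend, then waits on both requests, avoiding the deadlock a naive blocking exchange could cause:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* run with exactly 2 ranks */
    int peer = 1 - rank;

    int sendval = rank, recvval = -1;
    MPI_Request reqs[2];

    MPI_Irecv(&recvval, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendval, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Buffers must not be touched until the requests complete */
    MPI_Wait(&reqs[0], MPI_STATUS_IGNORE);
    MPI_Wait(&reqs[1], MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}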
MPI_Waitany
Waits until exactly one operation completes.
int MPI_Waitany(int count, MPI_Request array_of_requests[], int *index, MPI_Status *status)
MPI_Waitsome
Waits until at least one operation completes.
int MPI_Waitsome(int incount, MPI_Request array_of_requests[], int *outcount, int array_of_indices[], MPI_Status array_of_statuses[])
MPI_Waitall
Waits until all operations complete.
int MPI_Waitall(int count, MPI_Request array_of_requests[], MPI_Status array_of_statuses[])
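A sketch where rank 0 posts one non-blocking send per other rank and completes them all at once with MPI_Waitall:

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        MPI_Request *reqs = malloc((size - 1) * sizeof(MPI_Request));
        int payload = 99;
        for (int dest = 1; dest < size; ++dest)
            MPI_Isend(&payload, 1, MPI_INT, dest, 0, MPI_COMM_WORLD,
                      &reqs[dest - 1]);
        MPI_Waitall(size - 1, reqs, MPI_STATUSES_IGNORE);
        free(reqs);
    } else {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}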
MPI_Test
Returns immediately whether the operation is complete.
int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
MPI_Testany
Returns immediately whether any one of the operations has completed.
int MPI_Testany(int count, MPI_Request array_of_requests[], int *index, int *flag, MPI_Status *status)
MPI_Testsome
Like MPI_Waitsome, but returns immediately.
int MPI_Testsome(int incount, MPI_Request array_of_requests[], int *outcount, int array_of_indices[], MPI_Status array_of_statuses[])
MPI_Testall
Returns immediately whether all operations have completed.
int MPI_Testall(int count, MPI_Request array_of_requests[], int *flag, MPI_Status array_of_statuses[])
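A sketch of polling for completion with MPI_Test, overlapping useful work with an outstanding receive (do_some_work is a hypothetical helper):

#include <mpi.h>

/* hypothetical helper representing work overlapped with communication */
static void do_some_work(void) { /* ... */ }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* run with exactly 2 ranks */

    if (rank == 0) {
        int value;
        MPI_Request req;
        MPI_Irecv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);

        int flag = 0;
        while (!flag) {
            do_some_work();                       /* overlap computation */
            MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
        }
    } else if (rank == 1) {
        int value = 5;
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}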
MPI_Init_thread
Initializes MPI and the threading environment within it. Should be preferred to MPI_Init by thread-aware applications.
int MPI_Init_thread(int *argc, char ***argv, int required, int *provided)
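A sketch requesting full thread support and checking what level the implementation actually granted:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* The implementation may grant less than requested */
    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "warning: full thread support not available\n");

    MPI_Finalize();
    return 0;
}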
MPI_Query_thread
Returns the current level of threading support.
int MPI_Query_thread(int *provided)
MPI_Is_thread_main
Returns whether the calling thread is the one that called MPI_Init or MPI_Init_thread.
int MPI_Is_thread_main(int *flag)
MPI_Ireduce
Non-blocking variant of MPI_Reduce.
int MPI_Ireduce(const void* sendbuf, void* recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm, MPI_Request *request)
MPI_Ibarrier
Non-blocking variant of MPI_Barrier.
int MPI_Ibarrier(MPI_Comm comm, MPI_Request *request)
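A sketch of a non-blocking reduction that overlaps the collective with local work before waiting on the request (do_other_work is a hypothetical helper):

#include <mpi.h>

/* hypothetical helper representing work overlapped with the collective */
static void do_other_work(void) { /* ... */ }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int local = rank, total = 0;
    MPI_Request req;
    MPI_Ireduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD, &req);

    do_other_work();                      /* overlap with the reduction */

    MPI_Wait(&req, MPI_STATUS_IGNORE);    /* total is now valid on rank 0 */

    MPI_Finalize();
    return 0;
}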
MPI_Barrier
Ensures all ranks arrive at this call before any of them proceeds past it.
int MPI_Barrier(MPI_Comm comm)
MPI_Bcast
Sends data from one rank to all other ranks.
int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
MPI_Reduce
Combines data from all ranks using an operation and returns the result to a single rank.
int MPI_Reduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
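A sketch broadcasting a parameter from rank 0 and then summing one contribution per rank back to rank 0:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Rank 0 chooses a parameter; every rank receives it */
    int param = (rank == 0) ? 10 : 0;
    MPI_Bcast(&param, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each rank contributes param * rank; rank 0 gets the sum */
    int local = param * rank, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %d\n", total);

    MPI_Finalize();
    return 0;
}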
MPI_Scatter
Distributes distinct chunks of data from one rank across all ranks.
int MPI_Scatter(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
MPI_Gather
Sends data from all ranks to a single rank.
int MPI_Gather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
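A sketch scattering one int to each rank, doubling it locally, and gathering the results back on rank 0 (the fixed size of 4 ranks is an illustrative assumption):

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* assume: run with 4 ranks */

    int senddata[4] = {10, 20, 30, 40};     /* significant on rank 0 only */
    int mine;
    MPI_Scatter(senddata, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);

    mine *= 2;   /* each rank transforms its own chunk */

    int gathered[4];                        /* significant on rank 0 only */
    MPI_Gather(&mine, 1, MPI_INT, gathered, 1, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}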
MPI_Allgather
Gathers data from all ranks and provides the same data to all ranks.
int MPI_Allgather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
MPI_Alltoall
Gathers data from all ranks and provides different parts of the data to different ranks.
int MPI_Alltoall(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
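A sketch of MPI_Alltoall: every rank sends one distinct int to every other rank, so the operation behaves like a transpose of the send buffers (the fixed size of 4 ranks is an illustrative assumption):

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* assume: run with 4 ranks */

    /* sendbuf[i] goes to rank i; recvbuf[i] arrives from rank i */
    int sendbuf[4], recvbuf[4];
    for (int i = 0; i < 4; ++i)
        sendbuf[i] = rank * 10 + i;

    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}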