One-sided communication: concepts
Questions
How can we optimize communication?
Objectives
Learn about concepts in MPI for remote-memory access (RMA)
You are already familiar with the MPI_Send/MPI_Recv communication pattern in MPI. This pattern is also called two-sided communication: the two processes implicitly synchronize with each other.
It is like calling someone on the phone: you wait for the other person to pick up before actually delivering your message.
However, this is not always the most efficient pattern for transferring data. MPI offers routines for remote memory access (RMA), also known as one-sided communication, where a process can access data on another process, as long as that data is made available in special memory windows.
Proceeding with our telecommunications analogy: one-sided communication resembles an email. Your message will sit in your friend’s inbox, but you are immediately free to do other things after hitting the send button! Your friend will read the email at their leisure.
At a glance: how does it work?
Let us look at the following figure: which routines does MPI provide for process 0 to communicate a variable in its local memory to process 1?
It is foundational to MPI that every interaction between processes be explicit, so a simple assignment will not do. First, we must make a portion of memory on the target process, process 1 in this case, visible for process 0 to manipulate. We call this a window and we will represent it as a blue diamond.
Once a window into the memory of process 1 is open, process 0 can access and manipulate it. Process 0 can put (store) data from its local memory into the memory window of process 1, using MPI_Put:
In this example, process 0 is the origin process: it participates actively in the communication by calling the RMA routine MPI_Put. Process 1 is the target process.
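As a rough sketch of how this looks in code (not taken from the lesson; the variable names are ours, and the MPI_Win_fence calls used to synchronize the access are explained in a later episode), process 0 might put an integer into a window exposed by process 1 like this:

```c
/* Minimal MPI_Put sketch: run with at least 2 processes.
 * Error handling is omitted for brevity. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, buffer = 0;
    MPI_Win window;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every process exposes its local "buffer" for remote access. */
    MPI_Win_create(&buffer, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &window);

    MPI_Win_fence(0, window);       /* open an epoch */
    if (rank == 0) {
        int value = 42;
        /* origin: process 0; target: process 1, displacement 0 */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, window);
    }
    MPI_Win_fence(0, window);       /* close the epoch: the put is complete */

    if (rank == 1)
        printf("Rank 1 now holds: %d\n", buffer);

    MPI_Win_free(&window);
    MPI_Finalize();
    return 0;
}
```

Note that only process 0, the origin, calls MPI_Put; process 1 participates passively, having exposed its memory through the window.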
Conversely, process 0 might have populated its memory window with some data: any other process in the communicator can now get (load) this data, using MPI_Get:
In this scenario, process 1 is the origin process: it participates actively in the communication by calling the RMA routine MPI_Get. Process 0 is the target process.
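The corresponding code fragment might look as follows (a sketch under our own assumptions: "window" is an MPI_Win created earlier over an int on every rank, and the MPI_Win_fence calls, covered in a later episode, delimit the access):

```c
int result = 0;
MPI_Win_fence(0, window);
if (rank == 1) {
    /* origin: process 1; target: process 0, displacement 0 */
    MPI_Get(&result, 1, MPI_INT, 0, 0, 1, MPI_INT, window);
}
MPI_Win_fence(0, window);  /* "result" is only valid after the closing fence */
```

Here only process 1, the origin, makes an MPI call; process 0 is the passive target.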
Note
With the term memory window, or simply window, we refer to the memory, local to each process, that is reserved for remote memory accesses. A window object is instead the collection of the windows of all processes in the communicator, and it has type MPI_Win.
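To make the distinction concrete, here is a sketch (our own illustration, not from the lesson) in which each process contributes some local memory, while the single MPI_Win handle represents the collection of all these windows across the communicator:

```c
/* Each process allocates and exposes 10 doubles of local memory
 * (its window); "win" is the window object spanning the communicator. */
double *local;
MPI_Win win;
MPI_Win_allocate(10 * sizeof(double), sizeof(double),
                 MPI_INFO_NULL, MPI_COMM_WORLD, &local, &win);

/* ... RMA epochs and MPI_Put/MPI_Get calls go here ... */

MPI_Win_free(&win);  /* collective: all processes free the object together */
```

MPI_Win_allocate both allocates the local memory and creates the window object in one collective call; MPI_Win_create instead attaches a window to memory the user has already allocated.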
Graphical conventions
We have introduced these graphical conventions:
What kind of operations are being carried out?
Solution
A is the correct answer. Process 1 initiates the one-sided memory access, in order to put (store) the contents of its local memory to the remote memory window opened on process 0.
C is the correct answer. This is the standard, blocking two-sided communication pattern in MPI.
D is the correct answer. Process 1 initiates the one-sided memory access in order to get (load) the contents of the remote memory window on process 0 to its local memory.
Both B and D are valid answers. The figure depicts a memory operation within process 0, which does not involve communication with any other process and thus pertains to the programming language, not to MPI.
D is the correct answer. This is the standard, blocking two-sided communication pattern in MPI: it does not matter whether the message stems from memory local to process 0 or its remotely accessible window.
B is the correct answer. Different processes can only interact with explicit two-sided communication or by first publishing to their remotely accessible window.
Things are rarely as simple as in a figure. With great power comes great responsibility: operations on windows are non-blocking. While non-blocking operations allow the programmer to overlap computation and communication, they also place the burden of explicit synchronization on the programmer. One-sided communication has its own styles of synchronization, which we will cover in the episode One-sided communication: synchronization. The following figure shows, schematically, the concept of epochs in RMA and the life cycle of a window object.
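Schematically, the life cycle of a window object can be summarized as follows (a pseudocode outline; the actual synchronization routines are the subject of the synchronization episode):

```
MPI_Win_create / MPI_Win_allocate   # collective: create the window object
open an epoch                       # e.g. with a synchronization call
  MPI_Put / MPI_Get / ...           # RMA operations on remote windows
close the epoch                     # RMA operations are now complete
MPI_Win_free                        # collective: destroy the window object
```

Between the creation and destruction of the window object, any number of epochs may be opened and closed.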
See also
The lecture covering MPI RMA from EPCC is available here
Chapter 3 of the Using Advanced MPI by William Gropp et al. [GHTL14]
Keypoints
The MPI model for remote memory accesses.
Window objects and memory windows.
Timeline of RMA and the importance of synchronization.