INTERMEDIATE MPI
The Message Passing Interface (MPI) is the de facto standard for distributed memory parallelism in high performance computing (HPC). MPI is the dominant programming model for modern-day supercomputers and will continue to be critical in enabling researchers to scale up their HPC workloads to forthcoming pre-exascale and exascale systems.
This training material targets programmers who already have experience with basic MPI and are ready to take the next step to more advanced usage. Topics covered include communicators and groups, derived datatypes, one-sided communication, collective communication and hybrid MPI-threading approaches. See below for recommended prerequisite knowledge.
Prerequisites
Before attending this workshop, please make sure that you have access
to a computer with a C compiler and an MPI library installed. If you
have access to a supercomputer (e.g. a SNIC system)
with a compute allocation you can use that during the workshop. Any questions
on how to use a particular HPC resource should be directed to the appropriate
support desk.
You can also use your own computer for this workshop, provided that it has
compilers and an MPI library installed. If you do not already have these
installed, we recommend that you set up an isolated software environment
using conda. For Windows computers we recommend using the Windows
Subsystem for Linux (WSL). Detailed instructions can be found
on the Setting up your system page.
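As a quick check that your compiler and MPI library work, you can build and run a small test program along the following lines. This is a minimal sketch for verification only; the compiler wrapper and launcher are typically called mpicc and mpirun, but the exact names depend on your MPI installation, and the file name mpi_check.c is just an example.

/* mpi_check.c -- hypothetical file name for this verification sketch.
 * Compile (typically):  mpicc mpi_check.c -o mpi_check
 * Run (typically):      mpirun -np 4 ./mpi_check
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}

If each launched process prints its own rank, your environment is ready for the exercises.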
Who is the course for?
This course is for students, researchers, engineers and programmers who already know the basics of MPI and want to learn more advanced MPI topics. To derive benefit from this material you should have attended introductory MPI training and preferably used basic MPI functionality in some code projects. Specifically, this lesson assumes that participants have some prior experience with or knowledge of the following topics (but no expertise is required):
General concepts: distributed memory parallelism, MPI process model
Communicators
Point-to-point communication
Non-blocking point-to-point communication
MPI datatypes
These prerequisites are taught in courses such as PDC’s Introduction to MPI and the SNIC course An introduction to parallel programming using Message Passing with MPI.
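As a rough gauge of the assumed starting level, the short sketch below (our illustration, not part of any lesson episode) uses only the prerequisite features listed above: the MPI process model, the MPI_COMM_WORLD communicator, blocking point-to-point communication and a built-in datatype. If code like this looks routine, you have the expected background.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* rank 0 sends one integer to rank 1 with blocking point-to-point calls */
    if (size >= 2) {
        int value;
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }
    }

    MPI_Finalize();
    return 0;
}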
About the course
This lesson material is developed by the EuroCC National Competence Center Sweden (ENCCS) and taught in ENCCS workshops. It is aimed at researchers and developers who already know the basics of MPI. Each lesson episode has clearly defined learning objectives and includes multiple exercises along with solutions, and is therefore also useful for self-learning. The lesson material is licensed under CC-BY-4.0 and can be reused in any form (with appropriate credit) in other courses and workshops. Instructors who wish to teach this lesson can refer to the Instructor’s guide for practical advice.
Graphical and text conventions
We adopt a few conventions which help organize the material.
- Function signatures
These are shown in a text block marked with a wrench emoji:
MPI_Win_unlock
int MPI_Win_unlock(int rank, MPI_Win win)
The signature can be hidden by clicking the toggle.
- Function parameters
The description of the function parameters will appear in a separate text box. It will be marked with a laptop emoji:
Parameters
rank
The rank whose memory window should be unlocked.
win
The window object.
The description is hidden by default and can be shown by clicking the toggle. (A short usage sketch for MPI_Win_unlock follows this list of conventions.)
- Type-along
The text and code for these activities are in a separate text box, marked with a keyboard emoji:
Let’s look at an example
#include <stdio.h>

int main(int argc, char *argv[])
{
    printf("Hello, world!");
    return 0;
}
The content can be hidden by clicking the toggle.
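As a side note, MPI_Win_unlock, which serves as the specimen function above, belongs to MPI's one-sided (RMA) interface, covered later in the lesson. The sketch below is our own illustration of how the call typically appears in passive-target synchronization: rank 0 locks the window exposed by rank 1, reads one integer with MPI_Get, and then unlocks the window, which completes the transfer.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* every rank exposes one integer in a window */
    int buf = 100 + rank;
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    if (size >= 2 && rank == 0) {
        int remote;
        /* passive-target access: lock, get, unlock */
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        MPI_Get(&remote, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_unlock(1, win);  /* completes the MPI_Get */
        printf("rank 0 read %d from rank 1\n", remote);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}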
See also
There are many free online resources whose contents overlap with those covered in this lesson. Here is a non-comprehensive list:
The EPCC materials on basic and intermediate MPI and advanced topics
The freely available book Parallel Programming for Science and Engineering by Victor Eijkhout.
The MPI Standard. It is very dry, technical material.
You can also consult the following books:
Parallel Programming with MPI by Peter Pacheco.
Using MPI by William Gropp, Ewing Lusk, and Anthony Skjellum.
Using Advanced MPI by William Gropp, Torsten Hoefler, Rajeev Thakur, and Ewing Lusk.
Credits
The lesson file structure and browsing layout are inspired by and derived from work by CodeRefinery licensed under the MIT license. We have copied and adapted most of their license text.
Instructional Material
All ENCCS instructional material is made available under the Creative Commons Attribution license (CC-BY-4.0). The following is a human-readable summary of (and not a substitute for) the full legal text of the CC-BY-4.0 license. You are free:
to share - copy and redistribute the material in any medium or format
to adapt - remix, transform, and build upon the material for any purpose, even commercially.
The licensor cannot revoke these freedoms as long as you follow these license terms:
Attribution - You must give appropriate credit (mentioning that your work is derived from work that is Copyright (c) ENCCS and, where practical, linking to https://enccs.se), provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
No additional restrictions - You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. With the understanding that:
You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.
Software
Except where otherwise noted, the example programs and other software provided by ENCCS are made available under the OSI-approved MIT license.