How to compile an MPI-included C program using CMake

OpenMP: Is this a question about OpenMP? Then all you have to do is compile with -fopenmp, which you can do by appending it to CMAKE_C_FLAGS, for example: SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fopenmp"). MPI: For MPI, you have to find MPI first: find_package(MPI) # make it REQUIRED, if you want. Then add its header files to your search … Read more

Sending columns of a matrix using MPI_Scatter

There’s a long description of this issue in my answer to this question: the fact that many people have these questions is proof that it’s not obvious and the ideas take some getting used to. The important thing to know is what memory layout the MPI datatype describes. The calling sequence to MPI_Type_vector is: int … Read more
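
For a concrete picture of the memory-layout question, here is a minimal sketch of my own (the matrix size N, the one-column-per-rank split, and the requirement that the communicator has exactly N ranks are all hypothetical, not from the excerpt) that uses MPI_Type_vector plus MPI_Type_create_resized to scatter columns of a row-major matrix:

#include <mpi.h>
#include <stdio.h>

#define N 4   /* hypothetical matrix size */

int main(int argc, char **argv) {
    int rank, size;
    double a[N][N];   /* row-major matrix, filled on the root */
    double col[N];    /* one column per process */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size != N) {   /* keep the example simple: one column per rank */
        if (rank == 0) printf("run with exactly %d ranks\n", N);
        MPI_Finalize();
        return 1;
    }

    if (rank == 0)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = i * N + j;

    /* N blocks of 1 double, a stride of N doubles apart: one column */
    MPI_Datatype coltype, colresized;
    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &coltype);
    /* shrink the extent to one double so column k starts k doubles in */
    MPI_Type_create_resized(coltype, 0, sizeof(double), &colresized);
    MPI_Type_commit(&colresized);

    MPI_Scatter(a, 1, colresized, col, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    printf("rank %d got a column starting with %g\n", rank, col[0]);

    MPI_Type_free(&colresized);
    MPI_Type_free(&coltype);
    MPI_Finalize();
    return 0;
}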

MPI partition matrix into blocks

What you’ve got is pretty much “best practice”; it’s just a bit confusing until you get used to it. Two things, though: First, be careful with this: sizeof(MPI_CHAR) is, I assume, 4 bytes, not 1. MPI_CHAR is an (integer) constant that describes (to the MPI library) a character. You probably want sizeof(char), or SIZE/2*sizeof(char), or … Read more
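
To make the sizeof distinction concrete, here is a small sketch of my own (SIZE is a hypothetical dimension standing in for the question's array size):

#include <mpi.h>
#include <stdlib.h>

#define SIZE 8   /* hypothetical global dimension */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Misleading: MPI_CHAR is a datatype handle, so sizeof(MPI_CHAR) is the
       size of that handle (often 4 or 8 bytes depending on the MPI
       implementation), not the size of a char. */
    char *bad  = malloc((SIZE / 2) * (SIZE / 2) * sizeof(MPI_CHAR));

    /* Intended: size the buffer in units of the C type actually stored. */
    char *good = malloc((SIZE / 2) * (SIZE / 2) * sizeof(char));

    free(bad);
    free(good);
    MPI_Finalize();
    return 0;
}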

Parallel Algorithms for Generating Prime Numbers (possibly using Hadoop’s map reduce)

Here’s an algorithm that is built on mapping and reducing (folding). It expresses the sieve of Eratosthenes as P = {3, 5, 7, …} \ ⋃ { {p², p² + 2p, p² + 4p, …} | p in P } for the odd primes (i.e. without the 2). The folding tree is indefinitely deepening to the right, where each prime number … Read more
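
As a purely sequential C illustration of what that set expression computes (my own sketch of a plain odd-only sieve, not the map/reduce folding structure the answer goes on to build; LIMIT is an arbitrary bound):

#include <stdio.h>
#include <stdbool.h>
#include <string.h>

#define LIMIT 100   /* hypothetical upper bound */

int main(void) {
    /* composite[i] marks the odd number 2*i + 3 as composite */
    bool composite[(LIMIT - 1) / 2];
    memset(composite, 0, sizeof composite);
    int n = (LIMIT - 1) / 2;   /* count of odd numbers 3, 5, ..., <= LIMIT */

    printf("2\n");   /* the lone even prime, excluded from the formula */
    for (int i = 0; i < n; i++) {
        if (composite[i]) continue;
        long p = 2L * i + 3;   /* an odd number never crossed out is prime */
        printf("%ld\n", p);
        /* cross out p^2, p^2 + 2p, p^2 + 4p, ...: the odd multiples of p */
        for (long m = p * p; m <= LIMIT; m += 2 * p)
            composite[(m - 3) / 2] = true;
    }
    return 0;
}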

Scatter Matrix Blocks of Different Sizes using MPI

You have to go through at least one extra step in MPI to do this. The problem is that the most general of the gather/scatter routines, MPI_Scatterv and MPI_Gatherv, allow you to pass a “vector” (v) of counts/displacements, rather than just one count for Scatter and Gather, but the types are all assumed to be … Read more
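
The calling pattern that the vector of counts and displacements enables looks roughly like this; a sketch of my own with a deliberately uneven 1-D split and a single shared element type, not the full 2-D block decomposition the answer works toward:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* give rank r exactly (r + 1) doubles: an intentionally uneven split */
    int *counts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    int total = 0;
    for (int r = 0; r < size; r++) {
        counts[r] = r + 1;
        displs[r] = total;
        total += counts[r];
    }

    double *sendbuf = NULL;
    if (rank == 0) {
        sendbuf = malloc(total * sizeof(double));
        for (int i = 0; i < total; i++) sendbuf[i] = i;
    }

    double *recvbuf = malloc(counts[rank] * sizeof(double));
    /* every rank passes the same element type; only counts/displs vary */
    MPI_Scatterv(sendbuf, counts, displs, MPI_DOUBLE,
                 recvbuf, counts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);

    printf("rank %d received %d values starting at %g\n",
           rank, counts[rank], recvbuf[0]);

    free(recvbuf); free(counts); free(displs);
    if (rank == 0) free(sendbuf);
    MPI_Finalize();
    return 0;
}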

Trouble Understanding MPI_Type_create_struct

The purpose of MPI_Type_create_struct() is, as you know, to provide a way to create user-defined MPI_Datatypes that map the user’s structured types. These new types are subsequently usable for MPI communications and other calls just like the default types, allowing you, for example, to transfer arrays of structures the same way you would transfer arrays of ints … Read more
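
A minimal sketch of my own of that idea, assuming a hypothetical particle_t struct; the two-entry arrays describe each member's block length, byte offset, and base type:

#include <mpi.h>
#include <stddef.h>
#include <stdio.h>

/* hypothetical structured type to be transferred */
typedef struct {
    int    id;
    double value[3];
} particle_t;

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* two blocks: one int at offsetof(id), three doubles at offsetof(value) */
    int          blocklens[2] = {1, 3};
    MPI_Aint     displs[2]    = {offsetof(particle_t, id),
                                 offsetof(particle_t, value)};
    MPI_Datatype types[2]     = {MPI_INT, MPI_DOUBLE};

    MPI_Datatype particle_type;
    MPI_Type_create_struct(2, blocklens, displs, types, &particle_type);
    MPI_Type_commit(&particle_type);

    particle_t p = {0};
    if (rank == 0) { p.id = 7; p.value[0] = 1.0; p.value[1] = 2.0; p.value[2] = 3.0; }

    /* the new datatype is used exactly like a built-in one */
    MPI_Bcast(&p, 1, particle_type, 0, MPI_COMM_WORLD);
    printf("rank %d: id=%d value[0]=%g\n", rank, p.id, p.value[0]);

    MPI_Type_free(&particle_type);
    MPI_Finalize();
    return 0;
}

For arrays of such structs, the type is typically also passed through MPI_Type_create_resized with an extent of sizeof(particle_t), so that any compiler padding at the end of the struct is accounted for.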

Using MPI_Bcast for MPI communication

This is a common source of confusion for people new to MPI. You don’t use MPI_Recv() to receive data sent by a broadcast; you use MPI_Bcast(). E.g., what you want is this: #include <mpi.h> #include <stdio.h> int main(int argc, char** argv) { int rank; int buf; const int root=0; MPI_Init(&argc, &argv); MPI_Comm_rank(MPI_COMM_WORLD, &rank); if(rank == … Read more
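
Since the excerpt cuts off mid-listing, here is my own self-contained version of the usual broadcast idiom it is showing (the value 777 is arbitrary); note that every rank, root and non-root alike, calls MPI_Bcast():

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    int buf;
    const int root = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == root)
        buf = 777;   /* only the root fills the buffer before the call */

    /* MPI_Bcast both sends on the root and receives on everyone else */
    MPI_Bcast(&buf, 1, MPI_INT, root, MPI_COMM_WORLD);

    printf("rank %d has buf = %d\n", rank, buf);

    MPI_Finalize();
    return 0;
}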

MPI: blocking vs non-blocking

Blocking communication is done using MPI_Send() and MPI_Recv(). These functions do not return (i.e., they block) until the communication is finished. Simplifying somewhat, this means that the buffer passed to MPI_Send() can be reused, either because MPI saved it somewhere, or because it has been received by the destination. Similarly, MPI_Recv() returns when the receive … Read more
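
For contrast with the blocking calls described above, here is a minimal non-blocking sketch of my own (it assumes exactly two ranks; the tag and the int payload are arbitrary) in which MPI_Isend()/MPI_Irecv() return immediately and the waiting is deferred to MPI_Waitall():

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, other, sendval, recvval;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other   = 1 - rank;   /* assumes exactly two ranks: 0 and 1 */
    sendval = rank;

    /* non-blocking: both calls return right away ... */
    MPI_Isend(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... so other work could go here; wait before touching the buffers */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d received %d\n", rank, recvval);
    MPI_Finalize();
    return 0;
}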

How do I debug an MPI program?

I have found gdb quite useful. I use it as: mpirun -np <NP> xterm -e gdb ./program This then launches xterm windows in which I can do: run <arg1> <arg2> … <argN> which usually works fine. You can also package these commands together using: mpirun -n <NP> xterm -hold -e gdb -ex run --args ./program [arg1] … Read more

MPI_Rank returns the same process number for all processes

Make sure that both mpicc and mpirun come from the same MPI implementation. When mpirun fails to provide the necessary universe information to the launched processes, most commonly because the executable was built against a different MPI implementation (or even a different version of the same implementation), MPI_Init() falls … Read more
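
One way to check which library an executable was actually built against is a tiny diagnostic of my own like the following (MPI_Get_library_version() requires an MPI-3 library); if every process launched by mpirun reports rank 0 of 1, the launcher and the library almost certainly do not match:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* MPI-3: report the library this executable was linked against */
    MPI_Get_library_version(version, &len);

    /* rank 0 of 1 on every process means the ranks ran as independent
       singletons, i.e. mpirun and the MPI library do not match */
    printf("rank %d of %d, library: %.60s\n", rank, size, version);

    MPI_Finalize();
    return 0;
}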