For basic applications, MPI is as easy to use as any other message-passing system. The sample code below contains the complete communications skeleton for a dynamically load balanced master/slave application. Following the code is a description of the few functions necessary to write typical parallel applications.
    #include <mpi.h>

    #define WORKTAG  1
    #define DIETAG   2

    static void master(void);
    static void slave(void);

    int main(int argc, char *argv[])
    {
        int myrank;

        MPI_Init(&argc, &argv);        /* initialize MPI */
        MPI_Comm_rank(MPI_COMM_WORLD,  /* always use this */
                      &myrank);        /* process rank, 0 thru N-1 */
        if (myrank == 0) {
            master();
        } else {
            slave();
        }
        MPI_Finalize();                /* cleanup MPI */
        return 0;
    }

    static void master(void)
    {
        int ntasks, rank, work;
        double result;
        MPI_Status status;

        MPI_Comm_size(MPI_COMM_WORLD,  /* always use this */
                      &ntasks);        /* #processes in application */

        /*
         * Seed the slaves.
         */
        for (rank = 1; rank < ntasks; ++rank) {
            work = /* get_next_work_request */;
            MPI_Send(&work,           /* message buffer */
                     1,               /* one data item */
                     MPI_INT,         /* data item is an integer */
                     rank,            /* destination process rank */
                     WORKTAG,         /* user chosen message tag */
                     MPI_COMM_WORLD); /* always use this */
        }

        /*
         * Receive a result from any slave and dispatch a new work
         * request until work requests have been exhausted.
         */
        work = /* get_next_work_request */;
        while (/* valid new work request */) {
            MPI_Recv(&result,         /* message buffer */
                     1,               /* one data item */
                     MPI_DOUBLE,      /* of type double real */
                     MPI_ANY_SOURCE,  /* receive from any sender */
                     MPI_ANY_TAG,     /* any type of message */
                     MPI_COMM_WORLD,  /* always use this */
                     &status);        /* received message info */
            MPI_Send(&work, 1, MPI_INT, status.MPI_SOURCE,
                     WORKTAG, MPI_COMM_WORLD);
            work = /* get_next_work_request */;
        }

        /*
         * Receive results for outstanding work requests.
         */
        for (rank = 1; rank < ntasks; ++rank) {
            MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
                     MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        }

        /*
         * Tell all the slaves to exit.
         */
        for (rank = 1; rank < ntasks; ++rank) {
            MPI_Send(0, 0, MPI_INT, rank, DIETAG, MPI_COMM_WORLD);
        }
    }

    static void slave(void)
    {
        double result;
        int work;
        MPI_Status status;

        for (;;) {
            MPI_Recv(&work, 1, MPI_INT, 0, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);

            /*
             * Check the tag of the received message.
             */
            if (status.MPI_TAG == DIETAG) {
                return;
            }
            result = /* do the work */;
            MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }
    }
An MPI process is started up with MPI_Init() and shut down with MPI_Finalize(); MPI_Init() must be called before any other MPI function, and MPI_Finalize() after the last one:

    MPI_Init(&argc, &argv);
    MPI_Finalize();
A process finds its own rank with MPI_Comm_rank():

    int myrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

The total number of processes is returned by MPI_Comm_size():
    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
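As a minimal self-contained sketch tying these calls together (the program and its output are illustrative, not part of the sample above), each process reports its rank and the total process count:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int myrank, nprocs;

        MPI_Init(&argc, &argv);                  /* initialize MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);  /* total #processes */
        printf("Hello from process %d of %d\n", myrank, nprocs);
        MPI_Finalize();                          /* cleanup MPI */
        return 0;
    }

Launched with a typical launcher such as mpirun -np 4, each of the four processes prints its own rank.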
A message is sent to a specific process and is marked by a tag (integer value) specified by the user. Tags are used to distinguish between different message types a process might send/receive. In the sample code above, the tag is used to distinguish between work and termination messages.
A message is sent by calling MPI_Send():

    MPI_Send(buffer, count, datatype, destination, tag, MPI_COMM_WORLD);
A message is received by calling MPI_Recv():

    MPI_Recv(buffer, maxcount, datatype, source, tag, MPI_COMM_WORLD, &status);

Information about the received message is returned in the status variable: the received message tag is status.MPI_TAG and the rank of the sending process is status.MPI_SOURCE.
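To make the pairing concrete, here is a hedged sketch (the tag value DATATAG and the payload are invented for illustration): rank 0 sends one integer to rank 1, which receives from any source with any tag and then inspects the status fields.

    #include <stdio.h>
    #include <mpi.h>

    #define DATATAG 99  /* illustrative, user-chosen tag */

    int main(int argc, char *argv[])
    {
        int myrank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        if (myrank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, DATATAG, MPI_COMM_WORLD);
        } else if (myrank == 1) {
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            printf("got %d from rank %d with tag %d\n",
                   value, status.MPI_SOURCE, status.MPI_TAG);
        }
        MPI_Finalize();
        return 0;
    }

This sketch requires at least two processes.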
Another function, not used in the sample code, returns the number of datatype elements received. It is used when the number of elements received might be smaller than `maxcount'.
    MPI_Get_count(&status, datatype, &nelements);

With these few functions, you are ready to program almost any application. There are many other, more exotic functions in MPI, but all can be built upon those presented here.
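For illustration, a hedged sketch of MPI_Get_count() (the buffer size and element count are invented): the receiver posts room for 100 doubles, the sender transmits only 37, and MPI_Get_count() recovers the actual count from the status variable.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int myrank, nelements, i;
        double buffer[100];  /* maxcount = 100 */
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        if (myrank == 0) {
            for (i = 0; i < 37; ++i)
                buffer[i] = (double) i;
            /* send fewer elements than the receiver allows for */
            MPI_Send(buffer, 37, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (myrank == 1) {
            MPI_Recv(buffer, 100, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Get_count(&status, MPI_DOUBLE, &nelements);
            printf("received %d elements\n", nelements);  /* prints 37 */
        }
        MPI_Finalize();
        return 0;
    }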