MPI: It's Easy to Get Started

For basic applications, MPI is as easy to use as any other message-passing system. The sample code below contains the complete communications skeleton for a dynamically load-balanced master/slave application. Following the code is a description of the few functions necessary to write typical parallel applications.

#include <mpi.h>

#define WORKTAG		1
#define DIETAG		2

/*
 * Application-specific routines, supplied by the user.
 * The names used here are placeholders.
 */
int		get_next_work_request(void);
int		work_request_is_valid(int work);
double		do_work(int work);

void		master(void);
void		slave(void);

int
main(int argc, char *argv[])
{
	int		myrank;

	MPI_Init(&argc, &argv);		/* initialize MPI */
	MPI_Comm_rank(MPI_COMM_WORLD,	/* always use this */
			&myrank);	/* process rank, 0 thru N-1 */

	if (myrank == 0) {
		master();
	} else {
		slave();
	}

	MPI_Finalize();			/* cleanup MPI */
	return 0;
}

void
master(void)
{
	int		ntasks, rank, work;
	double		result;
	MPI_Status	status;

	MPI_Comm_size(MPI_COMM_WORLD,	/* always use this */
			&ntasks);	/* #processes in application */
/*
 * Seed the slaves.
 */
	for (rank = 1; rank < ntasks; ++rank) {

		work = get_next_work_request();	/* application-specific */

		MPI_Send(&work,		/* message buffer */
			1,		/* one data item */
			MPI_INT,	/* data item is an integer */
			rank,		/* destination process rank */
			WORKTAG,	/* user chosen message tag */
			MPI_COMM_WORLD);/* always use this */
	}
/*
 * Receive a result from any slave and dispatch a new work request
 * until all work requests have been exhausted.
 */
	work = get_next_work_request();	/* application-specific */

	while (work_request_is_valid(work)) {

		MPI_Recv(&result,	/* message buffer */
			1,		/* one data item */
			MPI_DOUBLE,	/* data item is a double real */
			MPI_ANY_SOURCE,	/* receive from any sender */
			MPI_ANY_TAG,	/* receive any type of message */
			MPI_COMM_WORLD,	/* always use this */
			&status);	/* info about received message */

		MPI_Send(&work, 1, MPI_INT, status.MPI_SOURCE,
				WORKTAG, MPI_COMM_WORLD);

		work = get_next_work_request();
	}
/*
 * Receive results for outstanding work requests.
 */
	for (rank = 1; rank < ntasks; ++rank) {
		MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
				MPI_ANY_TAG, MPI_COMM_WORLD, &status);
	}
/*
 * Tell all the slaves to exit.
 */
	for (rank = 1; rank < ntasks; ++rank) {
		MPI_Send(0, 0, MPI_INT, rank, DIETAG, MPI_COMM_WORLD);
	}
}

void
slave(void)
{
	double		result;
	int		work;
	MPI_Status	status;

	for (;;) {
		MPI_Recv(&work, 1, MPI_INT, 0, MPI_ANY_TAG,
				MPI_COMM_WORLD, &status);
/*
 * Check the tag of the received message.
 */
		if (status.MPI_TAG == DIETAG) {
			return;
		}

		result = do_work(work);	/* application-specific */

		MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
	}
}
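
After the application-specific routines (get_next_work_request(), work_request_is_valid(), and do_work()) are supplied, most MPI implementations compile and launch such a program with wrapper commands along these lines (the file name and process count here are arbitrary):

	mpicc skeleton.c -o skeleton
	mpirun -np 4 skeleton

This starts four processes; rank 0 runs master() and the other three run slave().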

The World of MPI

Processes are represented by a unique "rank" (integer), and ranks are numbered 0, 1, 2, ..., N-1. MPI_COMM_WORLD means "all the processes in the MPI application." It is called a communicator, and it provides all the information necessary to do message passing. A communicator also carries a private communication context, which is how portable libraries keep their internal messages separate from the application's, a form of synchronization protection that most other message-passing systems cannot provide.
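
As an illustration of that protection (a minimal sketch, not part of the skeleton above), a library can duplicate MPI_COMM_WORLD to obtain its own communicator:

	MPI_Comm	libcomm;

	/* libcomm reaches the same processes as MPI_COMM_WORLD but
	 * carries a separate communication context, so messages sent
	 * on one can never be received on the other. */
	MPI_Comm_dup(MPI_COMM_WORLD, &libcomm);

	/* ... the library does all its message passing on libcomm ... */

	MPI_Comm_free(&libcomm);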

Enter and Exit MPI

As with other systems, two functions are provided to initialize and clean up an MPI process:
	MPI_Init(&argc, &argv);
	MPI_Finalize();

Who Am I? Who Are They?

Typically, a process in a parallel application needs to know who it is (its rank) and how many other processes exist. A process finds out its own rank by calling MPI_Comm_rank():
	int		myrank;

	MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
The total number of processes is returned by MPI_Comm_size():
	int		nprocs;

	MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
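
Put together with the initialization calls, these functions yield the classic first MPI program, complete and compilable:

	#include <stdio.h>
	#include <mpi.h>

	int
	main(int argc, char *argv[])
	{
		int	myrank, nprocs;

		MPI_Init(&argc, &argv);
		MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
		MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

		printf("Hello from process %d of %d\n", myrank, nprocs);

		MPI_Finalize();
		return 0;
	}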

Sending Messages

A message is an array of elements of a given datatype. MPI supports all the basic datatypes and allows a more elaborate application to construct new datatypes at runtime.

A message is sent to a specific process and is marked by a tag (integer value) specified by the user. Tags are used to distinguish between different message types a process might send/receive. In the sample code above, the tag is used to distinguish between work and termination messages.

	MPI_Send(buffer, count, datatype, destination, tag, MPI_COMM_WORLD);
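
For example, a new datatype describing a contiguous block of elements can be built at runtime with MPI_Type_contiguous() (a minimal sketch; the destination rank and tag are made up for illustration):

	double		row[10];
	MPI_Datatype	rowtype;

	/* Describe ten contiguous doubles as a single new datatype. */
	MPI_Type_contiguous(10, MPI_DOUBLE, &rowtype);
	MPI_Type_commit(&rowtype);

	/* Send one element of the new type, i.e., the whole row. */
	MPI_Send(row, 1, rowtype,
			1,		/* destination rank (example) */
			0,		/* message tag (example) */
			MPI_COMM_WORLD);

	MPI_Type_free(&rowtype);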

Receiving Messages

A receiving process specifies the tag and the rank of the sending process. The wildcards MPI_ANY_TAG and MPI_ANY_SOURCE may be used instead to receive a message bearing any tag or coming from any sending process.

	MPI_Recv(buffer, maxcount, datatype,
	         source, tag, MPI_COMM_WORLD, &status);
Information about the received message is returned in a status variable. The received message tag is status.MPI_TAG and the rank of the sending process is status.MPI_SOURCE.

Another function, not used in the sample code, returns the number of datatype elements received. It is used when the number of elements received might be smaller than `maxcount'.

	MPI_Get_count(&status, datatype, &nelements);
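A short sketch of a receive whose length is not known in advance (the buffer size of 100 is arbitrary):

	double		buffer[100];
	int		nelements;
	MPI_Status	status;

	/* Receive up to 100 doubles from any sender, with any tag. */
	MPI_Recv(buffer, 100, MPI_DOUBLE, MPI_ANY_SOURCE,
			MPI_ANY_TAG, MPI_COMM_WORLD, &status);

	/* How many doubles actually arrived? */
	MPI_Get_count(&status, MPI_DOUBLE, &nelements);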
With these few functions, you are ready to program almost any application. MPI has many other, more exotic functions, but all of them can be built from the ones presented here.

LAM / MPI Parallel Computing / Ohio Supercomputer Center / lam@tbag.osc.edu