MPI

Introduction

Message Passing Interface (MPI) is a portable library of subprograms that facilitates parallel computing.

The MPI subprograms can be called from C and Fortran programs.

Parallel Computing

Parallel computing enables large-scale numerical problems to be solved in a timely manner.  It can be performed on a multi-core PC, or on several networked PCs in a cluster or grid.

The key is to divide a large problem into smaller ones whose calculations can then be carried out simultaneously.

The MPI subprograms manage the communication and synchronization among the processes running on the various CPUs, each of which works with its own memory space.

Installation

MPI can be downloaded from:
http://www.mcs.anl.gov/research/projects/mpich2/

The best installation method is to build the source code using the directions in:
http://www.mcs.anl.gov/research/projects/mpich2/documentation/files/mpich2-1.4.1-installguide.pdf

This can be done under Cygwin or Linux.
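The install guide walks through a conventional configure/make build.  A condensed sketch of those steps follows; the install prefix here is an arbitrary illustrative choice, and the tarball name should match the version actually downloaded.

```shell
# Typical MPICH2 build from source (paths and version are illustrative).
tar xzf mpich2-1.4.1.tar.gz
cd mpich2-1.4.1
./configure --prefix=$HOME/mpich2-install   # choose any install location
make
make install
# Add the install bin directory to PATH so mpicc and mpirun are found.
export PATH=$HOME/mpich2-install/bin:$PATH
```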

Sample Program

Then go to:  http://www.cs.usfca.edu/~peter/ppmpi/

Download the greetings.c program to the same folder that contains mpicc.

Also find and copy libmpich.a into this same folder.

Compile the program via:

mpicc -o greetings greetings.c libmpich.a

Then run the program via:

./mpirun -n 4 ./greetings

Grant permission for the program to communicate through the firewall if so prompted by pop-up windows.

The program can also be run as:

./mpiexec -n 4 ./greetings

In the above examples, the -n 4 argument launches four processes, which MPI maps onto the available processor cores.

More later . . .

* * *

Another source…     Argonne National Laboratory MPICH2

Tom Irvine