MPI Poll '95 Results

A poll was conducted to better understand how programmers are using the rich MPI functionality, what extensions are needed, and how to prioritize our work to better serve MPI users and the HPC community at large.

These are the verbatim responses to the request for general commentary on MPI.


Comments:


>  I use MPI for systems programming. I am building a parallel language
>  run-time system on top of it. I do not use it for application programming.


Comments:


  !Active messages are highly desirable!
  although not part of the current standard...


Comments:


I'm not very familiar with MPI. Mostly I have used Express and PVM for
parallelization. But I'm interested in using MPI for more portable
programming.


Comments:


 > MPI is very useful. Nevertheless, it is very hard to convince
 > people to implement natively in MPI. Usually, MPI is one of
 > several transport layers (alongside PVM & Parmacs), so only basic
 > functionalities are used. Probably, the lack of native versions
 > (supported by hardware providers) and associated tools is the main
 > reason for that.
 >
 > Some points are not very clear in the standard, such as the
 > behaviour of some functions (e.g. ready mode, ...)


Comments:


An excellent product, not to say almost perfect.


Comments:


* Currently use only the predefined communicator MPI_COMM_WORLD in
  all of my applications.

* The MPI timing routines are very useful for having *portable*
  timing; however, it is not clear (to me) how comparable these are
  from one system to another.

* The various MPI_allxxx, MPI_allxxxv routines are all 
  highly useful; however, it is vital that implementations of these 
  are heavily optimised. 
  -> For some application codes I have set up a compile time choice 
  between MPI collective routines OR hand written (reasonably 
  efficient) collective routines using only MPI sends and recvs; this 
  is necessary to avoid a performance hit on machines which have very 
  naive implementations of collective routines.

* There is a general drift here towards Fortran 90. Specific
  bindings for F90 will therefore be needed; there do not seem to
  have been any efforts to address this.

* In general, I find MPI far better to use than all other message 
  passing setups with which I am familiar (PVM,Intel NX,etc.). I
  am now using it in *all* the application programs that I am working 
  on to ensure portability.


Comments:


Derived data types are an absolute pain to construct, and are even
more difficult to debug.  This is particularly true when creating
types with non-standard offsets.  Some means of 'viewing' the actual
message content (i.e. the underlying basic values) would be
invaluable.


Comments:


I am encouraging all our customers/collaborators to use MPI as the standard.
We are responsible for supporting some 50% of the UK Grand Challenge applications
on a 320-processor Cray T3D, a 16-processor IBM SP2, and a 64-processor Intel
iPSC/860. The applications span a wide area of science and engineering.
I am using MPI with Fortran 90.


Comments:

 
Contains almost everything one can ask for...
 

Comments:


It would be nice to have a universal way to run MPI.  I realize that
some implementations are trying to use "mpirun" as a general environment,
which is nice for workstations; but what about the big MPPs, like the
SP2 and such?  If the vendors could all get together on this too, that
would be great!  Then we poor programmers wouldn't have to worry about
the easily-confused users getting confused!  :)


Comments:


We use MPI (among other things) to provide communication for HPF 
programs.

My greatest concern is better performance, not more functionality.


Comments:


The MPI_UB and MPI_LB stuff is really awkward. There should be a better
solution to this. I would suggest alternate versions of the type
creation calls which allow the extent of the datatype to be passed as
an argument.


Comments:


THERE COULD HAVE BEEN NOTHING BETTER THAN A STANDARD INTERFACE LIKE MPI.
CREDITS TO ALL THOSE INVOLVED IN DEVELOPING THE STANDARD AND ALSO THOSE
WHO QUICKLY MADE THE IMPLEMENTATIONS AVAILABLE.


Comments:

 
I look forward to reading the survey results and following future
developments.
 

Comments:


>  - the MPI standard has been an immense success in reducing the
>    need for online debugging on MPP systems; most development can
>    now proceed on a workstation (or a workstation network), and
>    most problems can be debugged before testing on the much less
>    accessible MPP system.
 

Comments:


> I'm not tremendously familiar with all the MPI concepts, so I'm not
> absolutely sure whether or not I need/will use them. Thus my
> non-answers to some of the questions.
 

Comments:


  * On the whole, aside from the above "deficiencies", I feel most of our
    users are quite happy with MPI, and largish scientific programs
    written in MPI have been ported to the AP1000 quite successfully.

  * It would be nice if it were more rigorously defined how threads
    and signals can be intermixed with MPI calls.  As far as I can
    tell, this hasn't really been addressed by the standard, so at this
    stage using threads and signals will not make your application
    "MPI compliant".


Comments:


> I use the ANU implementation of MPI for the AP 1000.  The
> feature that I would most like to see is the ability to run
> multiple processes on a single processor (as can be done with
> the native message passing software).  I'm not sure if
> any other implementors have done this, but it would be very
> useful.


Comments:


add IO :-)


Comments:


Some comments can be found in my paper for MPI Developer Conference '95
http://www.cse.nd.edu/mpidc95/proceedings/papers/postscript/fang.ps


Comments:


|> I am working on a code for quantum chemical calculations: a direct
|> self-consistent field program including electron correlation.
|> I am only working on the whole processor array (MPI_COMM_WORLD)
|> and I am only using 10 subroutines from MPI:
   MPI_INIT, 
   MPI_COMM_RANK, 
   MPI_COMM_SIZE, 
   MPI_BCAST, 
   MPI_SEND, 
   MPI_RECV,
   MPI_ALLREDUCE( sum, max and min ), 
   MPI_REDUCE( sum, max, min ), 
   MPI_BARRIER, 
   MPI_FINALIZE


Comments:


I have had good experiences with MPI (MPICH implementation on our SP2
and RS6000 network).  This is the first message passing system I've
used, and I have found it to work well in parallelizing my applications.


Comments:


   MPI is great. It is nice (and very rare) 
 to see a committee create something that 
 is MUCH better than any of the previous packages.


Comments:


I may have already replied (can't remember). This project is a
finite element code for wave propagation. One of the aims was to
evaluate how some intermediate/advanced features of MPI could help
express parallelism. 
They can! 


Comments:


I'm submitting this as a developer of our intercommunicator
extensions to MPI so I may not be quite the kind of user
you're wanting responses from -- Nathan


Comments:


Good work..


Comments:


Keep up the great work!


Comments:


MPI is great - no doubt about it.  I personally think
that message passing is bound to go the way of
assembly language programming sometime in the future,
but in the meantime - I think the forum did an
excellent job, and has continued to do so.


Comments:


Today I use PVM


Comments:


The debugging tools of LAM are great!  Thanks!


Comments:


The datatype constructor seems to be a bit tedious and 
complicated.


Comments:


MPI is a wonderful example of the benefits that can be
derived from a group of people working together to
generate a truly useful tool. 


Comments:


What I really miss from several of the MPI implementations
is the ability to start up each process inside a debugger
such as xdbx. Debugging numerical code with big arrays,
where something is walking over something else, is made
much easier if this functionality is available. So far
I've only found that CHIMP-MPI does this easily, although
I can fool LAM into doing it if I don't run my code using
MPIRUN.


Comments:


This answer is for a program that is currently being written.
We are using higher-order finite elements to solve the
wave equation by domain decomposition.

The features I've checked are those we *think* we'll be using,
but we do not yet have any actual experience, or feedback, as
to what else we'll need.

It's possible we'll need some simple form of global communication
(the algorithm is explicit, so there is no global residual, but maybe
a global energy). Also, it is not clear how much is to be gained
by using graph topologies (in terms of expressiveness, not
performance).


Comments:


I just bumped into LAM and MPI...looks good.  Would consider
how to use it for both performance and fault tolerance in 
computers on aircraft (radar processing, sensor image 
processing, etc.).


Comments:


I'm just getting started in a new project.  We are planning
to use MPI, but how so, we haven't decided.


Comments:


I find MPI very exciting to use.  It has many features not
present in other message-passing systems (e.g. the BLACS).
For example, MPI_Bcast is much easier than if's with
MPI_Send's and MPI_Recv's.


Comments:


There should be a stripped version of MPI for single 
group code, which uses only the six basic MPI calls
for message passing.  A majority of user codes in MPI
are this type, and the less overhead in the language, the
better.


Comments:


An advanced environment for program development,
debugging, and profiling is required.


Comments:


I hope I did not fill this out already.

We are using MPI to parallelize a four-dimensional
data assimilation system (e.g. optimal interpolation).
The project is classified as one of the grand challenge
problems.


Comments:


I am not using MPI yet. I am starting to get it, and then
I will install it on my university network. For now I am 
checking ftp and www sites. 


Comments:


All my MPI codes so far are straightforward translations
of PVM code to MPI, so I haven't made use of features like
derived datatypes yet, though they look useful.


Comments:


MPI is great; it does most of the important things to
make life bearable. Having said that...

I tried to use a derived data type once, but it didn't 
work quite the way I needed. I had a 3-dimensional array
in Fortran, and wanted to send a collection of 'strips' 
of it from a number of slave processes so that they could 
be gathered into the same sort of structure in the master
process. However, when I tried to define a single type
to represent the set of strips (using mpi_type_indexed),
I found that, because of the way extent works, information
from consecutive slaves was gathered contiguously, which
is not what I wanted. To illustrate, here is a schematic
representation of the final array at the root, using
numbers to indicate the process each strip came from:

wanted:
1..1..1..2..2..2..3..3..3..
got:
1..1..12..2..23..3..3

I could no
LAM / MPI Parallel Computing / Ohio Supercomputer Center / lam@tbag.osc.edu