/fs/meiko-user/shared/mpi contains more MPI information and examples.
All sample programs from the textbook ``Parallel Programming with MPI''
are in directory tyang/cs110b97/mpi/ppmpi_c.
The following is a partial list of the MPI programs from this book;
``Page'' refers to the first page in the textbook that discusses
the corresponding code.
Page  File(s) and Description
----  -----------------------
  41  chap3/greetings.c -- greetings program
  55  chap4/serial.c -- serial trapezoidal rule
  57  chap4/trap.c -- parallel trapezoidal rule, first version
  61  chap4/get_data.c -- parallel trap. rule, reads and distributes input using linear for loops.
  67  chap5/get_data1.c -- parallel trap. rule, uses hand-coded, tree-structured broadcast to distribute input.
  70  chap5/get_data2.c -- parallel trap. rule, uses 3 calls to MPI_Bcast to distribute input.
  74  chap5/reduce.c -- parallel trap. rule, uses 3 calls to MPI_Bcast to distribute input and MPI_Reduce to compute final sum.
  75  chap5/serial_dot.c -- serial dot product
  76  chap5/parallel_dot.c -- parallel dot product
  78  chap5/parallel_dot1.c -- parallel dot product using MPI_Allreduce
  78  chap5/serial_mat_vect.c -- serial matrix-vector product
  83  chap5/parallel_mat_vect.c -- parallel matrix-vector product
  90  chap6/count.c -- send a subarray using count parameter
  93  chap6/get_data3.c -- parallel trap. rule, builds derived datatype for use with distribution of input
  96  chap6/send_row.c -- send row of a matrix
  97  chap6/send_col.c -- use derived datatype to send a column of a matrix
  98  chap6/send_triangle.c -- use derived datatype to send upper triangle of a matrix
 100  chap6/send_col_to_row.c -- send a row of a matrix on one process to a column on another
 100  chap6/get_data4.c -- parallel trap. rule, use MPI_Pack/Unpack in distribution of input
 104  chap6/sparse_row.c -- use MPI_Pack/Unpack to send a row of sparse matrix
 113  chap7/serial_mat_mult.c -- serial matrix multiplication of two square matrices
 118  chap7/comm_create.c -- build a communicator using MPI_Comm_create
 118  chap7/comm_test.c -- tests communicator built using MPI_Comm_create
 120  chap7/comm_split.c -- builds a collection of communicators using MPI_Comm_split
 121  chap7/top_fcns.c -- builds and tests basic Cartesian topology functions
 125  chap7/fox.c -- uses Fox's algorithm to multiply two square matrices
 140  chap8/cache_test.c -- cache and retrieve a process rank attribute
 143  chap8/cio.c, cio.h, cio_test.c, vsscanf.c, vsscanf.h, Makefile.cio -- functions for basic collective I/O
 154  chap8/stdin_test.c -- test whether an MPI implementation allows input from stdin
 154  chap8/arg_test.c -- test whether an MPI implementation allows processes access to command line arguments
 157  chap8/cfopen.c -- open a file
 157  chap8/multi_files.c -- each process opens and writes to a different file
 165  chap8/ub.c -- build a derived type that uses MPI_UB
 166  chap8/cyclic_io.c, cyclic_io.h, sum.c, Makefile.sum -- functions for array I/O using cyclic distribution
 180  chap9/bug.c -- buggy serial insertion sort
 188  chap9/mat_mult.c -- nondeterministic matrix multiplication
 192  chap9/comm_time_0.c -- initial ring pass program
 200  chap9/comm_time_1.c -- added first debugging output
 202  chap9/comm_time_2.c -- ring pass started by process 0
 204  chap9/comm_time_2a.c -- printf added to Print_results
 205  chap9/comm_time_3.c -- fixed incorrect calculation of *order_ptr
 206  chap9/comm_time_3a.c -- two different message sizes
 206  chap9/comm_time_4.c -- removed printf from Print_results
 208  chap9/comm_time_5.c -- checking behavior of message-passing functions by adding Cprintf's
 209  chap9/comm_time_6.c -- relocate pesky Cprintf's
 210  chap9/comm_time_7.c -- remove debug output, change number of tests
 211  chap9/err_handler.c -- test changing default error handler in MPI to MPI_ERRORS_RETURN
 218  chap10/serial_jacobi.c -- serial version of Jacobi's method
 223  chap10/parallel_jacobi.c -- parallel version of Jacobi's method
 226  chap10/sort_1.c, sort_1.h -- level 1 version of sort program
 231  chap10/sort_2.c, sort_2.h -- add Get_list_size, Allocate_list, and Get_local_keys
 234  chap10/sort_3.c, sort_3.h -- add Redistribute_keys, finish Allocate_list, add Insert, Local_sort, and Print_list
 237  chap10/sort_4.c, sort_4.h -- add Find_alltoall_send_params, Find_cutoff, and Find_recv_displacements; allow user input list size
 255  chap11/parallel_trap.c -- parallel trapezoidal rule with code for taking timings
 267  chap12/ping_pong.c -- two-process ping-pong
 268  chap12/send.c, bcast.c -- simple example showing MPI's profiling interface
 283  chap13/ag_ring_blk.c -- ring allgather using blocking send/recv
 292  chap13/ag_cube_blk.c -- hypercube allgather using blocking send/recv
 298  chap13/ag_ring_nblk.c -- ring allgather using nonblocking communications
 299  chap13/ag_cube_nblk.c -- hypercube allgather using nonblocking communications
 301  chap13/ag_ring_pers.c -- ring allgather using persistent communication requests
 305  chap13/ag_ring_syn.c -- ring allgather using synchronous sends
 307  chap13/ag_ring_rdy.c -- ring allgather using ready mode sends
 309  chap13/ag_ring_buf.c -- ring allgather using buffered mode sends
(The equations in this passage were lost in extraction; it described a submatrix block that, at each step, is held at one processor and needs to be broadcast to the other processors in the same row of the process grid.)
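The row-broadcast step described above is the communication pattern of Fox's algorithm (chap7/fox.c): processes are arranged in a q x q grid, and at each step one process's block of A is broadcast along its grid row. A minimal sketch of that step, assuming a perfect-square number of processes; BLOCK_SZ and the dummy block contents are illustrative, and the multiplication itself is elided:

```c
#include <mpi.h>

#define BLOCK_SZ 4   /* illustrative local block dimension */

int main(int argc, char *argv[]) {
    int p, my_rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    int q = 0;
    while (q * q < p) q++;             /* assume p is a perfect square */
    int my_row = my_rank / q;
    int my_col = my_rank % q;

    /* One communicator per grid row: all processes with the same
       my_row end up in the same row_comm, ranked by column. */
    MPI_Comm row_comm;
    MPI_Comm_split(MPI_COMM_WORLD, my_row, my_col, &row_comm);

    double A_block[BLOCK_SZ * BLOCK_SZ], temp[BLOCK_SZ * BLOCK_SZ];
    for (int i = 0; i < BLOCK_SZ * BLOCK_SZ; i++)
        A_block[i] = (double) my_rank;  /* dummy block contents */

    for (int step = 0; step < q; step++) {
        /* At this step, the process in column (my_row + step) mod q
           owns the block of A to be broadcast along the row. */
        int root = (my_row + step) % q;
        if (my_col == root)
            MPI_Bcast(A_block, BLOCK_SZ * BLOCK_SZ, MPI_DOUBLE,
                      root, row_comm);
        else
            MPI_Bcast(temp, BLOCK_SZ * BLOCK_SZ, MPI_DOUBLE,
                      root, row_comm);
        /* ... multiply the received block into the local result ... */
    }

    MPI_Comm_free(&row_comm);
    MPI_Finalize();
    return 0;
}
```

Run under an MPI implementation, e.g. with mpirun and a perfect-square process count; see chap7/fox.c for the complete algorithm.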