This document provides an overview of parallel programming concepts and MPI (the Message Passing Interface). It covers MPI's core communication mechanisms: blocking point-to-point communication, collective communication routines, and derived datatypes. It then shows how to parallelize programs by distributing work across processes, for example through block decomposition of arrays and parallelization of I/O and loops. Advanced topics include parallel finite difference methods, LU factorization, and molecular dynamics simulations. The document is intended to help programmers write practical MPI programs for parallel systems such as the IBM RS/6000 SP.