This project implements a high-performance parallel approximation of π using the midpoint rule and MPI. It was benchmarked on the Bridges2 supercomputer using SLURM with 1, 12, 24, and 48 cores.
- ✅ Four implementations: `pi1.cpp` through `pi4.cpp`
- ✅ Uses `MPI_Reduce` and `MPI_Bcast`
- ✅ Includes SLURM scripts to run on HPC clusters
- ✅ Full report analyzing performance, speedup, and scaling behavior
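The sources themselves are in `src/`; a minimal sketch of the technique the README describes (midpoint rule over the standard integrand 4/(1+x²) on [0,1], with `MPI_Bcast` sharing the interval count and `MPI_Reduce` combining partial sums) might look like this. The interval count and cyclic work distribution are illustrative assumptions, not the exact `pi*.cpp` code:

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long n = 100000000;                              // illustrative interval count
    MPI_Bcast(&n, 1, MPI_LONG, 0, MPI_COMM_WORLD);   // rank 0 shares n with all ranks

    double h = 1.0 / n, local = 0.0;
    for (long i = rank; i < n; i += size) {          // cyclic distribution of intervals
        double x = (i + 0.5) * h;                    // midpoint of interval i
        local += 4.0 / (1.0 + x * x);
    }

    double pi = 0.0;                                 // rank 0 receives the global sum
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("pi ~= %.12f\n", pi * h);

    MPI_Finalize();
    return 0;
}
```

Each rank does O(n/p) work independently; the only communication is one broadcast and one reduction, which is why near-linear speedup is plausible at moderate core counts.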
- C++
- OpenMPI
- SLURM
- HPC (Bridges2 @ Pittsburgh Supercomputing Center)
| Cores | Time (s) | Speedup | Efficiency (%) |
|---|---|---|---|
| 1 | 3.50 | 1.00 | 100.0 |
| 12 | 0.345 | 10.14 | 84.5 |
| 24 | 0.346 | 10.12 | 42.1 |
| 48 | 3.50 | 1.00 | 2.0 |
- `pi1.cpp` to `pi4.cpp`: four stages of parallel implementation
- SLURM job scripts: `job-pi2-01.slurm`, etc.
- `report.txt`: final project report with results and analysis
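The job scripts themselves live in `jobs/`; a plausible shape for a 24-core run is sketched below. The partition name, module line, and time limit are assumptions, not copied from the actual scripts:

```shell
#!/bin/bash
#SBATCH -J pi2-24              # job name
#SBATCH -p RM                  # assumed partition (Bridges2 regular-memory nodes)
#SBATCH -N 1                   # one node
#SBATCH --ntasks-per-node=24   # 24 MPI ranks
#SBATCH -t 00:05:00            # assumed time limit

module load openmpi            # assumed module name on the cluster
mpirun -np 24 ./pi2
```

Submission is then `sbatch job-pi2-24.slurm`, with per-core-count variants differing only in `--ntasks-per-node` and `-np`.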
## Folder Structure
```
parallel-pi-mpi-hpc/
├── src/
│   ├── pi1.cpp
│   ├── pi2.cpp
│   ├── pi3.cpp
│   └── pi4.cpp
├── jobs/
│   ├── job-pi2-01.slurm
│   ├── job-pi2-12.slurm
│   ├── job-pi2-24.slurm
│   └── job-pi2-48.slurm
├── results/
│   └── timing-analysis.csv ← (optional)
├── report.txt
├── README.md
└── LICENSE
```
- Writing MPI programs in C++
- Measuring parallel performance and speedup
- SLURM job submission and HPC benchmarking
- Communication overhead and scalability limits