Fast Conjugate Gradients with Multiple GPUs
2009
The limiting factor for the efficiency of sparse linear solvers is memory bandwidth. In this work, we describe a fast Conjugate Gradient solver for unstructured problems that runs on multiple GPUs installed on a single mainboard. The solver achieves double-precision accuracy with single-precision GPUs by using a mixed-precision iterative refinement algorithm. To achieve high computation speed, we propose a fast sparse matrix-vector multiplication algorithm, the core operation of iterative solvers. The proposed multiplication algorithm efficiently utilizes GPU resources through caching, coalesced memory accesses, and load balancing between running threads. Experiments on a wide range of matrices show that our matrix-vector multiplication algorithm achieves up to 11.6 Gflops on a single GeForce 8800 GTS card and that our CG implementation achieves up to 24.6 Gflops with four GPUs.
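To illustrate the two ideas the abstract combines, here is a minimal CPU-side sketch in NumPy: a CSR sparse matrix-vector product (the core kernel), a plain CG solver run entirely in float32 (standing in for the single-precision GPU inner solver), and an outer iterative refinement loop that computes residuals and accumulates corrections in float64. All function names (`csr_spmv`, `cg32`, `refined_cg`) and tolerances are illustrative assumptions, not the paper's actual implementation, which runs the inner solve on the GPU with caching and coalesced accesses.

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """Sparse matrix-vector product y = A @ x, with A stored in CSR format."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows, dtype=data.dtype)
    for i in range(n_rows):
        acc = data.dtype.type(0.0)
        for k in range(indptr[i], indptr[i + 1]):
            acc += data[k] * x[indices[k]]
        y[i] = acc
    return y

def cg32(indptr, indices, data32, b32, tol=1e-5, maxiter=200):
    """Plain Conjugate Gradient entirely in float32 (stand-in for the GPU solver)."""
    x = np.zeros_like(b32)
    r = b32 - csr_spmv(indptr, indices, data32, x)
    p = r.copy()
    rs = float(r @ r)
    bnorm = float(np.linalg.norm(b32)) or 1.0
    for _ in range(maxiter):
        Ap = csr_spmv(indptr, indices, data32, p)
        alpha = np.float32(rs / float(p @ Ap))
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = float(r @ r)
        if np.sqrt(rs_new) < tol * bnorm:
            break
        p = r + np.float32(rs_new / rs) * p
        rs = rs_new
    return x

def refined_cg(indptr, indices, data64, b64, tol=1e-12, max_refine=30):
    """Mixed-precision iterative refinement:
    residuals and the solution live in float64, corrections are solved in float32."""
    data32 = data64.astype(np.float32)
    x = np.zeros_like(b64)
    bnorm = np.linalg.norm(b64)
    for _ in range(max_refine):
        r = b64 - csr_spmv(indptr, indices, data64, x)  # residual in double
        if np.linalg.norm(r) < tol * bnorm:
            break
        d = cg32(indptr, indices, data32, r.astype(np.float32))
        x = x + d.astype(np.float64)                    # accumulate correction in double
    return x
```

Each outer pass shrinks the double-precision residual by roughly the inner solver's tolerance, so a handful of cheap single-precision solves recovers double-precision accuracy, which is exactly why the scheme pays off on hardware whose single-precision throughput far exceeds its double-precision one.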
Keywords:
- Mathematical optimization
- Parallel computing
- Limiting factor
- Sparse matrix
- Matrix multiplication
- Iterative refinement
- Memory bandwidth
- Conjugate gradient method
- Solver
- Multiplication algorithm
- Computer science
- Theoretical computer science
- Single-precision floating-point format
- Double-precision floating-point format
- CUDA