In the ongoing efforts to vectorize linear algebra primitives, sparse matrix-matrix multiplication (SpGEMM) has received considerably less attention than sparse matrix-vector multiplication (SpMV). While both are equally important, this disparity can be attributed mainly to the additional formidable challenges posed by SpGEMM.
In this paper, we present a dynamic approach for addressing SpGEMM on the GPU. Our approach works directly on the standard compressed sparse row (CSR) data format. In comparison to previous SpGEMM implementations, our approach guarantees a homogeneous, load-balanced access pattern to the first input matrix and improves memory access to the second input matrix. It adaptively repurposes GPU threads during execution and maximizes the time for which the efficient on-chip scratchpad memory can be used. Following a completely deterministic scheduling pattern, it guarantees bit-stable results across repeated executions, a property missing from other approaches. Evaluation on an extensive sparse-matrix benchmark suggests that our approach is the fastest SpGEMM implementation for highly sparse matrices, and the fastest across the entire test set when bit-stable results are required.
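To make the setting concrete, the sketch below shows the classic sequential row-by-row (Gustavson-style) SpGEMM over CSR arrays. This is an illustrative baseline only, not the paper's GPU algorithm; the function name and the plain-list CSR representation are assumptions for the example. Iterating the accumulated columns in sorted order is one simple way to keep the output deterministic, echoing the bit-stability property the abstract highlights.

```python
def spgemm_csr(a_ptr, a_idx, a_val, b_ptr, b_idx, b_val):
    """Compute C = A * B for two matrices in CSR form.

    Each matrix is given as (row-pointer, column-index, value) arrays.
    Sequential Gustavson-style sketch for illustration; GPU SpGEMM
    implementations parallelize and load-balance this per-row work.
    """
    c_ptr, c_idx, c_val = [0], [], []
    for row in range(len(a_ptr) - 1):
        acc = {}  # sparse accumulator for one output row of C
        for k in range(a_ptr[row], a_ptr[row + 1]):
            col_a, v_a = a_idx[k], a_val[k]
            # scale row col_a of B by A[row, col_a] and accumulate
            for j in range(b_ptr[col_a], b_ptr[col_a + 1]):
                acc[b_idx[j]] = acc.get(b_idx[j], 0.0) + v_a * b_val[j]
        for col in sorted(acc):  # sorted order keeps output deterministic
            c_idx.append(col)
            c_val.append(acc[col])
        c_ptr.append(len(c_idx))
    return c_ptr, c_idx, c_val

# Example: A = [[1, 0], [2, 3]], B = [[4, 0], [0, 5]]
c_ptr, c_idx, c_val = spgemm_csr([0, 1, 3], [0, 0, 1], [1, 2, 3],
                                 [0, 1, 2], [0, 1], [4, 5])
# C = [[4, 0], [8, 15]] in CSR form
```

The dictionary accumulator is the sequential analogue of the hash-based or scratchpad-based row accumulators used in GPU SpGEMM; the per-row work varies wildly with the sparsity pattern, which is exactly the load-balancing challenge the paper addresses.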
Mon 18 Feb, 10:55 - 12:35 (Guadalajara, Mexico City, Monterrey time)
- Throughput-Oriented GPU Memory Allocation
- SEP-Graph: Finding Shortest Execution Paths for Graph Processing under a Hybrid Framework on GPU
  Hao Wang (The Ohio State University, USA), Liang Geng (The Ohio State University, USA), Rubao Lee (United Parallel Computing Corporation, USA), Kaixi Hou (Virginia Tech, USA), Yanfeng Zhang, Xiaodong Zhang (The Ohio State University, USA)
- Incremental Flattening for Nested Data Parallelism
  Troels Henriksen (University of Copenhagen, Denmark), Frederik Thorøe (DIKU, University of Copenhagen), Martin Elsman (University of Copenhagen, Denmark), Cosmin Oancea (University of Copenhagen, Denmark)
- Adaptive Sparse Matrix-Matrix Multiplication on the GPU
  Martin Winter (Graz University of Technology, Austria), Daniel Mlakar (Graz University of Technology, Austria), Rhaleb Zayer (Max Planck Institute for Informatics), Hans-Peter Seidel (Max Planck Institute for Informatics), Markus Steinberger (Graz University of Technology, Austria)