PPoPP 2019
Sat 16 - Wed 20 February 2019 Washington, DC, United States
Tue 19 Feb 2019 12:10 - 12:35 at Salon 12/13 - Session 6, Best Paper Candidates Chair(s): Rudolf Eigenmann

General matrix multiplication (GEMM) plays a paramount role in a broad range of domains such as deep learning, scientific computing, and image processing. The primary optimization method is to partition the matrix into many tiles and exploit the parallelism within and between tiles. The tiling hierarchy closely mirrors the thread hierarchy on GPUs. In practice, a GPU can fully unleash its computing power only when the matrix size is large and there are a sufficient number of tiles with enough workload per tile. However, in many real-world applications, especially in the deep learning domain, the matrix sizes are small. To this end, prior work proposes batched-GEMM, which processes a group of small independent GEMMs together in a single CUDA kernel.
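
To make the tiling scheme concrete, below is a minimal sketch of a conventional shared-memory tiled GEMM kernel in CUDA, in which each thread block computes one tile of C. The tile size, the row-major layout, and all identifiers are illustrative assumptions, not the paper's implementation.

#define TILE 16

__global__ void tiledGemm(const float *A, const float *B, float *C,
                          int M, int N, int K) {
    // Each thread block computes one TILE x TILE tile of C; each thread
    // owns a single element of that tile.
    __shared__ float As[TILE][TILE];   // tile of A staged in shared memory
    __shared__ float Bs[TILE][TILE];   // tile of B staged in shared memory

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    // Sweep across the K dimension one tile at a time; staging tiles in
    // shared memory provides the data reuse described above.
    for (int t = 0; t < (K + TILE - 1) / TILE; ++t) {
        As[threadIdx.y][threadIdx.x] =
            (row < M && t * TILE + threadIdx.x < K)
                ? A[row * K + t * TILE + threadIdx.x] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] =
            (t * TILE + threadIdx.y < K && col < N)
                ? B[(t * TILE + threadIdx.y) * N + col] : 0.0f;
        __syncthreads();
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }

    if (row < M && col < N)
        C[row * N + col] = acc;
}

// Host-side launch: one block per tile of C.
// dim3 grid((N + TILE - 1) / TILE, (M + TILE - 1) / TILE);
// dim3 block(TILE, TILE);
// tiledGemm<<<grid, block>>>(dA, dB, dC, M, N, K);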

However, the current support for batched-GEMM is still rudimentary, because tiling and batching are tightly coupled. A large tile size exploits data reuse but decreases thread-level parallelism, which in turn shrinks the optimization space for batching. A small tile size increases thread-level parallelism and thus enlarges the optimization space for batching, but at the cost of sacrificing data reuse. In this paper, we propose a coordinated tiling and batching framework for accelerating GEMM on GPUs. It is a two-phase framework consisting of a tiling engine and a batching engine that together perform efficient batched-GEMM on GPUs: the tiling engine partitions the GEMMs into independent tiles, and the batching engine assigns those tiles to thread blocks. Moreover, we propose a general programming interface for describing coordinated tiling and batching solutions. Finally, evaluation on synthetic batched-GEMM cases shows that our framework achieves about a 1.40X speedup on average over the state-of-the-art technique. We also use GoogleNet as a real-world case study, on which our framework achieves a 1.23X speedup.
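
The following sketch illustrates the batching side of the idea: all tiles of all GEMMs in a batch are flattened into one grid, and each thread block decodes which GEMM and which tile it serves. The fixed modular mapping, the uniform matrix sizes, the naive (non-shared-memory) inner loop, and all names here are simplifying assumptions for illustration; they stand in for whatever assignment the paper's batching engine would select.

#define TILE 16

__global__ void batchedGemmFlat(const float *const *A, const float *const *B,
                                float *const *C, int M, int N, int K) {
    int tilesX = (N + TILE - 1) / TILE;        // tiles per GEMM, x direction
    int tilesY = (M + TILE - 1) / TILE;        // tiles per GEMM, y direction
    int tilesPerGemm = tilesX * tilesY;

    int gemm   = blockIdx.x / tilesPerGemm;    // which GEMM this block serves
    int tileId = blockIdx.x % tilesPerGemm;    // which tile of that GEMM
    int row = (tileId / tilesX) * TILE + threadIdx.y;
    int col = (tileId % tilesX) * TILE + threadIdx.x;
    if (row >= M || col >= N) return;

    const float *a = A[gemm];
    const float *b = B[gemm];
    float acc = 0.0f;
    for (int k = 0; k < K; ++k)                // naive inner product
        acc += a[row * K + k] * b[k * N + col];
    C[gemm][row * N + col] = acc;
}

// Host-side launch: one flat grid covering all tiles of all GEMMs.
// int tilesPerGemm = ((M + TILE - 1) / TILE) * ((N + TILE - 1) / TILE);
// dim3 grid(batchCount * tilesPerGemm), block(TILE, TILE);
// batchedGemmFlat<<<grid, block>>>(dA, dB, dC, M, N, K);

In the framework described above, this hard-coded tile-to-block mapping is exactly the decision the batching engine makes, trading off tile size (data reuse) against the number of tiles (thread-level parallelism) across the whole batch.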

Tue 19 Feb

10:55 - 12:35: Main Conference - Session 6, Best Paper Candidates at Salon 12/13
Chair(s): Rudolf Eigenmann (University of Delaware)
10:55 - 11:20
Talk
Qingsen Wang (College of William and Mary), Pengfei Su (College of William and Mary), Milind Chabbi (Uber Technologies), Xu Liu (College of William and Mary)
11:20 - 11:45
Talk
Ke Meng, Jiajia Li (Georgia Institute of Technology; Pacific Northwest National Laboratory), Guangming Tan (Chinese Academy of Sciences (CAS)), Ninghui Sun (State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences)
11:45 - 12:10
Talk
Umut Acar (Carnegie Mellon University), Vitaly Aksenov (Inria & ITMO University), Arthur Charguéraud (Inria), Mike Rainey (Indiana University, USA)
12:10 - 12:35
Talk: A Coordinated Tiling and Batching Framework for Efficient GEMM on GPUs
Xiuhong Li (Peking University), Eric Liang (Peking University), Shengen Yan (SenseTime), Jia Liancheng (Peking University), Yinghan Li (SenseTime)