Compilation techniques for nested-parallel applications that can adapt to hardware and dataset characteristics are vital for unlocking the power of modern hardware. This paper proposes such a technique, which builds on flattening and is applied in the context of a functional data-parallel language. Our solution uses the degree of utilized parallelism as the driver for generating a multitude of code versions, which together cover all possible mappings of the application’s regular nested parallelism to the levels of parallelism supported by the hardware. These code versions are then combined into one program by guarding them with predicates, whose threshold values are automatically tuned to hardware and dataset characteristics.
Our unsupervised method of statically clustering datasets to code versions differs from typical autotuning work, which searches for the combination of code transformations that produces a single version, best suited either for a specific dataset or on average for all datasets.
We demonstrate, by fully integrating our technique into the repertoire of a compiler for the Futhark programming language, significant performance gains on two GPUs for three real-world applications from the financial domain and for six Rodinia benchmarks.
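The mechanism the abstract describes, several compiled versions of the same nested-parallel computation combined into one program behind tuned predicates, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names and the threshold value are hypothetical, and the "versions" here only mimic the two mappings (outer parallelism only vs. flattened inner parallelism) sequentially.

```python
# Hypothetical auto-tuned threshold; in the paper's approach such values
# are tuned per hardware and dataset, not fixed by hand.
INNER_THRESHOLD = 1024

def sum_rows_outer(rows):
    # Version 1: exploit only the outer level of parallelism
    # (conceptually, one thread per row).
    return [sum(row) for row in rows]

def sum_rows_flat(rows):
    # Version 2: flatten the nest and exploit the inner parallelism
    # (conceptually, one large parallel pass over all elements,
    # followed by a segmented reduction).
    flat = [x for row in rows for x in row]
    sums, i = [], 0
    for row in rows:
        sums.append(sum(flat[i:i + len(row)]))
        i += len(row)
    return sums

def sum_rows(rows):
    # The guarding predicate: if the outer degree of parallelism alone
    # is large enough to saturate the hardware, use the cheap outer
    # version; otherwise fall back to the flattened version.
    if len(rows) >= INNER_THRESHOLD:
        return sum_rows_outer(rows)
    return sum_rows_flat(rows)
```

Both versions compute the same result; the predicate only decides which mapping of the nested parallelism to run, which is what allows a single executable to adapt to dataset shape.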
Mon 18 Feb
|10:55 - 11:20|
|11:20 - 11:45| Hao Wang (The Ohio State University, USA), Liang Geng (The Ohio State University, USA), Rubao Lee (United Parallel Computing Corporation, USA), Kaixi Hou (Virginia Tech, USA), Yanfeng Zhang, Xiaodong Zhang (The Ohio State University, USA)
|11:45 - 12:10| Troels Henriksen (University of Copenhagen, Denmark), Frederik Thorøe (DIKU, University of Copenhagen), Martin Elsman (University of Copenhagen, Denmark), Cosmin Oancea (University of Copenhagen, Denmark)
|12:10 - 12:35| Martin Winter (Graz University of Technology, Austria), Daniel Mlakar (Graz University of Technology, Austria), Rhaleb Zayer (Max Planck Institute for Informatics), Hans-Peter Seidel (Max Planck Institute for Informatics), Markus Steinberger (Graz University of Technology, Austria)