PPoPP 2019
Sat 16 - Wed 20 February 2019, Washington, DC, United States

PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; datacenters; clouds; and large scale machines). Given the rise of parallel architectures in the consumer market (desktops, laptops, and mobile devices) and data centers, PPoPP is particularly interested in work that addresses new parallel workloads and issues that arise out of extreme-scale applications or cloud platforms, as well as techniques and tools that improve the productivity of parallel programming or work towards improved synergy with such emerging architectures.

Proceedings will be available in the ACM Digital Library.

Dates

Sat 16 Feb

Displayed time zone: Guadalajara, Mexico City, Monterrey

08:00 - 17:00

Sun 17 Feb

Displayed time zone: Guadalajara, Mexico City, Monterrey

08:00 - 17:00
18:00 - 20:00
Welcome Reception and Poster Session (Main Conference) at Mezzanine Foyer

Mon 18 Feb

Displayed time zone: Guadalajara, Mexico City, Monterrey

08:00 - 17:00
08:15 - 09:30
Plenary Session 1 (Main Conference) at Salon 6
08:15
15m
Talk
Chairs' Welcome
Main Conference

08:30
60m
Talk
HPCA Keynote: Srini Devadas (MIT)
Main Conference
09:35 - 10:25
Session 1: Big Data (Main Conference) at Salon 12/13
Chair(s): Roberto Palmieri (Lehigh University)
09:35
25m
Talk
Beyond Human-Level Accuracy: Computational Challenges in Deep Learning
Main Conference
Joel Hestness (Baidu Research), Newsha Ardalani (Baidu Research), Gregory Diamos (Baidu Research)
DOI
10:00
25m
Talk
S-EnKF: Co-designing for Scalable Ensemble Kalman Filter
Main Conference
Junmin Xiao, Shijie Wang (Institute of Computing Technology, Chinese Academy of Sciences), Weiqiang Wan (Institute of Computing Technology, Chinese Academy of Sciences), Xuehai Hong (Institute of Computing Technology, Chinese Academy of Sciences), Guangming Tan (Chinese Academy of Sciences (CAS))
DOI
10:55 - 12:35
Session 2: Heterogeneous Platforms and GPU (Main Conference) at Salon 12/13
Chair(s): Xu Liu (College of William and Mary)
10:55
25m
Talk
Throughput-Oriented GPU Memory Allocation
Main Conference
Isaac Gelado (NVIDIA), Michael Garland (NVIDIA Research)
DOI
11:20
25m
Talk
SEP-Graph: Finding Shortest Execution Paths for Graph Processing under a Hybrid Framework on GPU
Main Conference
Hao Wang (The Ohio State University, USA), Liang Geng (The Ohio State University, USA), Rubao Lee (United Parallel Computing Corporation, USA), Kaixi Hou (Virginia Tech, USA), Yanfeng Zhang, Xiaodong Zhang (The Ohio State University, USA)
DOI
11:45
25m
Talk
Incremental Flattening for Nested Data Parallelism
Main Conference
Troels Henriksen (University of Copenhagen, Denmark), Frederik Thorøe (DIKU, University of Copenhagen), Martin Elsman (University of Copenhagen, Denmark), Cosmin Oancea (University of Copenhagen, Denmark)
DOI
12:10
25m
Talk
Adaptive Sparse Matrix-Matrix Multiplication on the GPU
Main Conference
Martin Winter (Graz University of Technology, Austria), Daniel Mlakar (Graz University of Technology, Austria), Rhaleb Zayer (Max Planck Institute for Informatics), Hans-Peter Seidel (Max Planck Institute for Informatics), Markus Steinberger (Graz University of Technology, Austria)
DOI
14:00 - 15:40
Session 3: Transactional Memory (Main Conference) at Salon 12/13
Chair(s): Milind Chabbi (Uber Technologies)
14:00
25m
Talk
Modular Transactions: Bounding Mixed Races in Space and Time
Main Conference
Brijesh Dongol (University of Surrey), Radha Jagadeesan (DePaul University), James Riely (DePaul University)
DOI
14:25
25m
Talk
Leveraging Hardware TM in Haskell
Main Conference
Ryan Yates, Michael Scott (University of Rochester)
DOI
14:50
25m
Talk
Stretching the capacity of Hardware Transactional Memory in IBM POWER architectures
Main Conference
Ricardo Jorge Duarte Filipe, Shady Issa (INESC-ID), João Barreto (INESC-ID), Paolo Romano (University of Lisbon, Portugal)
DOI
15:15
25m
Talk
Processing Transactions in a Predefined Order
Main Conference
Mohamed M. Saad (Virginia Tech), Masoomeh Javidi Kishi (Lehigh University), Shihao Jing (Lehigh University), Sandeep Hans (IBM India Research Lab), Roberto Palmieri (Lehigh University)
DOI
16:10 - 17:00
Session 4: GPU B-Trees (Main Conference) at Salon 12/13
Chair(s): Ang Li (Pacific Northwest National Laboratory)
16:10
25m
Talk
Harmonia: A High Throughput B+tree for GPUs
Main Conference
Zhaofeng Yan (Fudan University), Yuzhe Lin (Fudan University), Lu Peng, Weihua Zhang (Fudan University)
DOI
16:35
25m
Talk
Engineering a High-Performance GPU B-Tree
Main Conference
Muhammad Awad, Saman Ashkiani (University of California, Davis), Rob Johnson (VMware Research), Martin Farach-Colton (Rutgers University), John D. Owens (University of California, Davis)
DOI
17:00 - 18:00
Business Meeting (Main Conference) at Salon 12/13

Tue 19 Feb

Displayed time zone: Guadalajara, Mexico City, Monterrey

08:00 - 17:00
08:15 - 09:30
Plenary Session 2 (Main Conference) at Salon 6
Chair(s): Idit Keidar (Technion - Israel Institute of Technology)
08:15
75m
Talk
PPoPP Keynote: Karin Strauss (Microsoft Research)
Main Conference
Speaker: Karin Strauss (Microsoft Research)
09:35 - 10:25
Session 5: Performance (Main Conference) at Salon 12/13
Chair(s): Cosmin Oancea (University of Copenhagen, Denmark)
09:35
25m
Talk
QTLS: High-Performance TLS Asynchronous Offload Framework with Intel® QuickAssist Technology
Main Conference
Xiaokang Hu (Shanghai Jiao Tong University), Changzheng Wei (Intel Asia-Pacific Research and Development Ltd.), Li Jian, Brian Will (Intel Corporation), Ping Yu (Intel Asia-Pacific Research and Development Ltd.), Lu Gong (Intel Asia-Pacific Research and Development Ltd.), Haibing Guan (Shanghai Jiao Tong University)
DOI
10:00
25m
Talk
Data-Flow/Dependence Profiling for Structured Transformations
Main Conference
Fabian Gruber (Université Grenoble Alpes / INRIA Grenoble Rhône-Alpes), Manuel Selva (Université Grenoble Alpes), Diogo Sampaio (Inria), Christophe Guillon (STMicroelectronics), Antoine Moynault (STMicroelectronics), Louis-Noël Pouchet (Colorado State University), Fabrice Rastello (INRIA)
DOI
10:55 - 12:35
Session 6: Best Paper Candidates (Main Conference) at Salon 12/13
Chair(s): Rudolf Eigenmann (University of Delaware)
10:55
25m
Talk
Lightweight Hardware Transactional Memory Profiling
Main Conference
Qingsen Wang (College of William and Mary), Pengfei Su (College of William and Mary), Milind Chabbi (Uber Technologies), Xu Liu (College of William and Mary)
DOI
11:20
25m
Talk
A Pattern Based Algorithmic Autotuner for Graph Processing on GPUs
Main Conference
Ke Meng, Jiajia Li (Georgia Institute of Technology / Pacific Northwest National Laboratory), Guangming Tan (Chinese Academy of Sciences (CAS)), Ninghui Sun (State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences / University of Chinese Academy of Sciences)
DOI
11:45
25m
Talk
Provably and Practically Efficient Granularity Control
Main Conference
Umut A. Acar (Carnegie Mellon University), Vitaly Aksenov (Inria & ITMO University), Arthur Charguéraud (Inria), Mike Rainey (Indiana University, USA)
DOI
12:10
25m
Talk
A Coordinated Tiling and Batching Framework for Efficient GEMM on GPUs
Main Conference
Xiuhong Li (Peking University), Eric Liang (Peking University), Shengen Yan (SenseTime), Jia Liancheng (Peking University), Yinghan Li (SenseTime)
DOI
14:00 - 15:15
Session 7: Scheduling (Main Conference) at Salon 12/13
Chair(s): Jidong Zhai (Tsinghua University)
14:00
25m
Talk
Semantics-Aware Scheduling Policies for Synchronization Determinism
Main Conference
Qi Zhao (North Carolina State University), Zhengyi Qiu (North Carolina State University), Guoliang Jin (North Carolina State University)
DOI
14:25
25m
Talk
Proactive Work Stealing for Futures
Main Conference
Kyle Singer (Washington University in St. Louis), Yifan Xu (Washington University in St. Louis), I-Ting Angelina Lee (Washington University in St. Louis)
DOI
14:50
25m
Talk
A Round-Efficient Distributed Betweenness Centrality Algorithm
Main Conference
Loc Hoang (University of Texas at Austin, USA), Matteo Pontecorvi (Nokia Bell Labs), Roshan Dathathri (University of Texas at Austin, USA), Gurbinder Gill (University of Texas at Austin, USA), Bozhi You (Xi'an Jiaotong University), Keshav Pingali (University of Texas at Austin, USA), Vijaya Ramachandran (University of Texas at Austin)
DOI
15:45 - 16:35
Session 8: HPC (Main Conference) at Salon 12/13
Chair(s): I-Ting Angelina Lee (Washington University in St. Louis)
15:45
25m
Talk
Corrected Trees for Reliable Group Communication
Main Conference
Martin Küttler (TU Dresden), Maksym Planeta (TU Dresden, Germany), Jan Bierbaum (TU Dresden), Carsten Weinhold (TU Dresden), Hermann Härtig (TU Dresden), Amnon Barak (The Hebrew University of Jerusalem), Torsten Hoefler (ETH Zurich)
DOI
16:10
25m
Talk
Adaptive Sparse Tiling for Sparse Matrix Multiplication
Main Conference
Changwan Hong, Aravind Sukumaran-Rajam (Ohio State University, USA), Israt Nisa, Kunal Singh (The Ohio State University), P. Sadayappan (Ohio State University)
DOI
17:10 - 18:30
19:00 - 21:00
Excursion and Banquet Dinner (Main Conference) at Air and Space Museum

Wed 20 Feb

Displayed time zone: Guadalajara, Mexico City, Monterrey

08:00 - 13:00
08:15 - 09:30
Plenary Session 3 (Main Conference) at Salon 6
08:15
75m
Talk
CGO Keynote: Michael O’Boyle (University of Edinburgh)
Main Conference
Speaker: Michael F. P. O'Boyle (University of Edinburgh)
09:35 - 10:50
Session 9: Synchronization (Main Conference) at Salon 12/13
Chair(s): Erez Petrank (Technion)
09:35
25m
Talk
Encapsulated Open Nesting for STM: Fine-Grained Higher-Level Conflict Detection
Main Conference
Martin Bättig (Department of Computer Science, ETH Zurich), Thomas Gross (ETH Zurich)
DOI
10:00
25m
Talk
A Specialized B-Tree for Concurrent Datalog Evaluation
Main Conference
Herbert Jordan (University of Innsbruck), Pavle Subotic (University College London), David Zhao (The University of Sydney), Bernhard Scholz (University of Sydney, Australia)
DOI
10:25
25m
Talk
Efficient Race Detection with Futures
Main Conference
Robert Utterback (Monmouth College), Kunal Agrawal (Washington University in St. Louis), Jeremy Fineman, I-Ting Angelina Lee (Washington University in St. Louis)
DOI
11:20 - 12:35
Session 10: Verification (Main Conference) at Salon 12/13
Chair(s): Michael Lam (James Madison University)
11:20
25m
Talk
Verifying C11 Programs Operationally
Main Conference
Simon Doherty (University of Sheffield), Brijesh Dongol (University of Surrey), Heike Wehrheim (Paderborn University), John Derrick (University of Sheffield)
DOI
11:45
25m
Talk
Checking Linearizability Using Hitting Families
Main Conference
Burcu Kulahcioglu Ozkan (MPI-SWS, Germany), Rupak Majumdar (MPI-SWS, Germany), Filip Niksic (University of Pennsylvania)
DOI
12:10
25m
Talk
Transitive Joins: A Sound and Efficient Online Deadlock-Avoidance Policy
Main Conference
Caleb Voss (Georgia Institute of Technology), Tiago Cogumbreiro (University of Massachusetts Boston), Vivek Sarkar (Rice University, USA)
DOI
12:35 - 13:00
Best Paper Award and Closing (Main Conference) at Salon 12/13

Accepted Papers

A Coordinated Tiling and Batching Framework for Efficient GEMM on GPUs
Main Conference
DOI
Adaptive Sparse Matrix-Matrix Multiplication on the GPU
Main Conference
DOI
Adaptive Sparse Tiling for Sparse Matrix Multiplication
Main Conference
DOI
A Pattern Based Algorithmic Autotuner for Graph Processing on GPUs
Main Conference
DOI
A Round-Efficient Distributed Betweenness Centrality Algorithm
Main Conference
DOI
A Specialized B-Tree for Concurrent Datalog Evaluation
Main Conference
DOI
Beyond Human-Level Accuracy: Computational Challenges in Deep Learning
Main Conference
DOI
Checking Linearizability Using Hitting Families
Main Conference
DOI
Corrected Trees for Reliable Group Communication
Main Conference
DOI
Data-Flow/Dependence Profiling for Structured Transformations
Main Conference
DOI
Efficient Race Detection with Futures
Main Conference
DOI
Encapsulated Open Nesting for STM: Fine-Grained Higher-Level Conflict Detection
Main Conference
DOI
Engineering a High-Performance GPU B-Tree
Main Conference
DOI
Harmonia: A High Throughput B+tree for GPUs
Main Conference
DOI
Incremental Flattening for Nested Data Parallelism
Main Conference
DOI
Leveraging Hardware TM in Haskell
Main Conference
DOI
Lightweight Hardware Transactional Memory Profiling
Main Conference
DOI
Modular Transactions: Bounding Mixed Races in Space and Time
Main Conference
DOI
Proactive Work Stealing for Futures
Main Conference
DOI
Processing Transactions in a Predefined Order
Main Conference
DOI
Provably and Practically Efficient Granularity Control
Main Conference
DOI
QTLS: High-Performance TLS Asynchronous Offload Framework with Intel® QuickAssist Technology
Main Conference
DOI
Semantics-Aware Scheduling Policies for Synchronization Determinism
Main Conference
DOI
S-EnKF: Co-designing for Scalable Ensemble Kalman Filter
Main Conference
DOI
SEP-Graph: Finding Shortest Execution Paths for Graph Processing under a Hybrid Framework on GPU
Main Conference
DOI
Stretching the capacity of Hardware Transactional Memory in IBM POWER architectures
Main Conference
DOI
Throughput-Oriented GPU Memory Allocation
Main Conference
DOI
Transitive Joins: A Sound and Efficient Online Deadlock-Avoidance Policy
Main Conference
DOI
Verifying C11 Programs Operationally
Main Conference
DOI

Call for Papers

PPoPP 2019: 24th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming

Washington DC, USA, Feb 16 – 20, 2019 (co-located with HPCA-2019 and CGO-2019)

Important dates

  • Paper registration and abstract submission: August 13, 2018
  • Full paper submission: August 20, 2018
  • Author response period: October 28–November 1, 2018
  • Author Notification: November 15, 2018
  • Artifact submission to AE committee: November 23, 2018
  • Artifact notification by AE committee: December 20, 2018
  • Final paper due: January 4, 2019

All deadlines are at midnight anywhere on earth (AoE), and are firm.

Scope

PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; datacenters; clouds; and large scale machines). Given the rise of parallel architectures in the consumer market (desktops, laptops, and mobile devices) and data centers, PPoPP is particularly interested in work that addresses new parallel workloads and issues that arise out of extreme-scale applications or cloud platforms, as well as techniques and tools that improve the productivity of parallel programming or work towards improved synergy with such emerging architectures.

Specific topics of interest include (but are not limited to):

  • Compilers and runtime systems for parallel and heterogeneous systems
  • Concurrent data structures
  • Development, analysis, or management tools
  • Fault tolerance for parallel systems
  • Formal analysis and verification
  • High-performance / scientific computing
  • Libraries
  • Middleware for parallel systems
  • Parallel algorithms
  • Parallel applications and frameworks
  • Parallel programming for deep memory hierarchies including nonvolatile memory
  • Parallel programming languages
  • Parallel programming theory and models
  • Parallelism in non-scientific workloads: web, search, analytics, cloud, machine learning
  • Performance analysis, debugging and optimization
  • Programming tools for parallel and heterogeneous systems
  • Software engineering for parallel programs
  • Software for heterogeneous architectures
  • Software productivity for parallel programming
  • Synchronization and concurrency control

Papers should report on original research relevant to parallel programming and should contain enough background material to make them accessible to the entire parallel programming research community. Papers describing experience should indicate how they illustrate general principles or lead to new insights; papers about parallel programming foundations should indicate how they relate to practice.

PPoPP submissions will be evaluated based on their technical merit and accessibility. Submissions should clearly motivate the importance of the problem being addressed, compare to the existing body of work on the topic, and explicitly and precisely state the paper’s key contributions and results towards addressing the problem. Submissions should strive to be accessible both to a broad audience and to experts in the area.

Paper Submission

Conference submission site

All submissions must be made electronically through the conference web site and must include an abstract (100–400 words), author contact information, and the full list of authors and their affiliations. Full paper submissions must be in PDF, formatted to print on both A4 and US letter size paper.

All papers must be prepared in ACM Conference Format using the acmart format (use the SIGPLAN proceedings template acmart-sigplanproc-template.tex). You may also want to consult the official ACM information on the Master Article Template and related tools.

Papers should contain a maximum of 10 pages of text and figures (in a typeface no smaller than 10 point), NOT INCLUDING references. There is no page limit for references, and they must include the names of all authors (not “et al.”). Appendices are not allowed, but authors may submit supplementary material, such as proofs or source code; all supplementary material must be in PDF or ZIP format. Looking at supplementary material is at the discretion of the reviewers.

Submission is double blind and authors will need to identify any potential conflicts of interest with PC and Extended Review Committee members, as defined here: http://www.sigplan.org/Resources/Policies/Review/ (ACM SIGPLAN policy). Detailed instructions for electronic submission and other important ACM SIGPLAN Policies are posted here: Submission Guidelines.

PPoPP 2019 will employ a lightweight double-blind reviewing process. To facilitate this process, submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any references to their own related work are in the third person (e.g., not “We build on our previous work …” but rather “We build on the work of …”). The purpose of this process is to help the PC and external reviewers come to an initial judgment about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult. In particular, important background references should not be omitted or anonymized. In addition, authors should feel free to disseminate their ideas or draft versions of their paper as they normally would. For instance, authors may post drafts of their papers on the web or give talks on their research ideas. Authors with further questions on double-blind reviewing are encouraged to contact the Program Chair by email.

Submissions should be in PDF and printable on both US Letter and A4 paper. Papers may be resubmitted to the submission site multiple times up until the deadline, but the last version submitted before the deadline will be the version reviewed. Papers that exceed the length requirement, that deviate from the expected format, or that are submitted late will be rejected.

All submissions that are not accepted for regular presentations will automatically be considered for posters. Two-page summaries of posters will be included in the conference proceedings (authors must decide by December 15 if they want to submit a poster).

To allow reproducibility, we encourage authors of accepted papers to submit their papers for Artifact Evaluation (AE). The AE process begins after the acceptance notification, and is run by a separate committee whose task is to assess how the artifacts support the work described in the papers. Artifact evaluation is voluntary and will not affect paper acceptance, but will be taken into consideration when selecting papers for awards. Papers that go through the AE process successfully will receive one or several of the ACM reproducibility badges, printed on the papers themselves. For more information, see: http://ctuning.org/ae/ppopp2019.html.

Deadlines expire at midnight anywhere on earth.

Publication Date

The titles of all accepted papers are typically announced shortly after the author notification date (around mid-November 2018). Note, however, that this is not the official publication date. The official publication date is the date the proceedings are made available in the ACM Digital Library. ACM will make the proceedings available via the Digital Library for one month, beginning up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

PPoPP Keynote

Title: When Moore met Feynman: Ultra-dense data storage and extreme parallelism with electronic-molecular systems

Abstract: Sustaining Moore’s law is an increasingly challenging proposition. This talk will cover an alternative approach: going directly to the molecular level, as suggested by Feynman in his famous lecture, “There’s Plenty of Room at the Bottom.” Although we have yet to achieve scalable, general-purpose molecular computation, there are areas of IT in which a molecular approach shows growing promise.

In this talk, I will explain how molecules, specifically synthetic DNA, can store digital data and perform certain types of special-purpose computation by leveraging tools already developed by the biotechnology industry. I will also discuss the architectural implications of molecular storage and processing systems and advocate for hybrid electronic-molecular systems as potential solutions to difficult computational problems, such as large-scale similarity search.

Bio: Karin Strauss is a Principal Researcher at Microsoft Corporation and an Affiliate Professor at the University of Washington. She co-leads the Molecular Information System Laboratory with Luis Ceze, working on using molecules, currently DNA, to benefit the IT industry. Her background is in computer architecture, systems, and, most recently, biology. Her research interests include emerging storage technologies, scaling of computation and storage, and special-purpose accelerators. Selected as one of the “100 Most Creative People in Business in 2016” by Fast Company Magazine, she received her PhD from the Department of Computer Science at the University of Illinois at Urbana-Champaign in 2007.