PPoPP 2019
Sat 16 - Wed 20 February 2019, Washington, DC, United States

PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; datacenters; clouds; and large scale machines). Given the rise of parallel architectures in the consumer market (desktops, laptops, and mobile devices) and data centers, PPoPP is particularly interested in work that addresses new parallel workloads and issues that arise out of extreme-scale applications or cloud platforms, as well as techniques and tools that improve the productivity of parallel programming or work towards improved synergy with such emerging architectures.

Proceedings will be available on the ACM Digital Library.


Sat 16 Feb

08:00 - 17:00

Sun 17 Feb

08:00 - 17:00
18:00 - 20:00
Welcome Reception and Poster Session (Main Conference) at Mezzanine Foyer

Mon 18 Feb

08:00 - 17:00
08:15 - 09:30
Plenary Session 1 (Main Conference) at Salon 6
08:15
15m
Talk
Chairs' Welcome
Main Conference
08:30
60m
Talk
HPCA Keynote: Srini Devadas (MIT)
Main Conference
09:35 - 10:25
Session 1: Big Data (Main Conference) at Salon 12/13
Chair(s): Roberto Palmieri (Lehigh University)
09:35
25m
Talk
Beyond Human-Level Accuracy: Computational Challenges in Deep Learning
Main Conference
Joel Hestness (Baidu Research), Newsha Ardalani (Baidu Research), Gregory Diamos (Baidu Research)
DOI
10:00
25m
Talk
S-EnKF: Co-designing for Scalable Ensemble Kalman Filter
Main Conference
Junmin Xiao, Shijie Wang (Institute of Computing Technology, Chinese Academy of Sciences), Weiqiang Wan (Institute of Computing Technology, Chinese Academy of Sciences), Xuehai Hong (Institute of Computing Technology, Chinese Academy of Sciences), Guangming Tan (Chinese Academy of Sciences (CAS))
DOI
10:55 - 12:35
Session 2: Heterogeneous Platforms and GPU (Main Conference) at Salon 12/13
Chair(s): Xu Liu (College of William and Mary)
10:55
25m
Talk
Throughput-Oriented GPU Memory Allocation
Main Conference
Isaac Gelado (NVIDIA), Michael Garland (NVIDIA Research)
DOI
11:20
25m
Talk
SEP-Graph: Finding Shortest Execution Paths for Graph Processing under a Hybrid Framework on GPU
Main Conference
Hao Wang (The Ohio State University, USA), Liang Geng (The Ohio State University, USA), Rubao Lee (United Parallel Computing Corporation, USA), Kaixi Hou (Virginia Tech, USA), Yanfeng Zhang, Xiaodong Zhang (The Ohio State University, USA)
DOI
11:45
25m
Talk
Incremental Flattening for Nested Data Parallelism
Main Conference
Troels Henriksen (University of Copenhagen, Denmark), Frederik Thorøe (DIKU, University of Copenhagen), Martin Elsman (University of Copenhagen, Denmark), Cosmin Oancea (University of Copenhagen, Denmark)
DOI
12:10
25m
Talk
Adaptive Sparse Matrix-Matrix Multiplication on the GPU
Main Conference
Martin Winter (Graz University of Technology, Austria), Daniel Mlakar (Graz University of Technology, Austria), Rhaleb Zayer (Max Planck Institute for Informatics), Hans-Peter Seidel (Max Planck Institute for Informatics), Markus Steinberger (Graz University of Technology, Austria)
DOI
14:00 - 15:40
Session 3: Transactional Memory (Main Conference) at Salon 12/13
Chair(s): Milind Chabbi (Uber Technologies)
14:00
25m
Talk
Modular Transactions: Bounding Mixed Races in Space and Time
Main Conference
Brijesh Dongol (University of Surrey), Radha Jagadeesan (DePaul University), James Riely (DePaul University)
DOI
14:25
25m
Talk
Leveraging Hardware TM in Haskell
Main Conference
Ryan Yates, Michael Scott (University of Rochester)
DOI
14:50
25m
Talk
Stretching the capacity of Hardware Transactional Memory in IBM POWER architectures
Main Conference
Ricardo Jorge Duarte Filipe, Shady Issa (INESC-ID), João Barreto (INESC-ID), Paolo Romano (University of Lisbon, Portugal)
DOI
15:15
25m
Talk
Processing Transactions in a Predefined Order
Main Conference
Mohamed M. Saad (Virginia Tech), Masoomeh Javidi Kishi (Lehigh University), Shihao Jing (Lehigh University), Sandeep Hans (IBM India Research Lab), Roberto Palmieri (Lehigh University)
DOI
16:10 - 17:00
Session 4: GPU B-Trees (Main Conference) at Salon 12/13
Chair(s): Ang Li (Pacific Northwest National Laboratory)
16:10
25m
Talk
Harmonia: A High Throughput B+tree for GPUs
Main Conference
Zhaofeng Yan (Fudan University), Yuzhe Lin (Fudan University), Lu Peng, Weihua Zhang (Fudan University)
DOI
16:35
25m
Talk
Engineering a High-Performance GPU B-Tree
Main Conference
Muhammad Awad, Saman Ashkiani (University of California, Davis), Rob Johnson (VMware Research), Martin Farach-Colton (Rutgers University), John D. Owens (University of California, Davis)
DOI
17:00 - 18:00
Business Meeting (Main Conference) at Salon 12/13

Tue 19 Feb

08:00 - 17:00
08:15 - 09:30
Plenary Session 2 (Main Conference) at Salon 6
Chair(s): Idit Keidar (Technion - Israel Institute of Technology)
08:15
75m
Talk
PPoPP Keynote: Karin Strauss (Microsoft Research)
Main Conference
Speaker: Karin Strauss (Microsoft Research)
09:35 - 10:25
Session 5: Performance (Main Conference) at Salon 12/13
Chair(s): Cosmin Oancea (University of Copenhagen, Denmark)
09:35
25m
Talk
QTLS: High-Performance TLS Asynchronous Offload Framework with Intel® QuickAssist Technology
Main Conference
Xiaokang Hu (Shanghai Jiao Tong University), Changzheng Wei (Intel Asia-Pacific Research and Development Ltd.), Li Jian, Brian Will (Intel Corporation), Ping Yu (Intel Asia-Pacific Research and Development Ltd.), Lu Gong (Intel Asia-Pacific Research and Development Ltd.), Haibing Guan (Shanghai Jiao Tong University)
DOI
10:00
25m
Talk
Data-Flow/Dependence Profiling for Structured Transformations
Main Conference
Fabian Gruber (Université Grenoble Alpes / INRIA Grenoble Rhône-Alpes), Manuel Selva (Université Grenoble Alpes), Diogo Sampaio (Inria), Christophe Guillon (STMicroelectronics), Antoine Moynault (STMicroelectronics), Louis-Noël Pouchet (Colorado State University), Fabrice Rastello (INRIA)
DOI
10:55 - 12:35
Session 6: Best Paper Candidates (Main Conference) at Salon 12/13
Chair(s): Rudolf Eigenmann (University of Delaware)
10:55
25m
Talk
Lightweight Hardware Transactional Memory Profiling
Main Conference
Qingsen Wang (College of William and Mary), Pengfei Su (College of William and Mary), Milind Chabbi (Uber Technologies), Xu Liu (College of William and Mary)
DOI
11:20
25m
Talk
A Pattern Based Algorithmic Autotuner for Graph Processing on GPUs
Main Conference
Ke Meng, Jiajia Li (Georgia Institute of Technology / Pacific Northwest National Laboratory), Guangming Tan (Chinese Academy of Sciences (CAS)), Ninghui Sun (State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences)
DOI
11:45
25m
Talk
Provably and Practically Efficient Granularity Control
Main Conference
Umut A. Acar (Carnegie Mellon University), Vitaly Aksenov (Inria & ITMO University), Arthur Charguéraud (Inria), Mike Rainey (Indiana University, USA)
DOI
12:10
25m
Talk
A Coordinated Tiling and Batching Framework for Efficient GEMM on GPUs
Main Conference
Xiuhong Li (Peking University), Eric Liang (Peking University), Shengen Yan (SenseTime), Jia Liancheng (Peking University), Yinghan Li (SenseTime)
DOI
14:00 - 15:15
Session 7: Scheduling (Main Conference) at Salon 12/13
Chair(s): Jidong Zhai (Tsinghua University)
14:00
25m
Talk
Semantics-Aware Scheduling Policies for Synchronization Determinism
Main Conference
Qi Zhao (North Carolina State University), Zhengyi Qiu (North Carolina State University), Guoliang Jin (North Carolina State University)
DOI
14:25
25m
Talk
Proactive Work Stealing for Futures
Main Conference
Kyle Singer (Washington University in St. Louis), Yifan Xu (Washington University in St. Louis), I-Ting Angelina Lee (Washington University in St. Louis)
DOI
14:50
25m
Talk
A Round-Efficient Distributed Betweenness Centrality Algorithm
Main Conference
Loc Hoang (University of Texas at Austin, USA), Matteo Pontecorvi (Nokia Bell Labs), Roshan Dathathri (University of Texas at Austin, USA), Gurbinder Gill (University of Texas at Austin, USA), Bozhi You (Xi'an Jiaotong University), Keshav Pingali (University of Texas at Austin, USA), Vijaya Ramachandran (University of Texas at Austin)
DOI
15:45 - 16:35
Session 8: HPC (Main Conference) at Salon 12/13
Chair(s): I-Ting Angelina Lee (Washington University in St. Louis)
15:45
25m
Talk
Corrected Trees for Reliable Group Communication
Main Conference
Martin Küttler (TU Dresden), Maksym Planeta (TU Dresden, Germany), Jan Bierbaum (TU Dresden), Carsten Weinhold (TU Dresden), Hermann Härtig (TU Dresden), Amnon Barak (The Hebrew University of Jerusalem), Torsten Hoefler (ETH Zurich)
DOI
16:10
25m
Talk
Adaptive Sparse Tiling for Sparse Matrix Multiplication
Main Conference
Changwan Hong, Aravind Sukumaran-Rajam (Ohio State University, USA), Israt Nisa, Kunal Singh (The Ohio State University), P. Sadayappan (Ohio State University)
DOI
17:10 - 18:30
19:00 - 21:00
Excursion and Banquet Dinner (Main Conference) at Air and Space Museum

Wed 20 Feb

08:00 - 13:00
08:15 - 09:30
Plenary Session 3 (Main Conference) at Salon 6
08:15
75m
Talk
CGO Keynote: Michael O’Boyle (University of Edinburgh)
Main Conference
Speaker: Michael F. P. O'Boyle (University of Edinburgh)
09:35 - 10:50
Session 9: Synchronization (Main Conference) at Salon 12/13
Chair(s): Erez Petrank (Technion)
09:35
25m
Talk
Encapsulated Open Nesting for STM: Fine-Grained Higher-Level Conflict Detection
Main Conference
Martin Bättig (Department of Computer Science, ETH Zurich), Thomas Gross (ETH Zurich)
DOI
10:00
25m
Talk
A Specialized B-Tree for Concurrent Datalog Evaluation
Main Conference
Herbert Jordan (University of Innsbruck), Pavle Subotic (University College London), David Zhao (The University of Sydney), Bernhard Scholz (University of Sydney, Australia)
DOI
10:25
25m
Talk
Efficient Race Detection with Futures
Main Conference
Robert Utterback (Monmouth College), Kunal Agrawal (Washington University in St. Louis), Jeremy Fineman, I-Ting Angelina Lee (Washington University in St. Louis)
DOI
11:20 - 12:35
Session 10: Verification (Main Conference) at Salon 12/13
Chair(s): Michael Lam (James Madison University)
11:20
25m
Talk
Verifying C11 Programs Operationally
Main Conference
Simon Doherty (University of Sheffield), Brijesh Dongol (University of Surrey), Heike Wehrheim (Paderborn University), John Derrick (University of Sheffield)
DOI
11:45
25m
Talk
Checking Linearizability Using Hitting Families
Main Conference
Burcu Kulahcioglu Ozkan (MPI-SWS, Germany), Rupak Majumdar (MPI-SWS, Germany), Filip Niksic (University of Pennsylvania)
DOI
12:10
25m
Talk
Transitive Joins: A Sound and Efficient Online Deadlock-Avoidance Policy
Main Conference
Caleb Voss (Georgia Institute of Technology), Tiago Cogumbreiro (University of Massachusetts Boston), Vivek Sarkar (Rice University, USA)
DOI
12:35 - 13:00
Best Paper Award and Closing (Main Conference) at Salon 12/13

Call for Papers

PPoPP 2019: 24th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming

Washington DC, USA, Feb 16 – 20, 2019 (co-located with HPCA-2019 and CGO-2019)

Important dates

  • Paper registration and abstract submission: August 13, 2018
  • Full paper submission: August 20, 2018
  • Author response period: October 28–November 1, 2018
  • Author Notification: November 15, 2018
  • Artifact submission to AE committee: November 23, 2018
  • Artifact notification by AE committee: December 20, 2018
  • Final paper due: January 4, 2019

All deadlines are at midnight anywhere on earth (AoE), and are firm.

Scope

PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; datacenters; clouds; and large scale machines). Given the rise of parallel architectures in the consumer market (desktops, laptops, and mobile devices) and data centers, PPoPP is particularly interested in work that addresses new parallel workloads and issues that arise out of extreme-scale applications or cloud platforms, as well as techniques and tools that improve the productivity of parallel programming or work towards improved synergy with such emerging architectures.

Specific topics of interest include (but are not limited to):

  • Compilers and runtime systems for parallel and heterogeneous systems
  • Concurrent data structures
  • Development, analysis, or management tools
  • Fault tolerance for parallel systems
  • Formal analysis and verification
  • High-performance / scientific computing
  • Libraries
  • Middleware for parallel systems
  • Parallel algorithms
  • Parallel applications and frameworks
  • Parallel programming for deep memory hierarchies including nonvolatile memory
  • Parallel programming languages
  • Parallel programming theory and models
  • Parallelism in non-scientific workloads: web, search, analytics, cloud, machine learning
  • Performance analysis, debugging and optimization
  • Programming tools for parallel and heterogeneous systems
  • Software engineering for parallel programs
  • Software for heterogeneous architectures
  • Software productivity for parallel programming
  • Synchronization and concurrency control

Papers should report on original research relevant to parallel programming and should contain enough background materials to make them accessible to the entire parallel programming research community. Papers describing experience should indicate how they illustrate general principles or lead to new insights; papers about parallel programming foundations should indicate how they relate to practice.

PPoPP submissions will be evaluated based on their technical merit and accessibility. Submissions should clearly motivate the importance of the problem being addressed, compare to the existing body of work on the topic, and explicitly and precisely state the paper’s key contributions and results towards addressing the problem. Submissions should strive to be accessible both to a broad audience and to experts in the area.

Paper Submission

Conference submission site

All submissions must be made electronically through the conference web site and must include an abstract (100–400 words), author contact information, and the full list of authors and their affiliations. Full paper submissions must be in PDF, formatted to print on both A4 and US letter size paper.

All papers must be prepared in ACM Conference Format using the acmart format (use the SIGPLAN proceedings template acmart-sigplanproc-template.tex). You may also want to consult the official ACM information on the Master Article Template and related tools.
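
For orientation, the following is a minimal sketch of what an acmart-based SIGPLAN submission typically looks like. It is not the official template: the exact preamble is defined by acmart-sigplanproc-template.tex, and the class options shown here (sigplan, review, anonymous, 10pt) are assumptions that should be checked against the current ACM instructions.

  % Minimal sketch of an acmart/SIGPLAN submission skeleton (assumed options; verify against the official template).
  \documentclass[sigplan,review,anonymous,10pt]{acmart}

  \begin{document}

  \title{Your Paper Title}
  % With the 'anonymous' class option, author and affiliation information is
  % suppressed in the generated PDF, as required by double-blind reviewing.
  \author{Anonymous Author(s)}
  \affiliation{\institution{Anonymous Institution}}

  % In acmart, the abstract must appear before \maketitle.
  \begin{abstract}
  A 100--400 word abstract summarizing the problem, approach, and key results.
  \end{abstract}

  \maketitle

  \section{Introduction}
  Body text in a typeface no smaller than 10 point; at most 10 pages of text
  and figures, excluding references.

  % References do not count toward the page limit and must list all authors (no ``et al.'').
  \bibliographystyle{ACM-Reference-Format}
  \bibliography{references}

  \end{document}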

Papers should contain a maximum of 10 pages of text and figures (in a typeface no smaller than 10 point), NOT INCLUDING references. There is no page limit for references, and references must include the names of all authors (not et al.). Appendices are not allowed, but the authors may submit supplementary material, such as proofs or source code; all supplementary material must be in PDF or ZIP format. Looking at supplementary material is at the discretion of the reviewers.

Submission is double-blind, and authors will need to identify any potential conflicts of interest with PC and Extended Review Committee members, as defined in the ACM SIGPLAN policy: http://www.sigplan.org/Resources/Policies/Review/. Detailed instructions for electronic submission and other important ACM SIGPLAN policies are posted here: Submission Guidelines.

PPoPP 2019 will employ a lightweight double-blind reviewing process. To facilitate this process, submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any references to their own related work are in the third person (e.g., not “We build on our previous work …” but rather “We build on the work of …”). The purpose of this process is to help the PC and external reviewers come to an initial judgment about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult. In particular, important background references should not be omitted or anonymized. In addition, authors should feel free to disseminate their ideas or draft versions of their paper as they normally would. For instance, authors may post drafts of their papers on the web or give talks on their research ideas. Authors with further questions on double-blind reviewing are encouraged to contact the Program Chair by email.

Submissions should be in PDF and printable on both US Letter and A4 paper. Papers may be resubmitted to the submission site multiple times up until the deadline, but the last version submitted before the deadline will be the version reviewed. Papers that exceed the length requirement, that deviate from the expected format, or that are submitted late will be rejected.

All submissions that are not accepted for regular presentations will automatically be considered for posters. Two-page summaries of posters will be included in the conference proceedings (authors must decide by December 15 if they want to submit a poster).

To allow reproducibility, we encourage authors of accepted papers to submit their papers for Artifact Evaluation (AE). The AE process begins after the acceptance notification, and is run by a separate committee whose task is to assess how the artifacts support the work described in the papers. Artifact evaluation is voluntary and will not affect paper acceptance, but will be taken into consideration when selecting papers for awards. Papers that go through the AE process successfully will receive one or several of the ACM reproducibility badges, printed on the papers themselves. For more information, see: http://ctuning.org/ae/ppopp2019.html.

Deadlines expire at midnight anywhere on earth.

Publication Date

The titles of all accepted papers are typically announced shortly after the author notification date (around mid-November 2018). Note, however, that this is not the official publication date. The official publication date is the date the proceedings are made available in the ACM Digital Library. ACM will make the proceedings available via the Digital Library for one month, up to 2 weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.


Accepted Papers

  • A Coordinated Tiling and Batching Framework for Efficient GEMM on GPUs
  • A Pattern Based Algorithmic Autotuner for Graph Processing on GPUs
  • A Round-Efficient Distributed Betweenness Centrality Algorithm
  • A Specialized B-Tree for Concurrent Datalog Evaluation
  • Adaptive Sparse Matrix-Matrix Multiplication on the GPU
  • Adaptive Sparse Tiling for Sparse Matrix Multiplication
  • Beyond Human-Level Accuracy: Computational Challenges in Deep Learning
  • Checking Linearizability Using Hitting Families
  • Corrected Trees for Reliable Group Communication
  • Data-Flow/Dependence Profiling for Structured Transformations
  • Efficient Race Detection with Futures
  • Encapsulated Open Nesting for STM: Fine-Grained Higher-Level Conflict Detection
  • Engineering a High-Performance GPU B-Tree
  • Harmonia: A High Throughput B+tree for GPUs
  • Incremental Flattening for Nested Data Parallelism
  • Leveraging Hardware TM in Haskell
  • Lightweight Hardware Transactional Memory Profiling
  • Modular Transactions: Bounding Mixed Races in Space and Time
  • Proactive Work Stealing for Futures
  • Processing Transactions in a Predefined Order
  • Provably and Practically Efficient Granularity Control
  • QTLS: High-Performance TLS Asynchronous Offload Framework with Intel® QuickAssist Technology
  • S-EnKF: Co-designing for Scalable Ensemble Kalman Filter
  • SEP-Graph: Finding Shortest Execution Paths for Graph Processing under a Hybrid Framework on GPU
  • Semantics-Aware Scheduling Policies for Synchronization Determinism
  • Stretching the capacity of Hardware Transactional Memory in IBM POWER architectures
  • Throughput-Oriented GPU Memory Allocation
  • Transitive Joins: A Sound and Efficient Online Deadlock-Avoidance Policy
  • Verifying C11 Programs Operationally

PPoPP Keynote

Title: When Moore met Feynman: Ultra-dense data storage and extreme parallelism with electronic-molecular systems

Abstract: Sustaining Moore’s law is an increasingly challenging proposition. This talk will cover an alternative approach: going directly to the molecular level, as suggested by Feynman in his famous lecture, “There’s Plenty of Room at the Bottom.” Although we have yet to achieve scalable, general-purpose molecular computation, there are areas of IT in which a molecular approach shows growing promise.

In this talk, I will explain how molecules, specifically synthetic DNA, can store digital data and perform certain types of special-purpose computation by leveraging tools already developed by the biotechnology industry. I will also discuss the architectural implications of molecular storage and processing systems and advocate for hybrid electronic-molecular systems as potential solutions to difficult computational problems, such as large-scale similarity search.

Bio: Karin Strauss is a Principal Researcher at Microsoft Corporation and an Affiliate Professor at the University of Washington. She co-leads the Molecular Information System Laboratory with Luis Ceze, working on using molecules, currently DNA, to benefit the IT industry. Her background is in computer architecture, systems, and most recently biology. Her research interests include emerging storage technologies, scaling of computation and storage, and special-purpose accelerators. Selected as one of the “100 Most Creative People in Business in 2016” by Fast Company Magazine, she got her PhD from the Department of Computer Science at the University of Illinois, Urbana-Champaign in 2007.