PPoPP 2019
Sat 16 - Wed 20 February 2019 Washington, DC, United States

Call for Papers

PPoPP 2019: 24th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming

Washington DC, USA, Feb 16 – 20, 2019 (co-located with HPCA-2019 and CGO-2019)

Important dates

  • Paper registration and abstract submission: August 13, 2018
  • Full paper submission: August 20, 2018
  • Author response period: October 28–November 1, 2018
  • Author notification: November 15, 2018
  • Artifact submission to AE committee: November 23, 2018
  • Artifact notification by AE committee: December 20, 2018
  • Final paper due: January 4, 2019

All deadlines are at midnight anywhere on earth (AoE), and are firm.


PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; datacenters; clouds; and large scale machines). Given the rise of parallel architectures in the consumer market (desktops, laptops, and mobile devices) and data centers, PPoPP is particularly interested in work that addresses new parallel workloads and issues that arise out of extreme-scale applications or cloud platforms, as well as techniques and tools that improve the productivity of parallel programming or work towards improved synergy with such emerging architectures.

Specific topics of interest include (but are not limited to):

  • Compilers and runtime systems for parallel and heterogeneous systems
  • Concurrent data structures
  • Development, analysis, or management tools
  • Fault tolerance for parallel systems
  • Formal analysis and verification
  • High-performance / scientific computing
  • Libraries
  • Middleware for parallel systems
  • Parallel algorithms
  • Parallel applications and frameworks
  • Parallel programming for deep memory hierarchies including nonvolatile memory
  • Parallel programming languages
  • Parallel programming theory and models
  • Parallelism in non-scientific workloads: web, search, analytics, cloud, machine learning
  • Performance analysis, debugging and optimization
  • Programming tools for parallel and heterogeneous systems
  • Software engineering for parallel programs
  • Software for heterogeneous architectures
  • Software productivity for parallel programming
  • Synchronization and concurrency control

Papers should report on original research relevant to parallel programming and should contain enough background materials to make them accessible to the entire parallel programming research community. Papers describing experience should indicate how they illustrate general principles or lead to new insights; papers about parallel programming foundations should indicate how they relate to practice.

PPoPP submissions will be evaluated based on their technical merit and accessibility. Submissions should clearly motivate the importance of the problem being addressed, compare to the existing body of work on the topic, and explicitly and precisely state the paper’s key contributions and results towards addressing the problem. Submissions should strive to be accessible both to a broad audience and to experts in the area.

Paper Submission

All submissions must be made electronically through the conference web site and include an abstract (100–400 words), author contact information, and the full list of authors and their affiliations. Full paper submissions must be in PDF format, printable on both A4 and US letter size paper.

Papers should contain a maximum of 10 pages of text or figures (in a typeface no smaller than 10 point), NOT INCLUDING references. There is no page limit for references, and they must include the names of all authors (not “et al.”). Appendices are not allowed, but the authors may submit supplementary material, such as proofs or source code; all supplementary material must be in PDF or ZIP format. Looking at supplementary material is at the discretion of the reviewers.

Submission is double blind and authors will need to identify any potential conflicts of interest with PC and Extended Review Committee members, as defined here: http://www.sigplan.org/Resources/Policies/Review/ (ACM SIGPLAN policy). Detailed instructions for electronic submission and other important ACM SIGPLAN Policies are posted here: Submission Guidelines.

All submissions that are not accepted for regular presentations will automatically be considered for posters. Two-page summaries of posters will be included in the conference proceedings (authors must decide by December 15 if they want to submit a poster).

To allow reproducibility, we encourage authors of accepted papers to submit their papers for Artifact Evaluation (AE). The AE process begins after the acceptance notification, and is run by a separate committee whose task is to assess how the artifacts support the work described in the papers. Artifact evaluation is voluntary and will not affect paper acceptance, but will be taken into consideration when selecting papers for awards. Papers that go through the AE process successfully will receive one or several of the ACM reproducibility badges, printed on the papers themselves. For more information, see: http://ctuning.org/ae/ppopp2019.html.

Publication Date

The titles of all accepted papers are typically announced shortly after the author notification date (around mid-November 2018). Note, however, that this is not the official publication date. The official publication date is the date the proceedings are made available in the ACM Digital Library. ACM will make the proceedings available via the Digital Library for one month, up to 2 weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.


Title: When Moore met Feynman: Ultra-dense data storage and extreme parallelism with electronic-molecular systems

Abstract: Sustaining Moore’s law is an increasingly challenging proposition. This talk will cover an alternative approach: going directly to the molecular level, as suggested by Feynman in his famous lecture, “There’s Plenty of Room at the Bottom.” Although we have yet to achieve scalable, general-purpose molecular computation, there are areas of IT in which a molecular approach shows growing promise.

In this talk, I will explain how molecules, specifically synthetic DNA, can store digital data and perform certain types of special-purpose computation by leveraging tools already developed by the biotechnology industry. I will also discuss the architectural implications of molecular storage and processing systems and advocate for hybrid electronic-molecular systems as potential solutions to difficult computational problems, such as large-scale similarity search.

Bio: Karin Strauss is a Principal Researcher at Microsoft Corporation and an Affiliate Professor at the University of Washington. She co-leads the Molecular Information System Laboratory with Luis Ceze, working on using molecules, currently DNA, to benefit the IT industry. Her background is in computer architecture, systems, and most recently biology. Her research interests include emerging storage technologies, scaling of computation and storage, and special-purpose accelerators. Selected as one of the “100 Most Creative People in Business in 2016” by Fast Company magazine, she received her PhD from the Department of Computer Science at the University of Illinois at Urbana-Champaign in 2007.


Workshops and Tutorials

  • Workshop: Workshop on Programming Models for SIMD/Vector Processing
  • Tutorial: Popper Hands-on Tutorial
  • Tutorial (half day, AM): Building parallel programming language constructs in the AbleC extensible C compiler framework
  • Tutorial (half day, PM): Implementing Parallel and Concurrent Tree Structures


Workshops and Tutorials

  • Workshop: Programming Models and Applications for Multicores and Manycores
  • Tutorial: Programming Quantum Computers: A Primer with IBM Q and D-Wave Exercises
  • Tutorial (half day, AM): High Performance Distributed Deep Learning
  • Tutorial (half day, PM): Performance-Portable C++ Programming with RAJA

18:00-20:00: Reception

  • Joint reception; posters for HPCA mentoring workshop, CGO, PPoPP.


8:15-9:30: Plenary Session

  • Chairs’ welcome and HPCA Keynote: Srini Devadas (MIT)

9:35-10:25: Session 1: Big Data

  • “Beyond Human-Level Accuracy: Computational Challenges in Deep Learning”, Joel Hestness, Newsha Ardalani, Gregory Diamos (Baidu Research)
  • “S-EnKF: Co-designing for Scalable Ensemble Kalman Filter”, Junmin Xiao, Guangming Tan (Institute of Computing Technology, Chinese Academy of Sciences)

10:25-10:55 Coffee Break

10:55-12:35: Session 2: Heterogeneous Platforms and GPU

  • “Throughput-Oriented GPU Memory Allocation”, Isaac Gelado, Michael Garland (NVIDIA)
  • “SEP-Graph: Finding Shortest Execution Paths for Graph Processing under a Hybrid Framework on GPU”, Liang Geng, Hao Wang, Rubao Lee (Ohio State University, USA); Kaixi Hou (Virginia Tech, USA); Yanfeng Zhang (Northeastern University, China); Xiaodong Zhang (Ohio State University, USA)
  • “Incremental Flattening for Nested Data Parallelism”, Troels Henriksen, Frederik Thorøe, Martin Elsman, Cosmin E. Oancea (DIKU, University of Copenhagen)
  • “Adaptive Sparse Matrix-Matrix Multiplication on the GPU”, Martin Winter, Daniel Mlakar (Graz University of Technology, Austria); Rhaleb Zayer, Hans-Peter Seidel (Max Planck Institute for Informatics); Markus Steinberger (Graz University of Technology, Austria)

12:35-14:00: Lunch

14:00-15:40: Session 3: Transactional Memory

  • “Modular Transactions: Bounding Mixed Races in Space and Time”, Brijesh Dongol (University of Surrey); Radha Jagadeesan, James Riely (DePaul University)
  • “Leveraging Hardware TM in Haskell”, Ryan Yates, Michael L. Scott (University of Rochester)
  • “Stretching the capacity of Hardware Transactional Memory in IBM POWER architectures”, Ricardo Filipe, Shady Issa, João Barreto, Paolo Romano (INESC-ID)
  • “Processing Transactions in a Predefined Order”, Mohamed M. Saad (Alexandria University); Masoomeh Javidi Kishi, Shihao Jing (Lehigh University); Sandeep Hans (IBM India Research Lab); Roberto Palmieri (Lehigh University)

15:40-16:10 Coffee Break

16:10-17:00 Session 4: GPU B-Trees

  • “Harmonia: A High Throughput B+tree for GPUs”, Zhaofeng Yan, Yuzhe Lin (Fudan University); Lu Peng (Louisiana State University); Weihua Zhang (Fudan University)
  • “Engineering a High-Performance GPU B-Tree”, Muhammad A. Awad (University of California, Davis); Saman Ashkiani (MapD); Rob Johnson (VMWare Research); Martin Farach-Colton (Rutgers University); John D. Owens (University of California, Davis)

17:00-18:00: Business Meeting


8:15-9:30: Plenary Session

  • PPoPP Keynote: Karin Strauss (Microsoft Research)

9:35-10:25: Session 5: Performance

  • “QTLS: High-Performance TLS Asynchronous Offload Framework with Intel® QuickAssist Technology”, Xiaokang Hu (Shanghai Jiao Tong University); Changzheng Wei (Intel Corporation); Jian Li (Shanghai Jiao Tong University); Brian Will, Ping Yu, Lu Gong (Intel Corporation); Haibing Guan (Shanghai Jiao Tong University)
  • “Data-Flow/Dependence Profiling for Structured Transformations”, Fabian Gruber (Université Grenoble Alpes); Manuel Selva, Diogo Sampaio (Inria); Christophe Guillon, Antoine Moynault (STMicroelectronics); Louis-Noel Pouchet (Colorado State University); Fabrice Rastello (Inria)

10:25-10:55 Coffee Break

10:55-12:35: Session 6: Best Paper Candidates

  • “Lightweight Hardware Transactional Memory Profiling”, Qingsen Wang, Pengfei Su (The College of William & Mary); Milind Chabbi (Uber); Xu Liu (The College of William & Mary)
  • “A Pattern Based Algorithmic Autotuner for Graph Processing on GPUs”, Ke Meng (State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences); Jiajia Li (Georgia Institute of Technology, Pacific Northwest National Laboratory); Guangming Tan (State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences)
  • “Provably and Practically Efficient Granularity Control”, Umut Acar (Carnegie Mellon University); Vitaly Aksenov (Inria & ITMO University); Arthur Chargueraud (Inria); Mike Rainey (Indiana University)
  • “A Coordinated Tiling and Batching Framework for Efficient GEMM on GPUs”, Xiuhong Li, Yun Liang (Peking University); Shengen Yan (SenseTime); Liancheng Jia (Peking University); Yinghan Li (SenseTime)

12:35-14:00: Lunch

14:00-15:15: Session 7: Scheduling

  • “Semantics-Aware Scheduling Policies for Synchronization Determinism”, Qi Zhao, Zhengyi Qiu, Guoliang Jin (North Carolina State University)
  • “Proactive Work Stealing for Futures”, Kyle Singer, Yifan Xu, I-Ting Angelina Lee (Washington University in St. Louis)
  • “A Round-Efficient Distributed Betweenness Centrality Algorithm”, Loc Hoang (University of Texas at Austin); Matteo Pontecorvi (Nokia Bell Labs); Roshan Dathathri, Gurbinder Gill (University of Texas at Austin); Bozhi You (Xi’an Jiaotong University); Keshav Pingali, Vijaya Ramachandran (University of Texas at Austin)

15:15-15:45 Coffee Break

15:45-16:35: Session 8: HPC

  • “Reliable Group Communication using Corrected Trees”, Martin Küttler, Maksym Planeta, Jan Bierbaum, Carsten Weinhold, Hermann Härtig (TU Dresden); Amnon Barak (The Hebrew University of Jerusalem); Torsten Hoefler (ETH Zurich)
  • “Adaptive Sparse Tiling for Sparse Matrix Multiplication”, Changwan Hong, Aravind Sukumaran-Rajam, Israt Nisa, Kunal Singh, P. Sadayappan (The Ohio State University)

17:10-18:30: Panel

19:00: Excursion and Banquet Dinner


8:15-9:30: Plenary Session

  • CGO Keynote: Michael O’Boyle (University of Edinburgh)

9:35-10:50: Session 9: Synchronization

  • “Encapsulated Open Nesting for STM: Fine-Grained Higher-Level Conflict Detection”, Martin Bättig, Thomas Gross (Department of Computer Science, ETH Zurich)
  • “A Specialized B-Tree for Concurrent Datalog Evaluation”, Herbert Jordan (University of Innsbruck); Pavle Subotic (University College London); David Zhao, Bernhard Scholz (The University of Sydney)
  • “Efficient Race Detection with Futures”, Robert Utterback (Monmouth College); Kunal Agrawal (Washington University in St. Louis); Jeremy Fineman (Georgetown University); I-Ting Angelina Lee (Washington University in St. Louis)

10:50-11:20 Coffee Break

11:20-12:35: Session 10: Verification

  • “Verifying C11 Programs Operationally”, Simon Doherty (University of Sheffield); Brijesh Dongol (University of Surrey); Heike Wehrheim (Paderborn University); John Derrick (University of Sheffield)
  • “Checking Linearizability Using Hitting Families”, Burcu Kulahcioglu Ozkan, Rupak Majumdar, Filip Niksic (Max Planck Institute for Software Systems (MPI-SWS))
  • “Transitive Joins: A Sound and Efficient Online Deadlock-Avoidance Policy”, Caleb Voss, Tiago Cogumbreiro, Vivek Sarkar (Georgia Institute of Technology)

12:35-13:00 Best Paper Award, Closing

POSTER: Profiling based out-of-core Hybrid method for large neural networks

  • Yuki Ito, Tokyo Institute of Technology
  • Haruki Imai, IBM Research - Tokyo
  • Tung Le Duc, IBM Research - Tokyo
  • Yasushi Negishi, IBM Research - Tokyo
  • Kiyokuni Kawachiya, IBM Research - Tokyo
  • Ryo Matsumiya, Tokyo Institute of Technology
  • Toshio Endo, Tokyo Institute of Technology

POSTER: BASMAT: Bottleneck-Aware Sparse Matrix-Vector Multiplication Auto-Tuning on GPGPUs

  • Athena Elafrou, National Technical University of Athens
  • Georgios Goumas, National Technical University of Athens
  • Nectarios Koziris, National Technical University of Athens

POSTER: VEBO: A Vertex- and Edge-Balanced Ordering Heuristic to Load Balance Parallel Graph Processing

  • Jiawen Sun, Queen’s University Belfast
  • Hans Vandierendonck, Queen’s University Belfast
  • Dimitrios Nikolopoulos, Queen’s University Belfast

POSTER: GPU-based 3D Cryo-EM Reconstruction with Key-Value Streams

  • Kunpeng Wang, Tsinghua University
  • Shizhen Xu, Tsinghua University
  • Hongkun Yu, Tsinghua University
  • Haohuan Fu, Tsinghua University
  • Guangwen Yang, Tsinghua University

POSTER: T-thinker: A Task-Centric Distributed Framework For Compute-Intensive Divide-and-Conquer Algorithms

  • Da Yan, The University of Alabama at Birmingham
  • Guimu Guo, The University of Alabama at Birmingham
  • Md Mashiur Rahman Chowdhury, The University of Alabama at Birmingham
  • M. Tamer Özsu, University of Waterloo
  • John C.S. Lui, The Chinese University of Hong Kong
  • Weida Tan, The University of Alabama at Birmingham

POSTER: Blockchain Abstract Data Type

  • Emmanuelle Anceaume, CNRS, IRISA
  • Antonella Del Pozzo, CEA LIST
  • Romaric Ludinard, IMT Atlantique, IRISA
  • Maria Potop-Butucaru, Sorbonne Université, CNRS, Laboratoire d’Informatique de Paris 6, LIP6
  • Sara Tucci-Piergiovanni, CEA LIST

POSTER: Exploiting the Input Sparsity to Accelerate Deep Neural Networks

  • Xiao Dong, Institute of Computing Technology, Chinese Academy of Sciences & University of Chinese Academy of Sciences
  • Lei Liu, Institute of Computing Technology, Chinese Academy of Sciences
  • Guangli Li, Institute of Computing Technology, Chinese Academy of Sciences & University of Chinese Academy of Sciences
  • Jiansong Li, Institute of Computing Technology, Chinese Academy of Sciences & University of Chinese Academy of Sciences
  • Peng Zhao, Institute of Computing Technology, Chinese Academy of Sciences & University of Chinese Academy of Sciences
  • Xueying Wang, Institute of Computing Technology, Chinese Academy of Sciences & University of Chinese Academy of Sciences
  • Xiaobing Feng, Institute of Computing Technology, Chinese Academy of Sciences & University of Chinese Academy of Sciences

POSTER: LOFT: Lock-Free Transactional Data Structures

  • Avner Elizarov, Technion, Israel
  • Guy Golan-Gueta, VMware Research
  • Erez Petrank, Technion, Israel

POSTER: A Distributed Hypervisor for Resource Aggregation

  • Yubin Chen, Shanghai Jiao Tong University
  • Zhuocheng Ding, Shanghai Jiao Tong University
  • Jin Zhang, Shanghai Jiao Tong University
  • Yun Wang, Shanghai Jiao Tong University
  • Zhengwei Qi, Shanghai Jiao Tong University
  • Haibing Guan, Shanghai Jiao Tong University

POSTER: GPOP: A cache and memory-efficient framework for Graph Processing Over Partitions

  • Kartik Lakhotia, University of Southern California
  • Rajgopal Kannan, Army Research Lab-West
  • Sourav Pati, University of Southern California
  • Viktor Prasanna, University of Southern California

POSTER: Scheduling HPC Workloads on Heterogeneous-ISA Architectures

  • Mohamed L. Karaoui, Virginia Tech
  • Anthony Carno, Virginia Tech
  • Rob Lyerly, Virginia Tech
  • Sang-Hoon Kim, Virginia Tech
  • Pierre Olivier, Virginia Tech
  • Changwoo Min, Virginia Tech
  • Binoy Ravindran, Virginia Tech

POSTER: Optimizing GPU Programs by Register Demotion

  • Putt Sakdhnagool, NECTEC, Thailand
  • Amit Sabne, Google Brain
  • Rudolf Eigenmann, University of Delaware

POSTER: Lock-free Channels for Programming via Communicating Sequential Processes

  • Nikita Koval, IST Austria, JetBrains
  • Dan Alistarh, IST Austria
  • Roman Elizarov, JetBrains

POSTER: Making Concurrent Algorithms Detectable

  • Naama Ben-David, Carnegie Mellon University
  • Guy E. Blelloch, Carnegie Mellon University
  • Michal Friedman, Technion
  • Yuanhao Wei, Carnegie Mellon University

POSTER: A GPU Memory Efficient Speed-up Scheme for Training Ultra-deep Neural Networks

  • Jinrong Guo, Institute of Information Engineering, Chinese Academy of Sciences & School of Cyber Security, University of Chinese Academy of Sciences
  • Wantao Liu, Institute of Information Engineering, Chinese Academy of Sciences
  • Wang Wang, Institute of Information Engineering, Chinese Academy of Sciences & School of Cyber Security, University of Chinese Academy of Sciences
  • Qu Lu, Institute of Information Engineering, Chinese Academy of Sciences & School of Cyber Security, University of Chinese Academy of Sciences
  • Songlin Hu, Institute of Information Engineering, Chinese Academy of Sciences
  • Jizhong Han, Institute of Information Engineering, Chinese Academy of Sciences
  • Ruixuan Li, Institute of Information Engineering, Chinese Academy of Sciences

POSTER: Automated Multi-dimensional Elasticity for Streaming Runtimes

  • Xiang Ni, IBM T.J. Watson Research Center
  • Scott Schneider, IBM T.J. Watson Research Center
  • Raju Pavuluri, IBM T.J. Watson Research Center
  • Jonathan Kaus, IBM
  • Kun-Lung Wu, IBM T.J. Watson Research Center

POSTER: Optimizing Computation-Communication Overlap in Asynchronous Task-Based Programs

  • Emilio Castillo, Barcelona Supercomputing Center
  • Nikhil Jain, Lawrence Livermore National Laboratory
  • Marc Casas, Barcelona Supercomputing Center
  • Miquel Moreto, Technical University of Catalonia
  • Martin Schulz, Technical University of Munich
  • Ramon Beivide, University of Cantabria
  • Mateo Valero, Barcelona Supercomputing Center
  • Abhinav Bhatele, Lawrence Livermore National Laboratory

POSTER: Compiler-assisted Adaptive Program Scheduling in big.LITTLE Systems

  • Marcelo Novaes, UFMG
  • Vinicius Petrucci, UFBA
  • Abdoulaye Gamatié, CNRS
  • Fernando Magno Quintao Pereira, UFMG

POSTER: GOPipe: A Granularity-Oblivious Programming Framework for Pipelined Stencil Executions on GPU

  • Chanyoung Oh, University of Seoul
  • Zhen Zheng, Tsinghua University
  • Xipeng Shen, North Carolina State University
  • Jidong Zhai, Tsinghua University
  • Youngmin Yi, University of Seoul

POSTER: Toward Efficient Architecture-Independent Algorithms for Dynamic Programs

  • Mohammad Mahdi Javanmard, Stony Brook University, NY, USA
  • Pramod Ganapathi, Indian Institute of Technology, Indore, India
  • Rathish Das, Stony Brook University, NY, USA
  • Zafar Ahmad, Stony Brook University, NY, USA
  • Stephen Tschudi, Google, CA, USA
  • Rezaul Chowdhury, Stony Brook University, NY, USA

POSTER: Accelerating Distributed Stochastic Gradient Descent with Adaptive Periodic Parameter Averaging

  • Peng Jiang, The Ohio State University
  • Gagan Agrawal, The Ohio State University

POSTER: High-Throughput Image Alignment for Connectomics using Frugal Snap Judgments

  • Tim Kaler, MIT CSAIL
  • Brian Wheatman, MIT CSAIL
  • Sarah Wooders, MIT CSAIL

POSTER: CuLDA_CGS: Solving Large-scale LDA Problems on GPUs

  • Xiaolong Xie, Peking University
  • Yun Liang, Peking University
  • Xiuhong Li, Peking University
  • Wei Tan, Citadel LLC

POSTER: Managing Application Parallelism via Parallel Efficiency Regulation

  • Sharanyan Srikanthan, University of Rochester
  • Princeton Ferro, University of Rochester
  • Sandhya Dwarkadas, University of Rochester
  • Sayak Chakraborti, University of Rochester

POSTER: Optimizing Graph Processing on GPUs using Approximate Computing

  • Somesh Singh, Indian Institute of Technology Madras
  • Rupesh Nasre, Indian Institute of Technology Madras

Conference submission site

All papers must be prepared in ACM Conference Format using the acmart format (use the SIGPLAN proceedings template acmart-sigplanproc-template.tex). You may also want to consult the official ACM information on the Master Article Template and related tools.

PPoPP 2019 will employ a lightweight double-blind reviewing process. To facilitate this process, submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any references to their own related work are in the third person (e.g., not “We build on our previous work …” but rather “We build on the work of …”). The purpose of this process is to help the PC and external reviewers come to an initial judgment about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult. In particular, important background references should not be omitted or anonymized. In addition, authors should feel free to disseminate their ideas or draft versions of their paper as they normally would. For instance, authors may post drafts of their papers on the web or give talks on their research ideas. Authors with further questions on double-blind reviewing are encouraged to contact the Program Chair by email.

Submissions should be in PDF and printable on both US Letter and A4 paper. Papers may be resubmitted to the submission site multiple times up until the deadline, but the last version submitted before the deadline will be the version reviewed. Papers that exceed the length requirement, that deviate from the expected format, or that are submitted late will be rejected.

Deadlines expire at midnight anywhere on earth.

Here is the Artifact Evaluation committee for PPoPP’19:

  • Sundaram Ananthanarayanan (Stanford University, USA)
  • Chandranil Chakraborttii (UCSC, USA)
  • Younghyun Cho (Seoul National University, Korea)
  • Marco Cianfriglia (CNR, Italy)
  • Biagio Cosenza (TU Berlin, Germany)
  • Apurba Das (Iowa State University, USA)
  • Subhasis Das (Zoox Inc, USA)
  • Nikoli Dryden (UIUC, USA)
  • Salvatore Di Girolamo (ETH Zurich, Switzerland)
  • Troels Henriksen (DIKU, University of Copenhagen, Denmark)
  • Nikita Koval (IST, JetBrains, Austria)
  • Snehashish Kumar (Google, USA)
  • Ang Li (Pacific Northwest National Laboratory, USA)
  • Kan Liu (Tsinghua University, China)
  • Tobias Maier (KIT, Germany)
  • Karthik Murthy (Rice University, USA)
  • Aman Nougrahiya (IIT Madras, India)
  • Devangi Parikh (University of Texas at Austin, USA)
  • Jyothi Krishna V S (IIT Madras, India)
  • Reza Salkhordeh (Johannes Gutenberg-Universität Mainz, Germany)
  • Tao Song (Shanghai Jiao Tong University, China)
  • Bogdan-Alexandru Stoica (EPFL, Switzerland)
  • Pengfei Su (The College of William and Mary, USA)
  • Xulong Tang (Pennsylvania State University, USA)
  • Ilias Vougioukas (Arm, UK)
  • Ivan Walulya (Chalmers University of Technology, Sweden)
  • Shasha Wen (University of Washington, USA)
  • Sebastian Wild (University of Waterloo, Canada)
  • Adarsh Yoga (Rutgers University, USA)
  • Pantea Zardoshti (Lehigh University, USA)
  • Justs Zarins (Edinburgh University, UK)
  • Chi Zhang (University of Pittsburgh, USA)
  • Wei Zhang (Microsoft, USA)
  • Zhen Zheng (Tsinghua University, China)
  • Tingzhe Zhou (Lehigh University, USA)