PPoPP 2019
Sat 16 - Wed 20 February 2019 Washington, DC, United States
Mon 18 Feb 2019, 09:35 - 10:00, at Salon 12/13 - Session 1: Big Data. Chair(s): Roberto Palmieri

Deep learning (DL) research yields accuracy and product improvements from both model architecture changes and scale: larger datasets and models, and more computation. For hardware design, it is difficult to predict DL model changes. However, recent prior work shows that as dataset sizes grow, DL model accuracy and model size grow predictably. This paper leverages that prior work to project the dataset and model size growth required to advance DL accuracy beyond human-level, to frontier targets defined by machine learning experts. Datasets will need to grow 33–971×, while models will need to grow 6.6–456× to achieve target accuracies.
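The "predictable growth" this projection relies on is typically expressed as a power law relating generalization error to training set size. As a rough illustration (this is a minimal sketch, not the paper's code; the fitted form error(D) ≈ α·D^(−β) is assumed, and every number below is an invented placeholder), here is how one might fit such a power law and project the dataset growth needed to reach a target error:

```python
import numpy as np

# Hypothetical (dataset_size, validation_error) measurements; illustrative
# placeholders only, not values from the paper.
sizes = np.array([1e6, 2e6, 4e6, 8e6, 16e6])
errors = np.array([0.20, 0.17, 0.145, 0.124, 0.106])

# Fit the power law in log-log space: log(err) = log(alpha) - beta * log(D).
slope, log_alpha = np.polyfit(np.log(sizes), np.log(errors), 1)
beta = -slope          # slope is negative; report beta as a positive exponent
alpha = np.exp(log_alpha)

# Project the dataset size needed to reach a target error:
# target = alpha * D**(-beta)  =>  D = (alpha / target) ** (1 / beta)
target_err = 0.05      # hypothetical "beyond human-level" target
required_size = (alpha / target_err) ** (1.0 / beta)
print(f"beta = {beta:.3f}, dataset growth = {required_size / sizes[-1]:.1f}x")
```

The same projection can be repeated for model size with its own fitted exponent, which is how ranges like 33–971× (data) versus 6.6–456× (model) can differ.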

We further characterize and project the computational requirements to train these applications at scale. Our characterization reveals an important segmentation of DL training challenges for recurrent neural networks (RNNs) that contrasts with prior studies of deep convolutional networks. RNNs will have comparatively moderate operational intensities and very large memory footprints. In contrast with emerging accelerator designs, the characteristics of large-scale RNN training suggest designs with significantly larger memory capacity and on-chip caches.
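To see why recurrent layers land at moderate operational intensity, consider the FLOPs-to-bytes ratio of a single small-batch recurrent matrix multiply. The sketch below is a back-of-envelope estimate, assuming fp32 storage and no reuse beyond one pass over the operands; the batch and hidden sizes are hypothetical:

```python
# Operational intensity (FLOPs per byte of off-chip traffic) for one GEMM.
def gemm_intensity(m, n, k, bytes_per_elem=4):
    """Intensity of C[m,n] = A[m,k] @ B[k,n], counting one read of A and B
    and one write of C."""
    flops = 2 * m * n * k                                # multiply-accumulates
    traffic = bytes_per_elem * (m * k + k * n + m * n)   # A, B reads; C write
    return flops / traffic

batch, hidden = 32, 2048
# One LSTM step's recurrent product is (batch x hidden) by (hidden x 4*hidden).
oi = gemm_intensity(batch, 4 * hidden, hidden)
print(f"~{oi:.0f} FLOPs/byte at batch={batch}")  # small batch => low intensity
```

At small batch sizes the weight matrix dominates traffic while reuse is limited to the batch dimension, which keeps intensity moderate and makes large memory capacity and on-chip caches attractive, consistent with the design direction the abstract argues for.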

Mon 18 Feb

Displayed time zone: Guadalajara, Mexico City, Monterrey

09:35 - 10:25
Session 1: Big Data (Main Conference) at Salon 12/13
Chair(s): Roberto Palmieri (Lehigh University)
09:35 (25m) Talk: Beyond Human-Level Accuracy: Computational Challenges in Deep Learning
Joel Hestness (Baidu Research), Newsha Ardalani (Baidu Research), Gregory Diamos (Baidu Research)
10:00 (25m) Talk: S-EnKF: Co-designing for Scalable Ensemble Kalman Filter
Junmin Xiao, Shijie Wang, Weiqiang Wan, Xuehai Hong (Institute of Computing Technology, Chinese Academy of Sciences), Guangming Tan (Chinese Academy of Sciences (CAS))