Beyond Human-Level Accuracy: Computational Challenges in Deep Learning
Deep learning (DL) research yields accuracy and product improvements from both model architecture changes and scale: larger datasets and models, and more computation. For hardware design, it is difficult to predict DL model changes. However, recent prior work shows that as dataset sizes grow, DL model accuracy and model size grow predictably. This paper leverages that prior work to project the dataset and model size growth required to advance DL accuracy beyond human-level, to frontier targets defined by machine learning experts. Datasets will need to grow $33$–$971\times$, while models will need to grow $6.6$–$456\times$, to achieve target accuracies.
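As a rough illustration of how such projections follow from power-law learning curves (the predictable scaling relationship the prior work reports), the sketch below solves a fitted curve for the dataset and model growth needed to hit a target error. All constants here (the error values and the exponents `beta_g`, `beta_p`) are made-up assumptions for demonstration, not the paper's fitted parameters.

```python
# Illustrative sketch, assuming power-law learning curves of the form
# eps(D) = alpha * D**(-beta_g) and model size s(D) = s0 * D**(beta_p).
# Constants below are hypothetical, not values from the paper.

def dataset_growth_factor(eps_current, eps_target, beta_g):
    """Factor by which the dataset must grow to move error from
    eps_current to eps_target under eps(D) proportional to D**(-beta_g)."""
    return (eps_current / eps_target) ** (1.0 / beta_g)

def model_growth_factor(data_growth, beta_p):
    """Corresponding model-size growth, assuming s(D) proportional to D**beta_p."""
    return data_growth ** beta_p

if __name__ == "__main__":
    # Hypothetical numbers: current error 10%, expert-defined target 5%,
    # learning-curve exponent 0.3, model-size exponent 0.7.
    d_growth = dataset_growth_factor(0.10, 0.05, beta_g=0.30)
    m_growth = model_growth_factor(d_growth, beta_p=0.70)
    print(f"dataset must grow ~{d_growth:.0f}x, model ~{m_growth:.0f}x")
```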
We further characterize and project the computational requirements to train these applications at scale. Our characterization reveals an important segmentation of DL training challenges for recurrent neural networks (RNNs) that contrasts with prior studies of deep convolutional networks. RNNs will have comparatively moderate operational intensities and very large memory footprint requirements. In contrast to emerging accelerator designs, large-scale RNN training characteristics suggest designs with significantly larger memory capacity and on-chip caches.
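To see why operational intensity tends to be only moderate for RNN training, a back-of-envelope sketch of one LSTM time step is given below. The layer sizes, byte counts, and the no-weight-caching assumption are illustrative choices, not the paper's measured characterization; the point is that reuse of the recurrent weights is limited by the per-device batch size.

```python
# Back-of-envelope sketch (illustrative, not the paper's exact model) of
# operational intensity for one LSTM time step: matrix-vector-like GEMMs
# whose data reuse is bounded by the batch dimension.

def lstm_step_intensity(hidden, batch, bytes_per_elem=2):
    """Approximate FLOPs per byte moved for one LSTM time step.

    The four gates multiply a (4*hidden x 2*hidden) weight matrix by a
    (2*hidden x batch) input/state matrix. Bytes count weights plus input
    and output activations, assuming weights are re-read from memory each
    step (no on-chip caching)."""
    flops = 2 * (4 * hidden) * (2 * hidden) * batch        # GEMM multiply-adds
    weight_bytes = (4 * hidden) * (2 * hidden) * bytes_per_elem
    act_bytes = (2 * hidden + 4 * hidden) * batch * bytes_per_elem
    return flops / (weight_bytes + act_bytes)

if __name__ == "__main__":
    # Hypothetical sizes: intensity grows roughly with batch size, so small
    # per-device batches keep it well below what compute-bound accelerators need.
    for b in (4, 32, 256):
        print(b, round(lstm_step_intensity(hidden=2048, batch=b), 1))
```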