HyPar: Towards Hybrid Parallelism for Deep Learning Accelerator Array

Linghao Song (Duke University), Jiachen Mao (Duke University), + 3 authors, Yiran Chen (Duke University)
Abstract
With the rise of artificial intelligence in recent years, Deep Neural Networks (DNNs) have been widely used in many domains. To achieve high performance and energy efficiency, hardware acceleration of DNNs (especially of inference) has been intensively studied in both academia and industry. However, two challenges remain: large DNN models and datasets incur frequent off-chip memory accesses, and the training of DNNs is not well explored in recent accelerator designs. To truly provide high-throughput and energy-efficient acceleration for the training of deep and large models, we inevitably need to use multiple accelerators and exploit coarse-grain parallelism, beyond the fine-grain parallelism inside a layer considered in most existing architectures. This poses the key research question of finding the best organization of computation and dataflow among the accelerators. In this paper, we propose HyPar, a solution that determines layer-wise parallelism for deep neural network training on an array of DNN accelerators. HyPar partitions the feature map tensors (input and output), the kernel tensors, the gradient tensors, and the error tensors across the DNN accelerators. A partition constitutes the choice of parallelism for the weighted layers. The optimization target is to search for a partition that minimizes the total communication during the training of a complete DNN. To solve this problem, we propose a communication model that explains the source and amount of communication. Then, we use a hierarchical layer-wise dynamic programming method to search for the partition of each layer.
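To make the layer-wise dynamic programming idea concrete, below is a minimal Python sketch of such a search. The two candidate partitions and the cost functions (intra_cost, inter_cost, and the toy numbers in the example) are illustrative assumptions, not HyPar's actual communication model; the sketch only shows how a per-layer parallelism choice that minimizes total communication can be found layer by layer.

```python
# Minimal sketch: layer-wise dynamic programming over per-layer parallelism choices.
# The partition set and cost functions are placeholders, not HyPar's real model.
from typing import Callable, Dict, List, Tuple

# Two candidate partitions per weighted layer, loosely corresponding to
# splitting the batch (data parallelism) or the weights (model parallelism).
PARTITIONS = ("data_parallel", "model_parallel")


def search_partitions(
    num_layers: int,
    intra_cost: Callable[[int, str], float],
    inter_cost: Callable[[int, str, str], float],
) -> Tuple[float, List[str]]:
    """Return the minimum total communication and one optimal per-layer plan.

    intra_cost(i, p):     communication incurred inside layer i under partition p.
    inter_cost(i, p, q):  communication for re-distributing tensors when layer i
                          uses partition p and layer i+1 uses partition q.
    """
    # best[p] = (cost of the best plan for layers 0..i that ends with partition p, plan)
    best: Dict[str, Tuple[float, List[str]]] = {
        p: (intra_cost(0, p), [p]) for p in PARTITIONS
    }
    for i in range(1, num_layers):
        new_best: Dict[str, Tuple[float, List[str]]] = {}
        for q in PARTITIONS:
            candidates = [
                (cost + inter_cost(i - 1, p, q) + intra_cost(i, q), plan + [q])
                for p, (cost, plan) in best.items()
            ]
            new_best[q] = min(candidates, key=lambda c: c[0])
        best = new_best
    return min(best.values(), key=lambda c: c[0])


if __name__ == "__main__":
    # Toy costs: even layers prefer data parallelism, odd layers prefer model
    # parallelism, and switching partitions between adjacent layers costs 1 unit.
    intra = lambda i, p: 0.0 if (i % 2 == 0) == (p == "data_parallel") else 2.0
    inter = lambda i, p, q: 0.0 if p == q else 1.0
    total, plan = search_partitions(6, intra, inter)
    print(total, plan)
```

Because the best plan for a prefix of layers only depends on the partition chosen for the last layer in that prefix, the search is linear in the number of layers and in the (small) number of candidate partitions, rather than exponential in the number of layers.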
Cited By (5)
Gousia Habib, Shaima Qureshi (National Institute of Technology, Srinagar)
Feb 1, 2020, HPCA (High-Performance Computer Architecture)
Kyle Shiflett (Ohio University), Dylan Wright (Ohio University), ..., Ahmed Louri (George Washington University); 4 authors
Machine learning (ML) architectures such as Deep Neural Networks (DNNs) have achieved unprecedented accuracy on modern applications such as image classification and speech recognition. With power dissipation becoming a major concern in ML architectures, computer architects have focused on designing both energy-efficient hardware platforms as well as optimizing ML algorithms. To dramatically reduce power consumption and increase parallelism in neural network accelerators, disruptive technology su...
Oct 12, 2019, MICRO (International Symposium on Microarchitecture)
Maohua Zhu (University of California, Santa Barbara), Tao Zhang, ..., Yuan Xie (University of California, Santa Barbara); 4 authors
Deep neural networks have become the compelling solution for applications such as image classification, object detection, speech recognition, and machine translation. However, the great success comes at the cost of excessive computation due to the over-provisioned parameter space. To improve the computation efficiency of neural networks, many pruning techniques have been proposed to reduce the amount of multiply-accumulate (MAC) operations, which results in high sparsity in the networks. Unf...