Low Complexity Reconfigurable-Scalable Architecture Design Methodology for Deep Neural Network Inference Accelerator
A. Nimbekar, C.S. Vatti, Y.V.S. Dinesh, S. Singh, T. Gupta, R.R. Chandrapu,
Published in IEEE Computer Society
2022
Volume: 2022-September
   
Abstract
Convolutional Neural Networks (CNNs) are useful in a wide range of applications such as image recognition, automatic translation, and advertisement recommendation. Owing to their ever-increasing depth, state-of-the-art CNNs are computationally and memory intensive. Because the requirements of neural networks are continuously evolving, a reconfigurable architecture plays a major role in addressing this challenge. In this paper, we propose a low-complexity reconfigurable architecture for implementing Convolutional Neural Networks. The architecture can be configured to suit the requirements of the target network. Since the input image size depends on the dataset and therefore varies from network to network, the proposed architecture has the flexibility to process input images of any size. Experimental results show that the proposed CNN inference accelerator achieves a peak throughput of 0.5 TOPS with an area of 9.58 mm² while consuming 3.02 W in TSMC 40 nm technology. The area of the proposed architecture is 50% smaller than that of state-of-the-art solutions. An FPGA prototype achieves a throughput of 102.4 GOPS while consuming 5.057 W on a ZYNQ UltraScale+ MPSoC ZCU102 FPGA. © 2022 IEEE.
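The abstract's claim of handling any input image size is typically realized by tiling: the image is split into fixed-size tiles matched to the processing-element array, so the datapath never depends on the full image dimensions. The following is a minimal illustrative sketch of that idea in software (not the paper's actual hardware design); the tile size `TILE` and the function name are assumptions for illustration only.

```python
# Illustrative sketch only: a tiled valid-mode 2D convolution that accepts
# arbitrary input sizes, mimicking how a reconfigurable accelerator might map
# any image onto a fixed-size PE array. TILE is a hypothetical array dimension.
import numpy as np

TILE = 8  # hypothetical fixed tile size supported by the PE array


def conv2d_tiled(image, kernel):
    """Valid-mode 2D convolution computed tile by tile."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    # Iterate over output tiles; each tile reads a (th+kh-1) x (tw+kw-1)
    # input patch, so any image size maps onto the fixed-size datapath.
    for r in range(0, oh, TILE):
        for c in range(0, ow, TILE):
            th, tw = min(TILE, oh - r), min(TILE, ow - c)
            patch = image[r:r + th + kh - 1, c:c + tw + kw - 1]
            for i in range(th):
                for j in range(tw):
                    out[r + i, c + j] = np.sum(
                        patch[i:i + kh, j:j + kw] * kernel)
    return out
```

Edge tiles are simply smaller (`th`, `tw` shrink at the borders), which is one common way hardware schedulers absorb image sizes that are not multiples of the tile size.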
About the journal
Journal: International System on Chip Conference
Publisher: IEEE Computer Society
ISSN: 2164-1676