Fragmented Huffman-Based Compression Methodology for CNN Targeting Resource-Constrained Edge Devices
C. Pal, S. Pankaj, W. Akram, D. Biswas, G. Mattela
Published by Birkhäuser, 2022
Volume: 41
Issue: 7
Pages: 3957–3984
Abstract
In this paper, we introduce a fragmented Huffman compression methodology for compressing convolutional neural networks executing on edge devices. Current applications demand the deployment of deep networks on edge devices, since they must adhere to low latency, enhanced security, and long-term cost effectiveness. However, the primary bottleneck lies in the expanded memory footprint caused by the large size of neural network models. Software implementations of deep compression strategies do exist, in which Huffman compression is applied to the quantized weights to reduce the size of the deep neural network model. However, further compression of the memory footprint is possible from a hardware design perspective on edge devices, where our proposed methodology can complement the existing strategies. With this motivation, we propose a fragmented Huffman coding methodology that can be applied to the binary equivalent of the numeric weights of a neural network model stored in device memory. We also introduce static and dynamic storage methodologies for the device memory space that remains free even after the compressed file is stored; the dynamic storage methodology reduces area and energy consumption by approximately 38% compared with the static one. To the best of our knowledge, this is the first study in which the Huffman compression technique has been revisited, from a hardware design perspective, to compress binary files based on multiple bit-pattern sequences, achieving a maximum compression rate of 64%. A compressed hardware memory architecture and a decompression module were also designed and synthesized at 500 MHz, using a GF 40-nm low-power cell library with a nominal voltage of 1.1 V, achieving a 62% reduction in dynamic power consumption with a decompression time of about 63 microseconds (μs), without trading off accuracy.
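The core idea described in the abstract — applying Huffman coding to fixed-length fragments of the binary representation of stored weights, so that frequently recurring bit patterns receive short codes — can be illustrated with a minimal software sketch. The 4-bit fragment length and all helper names below are assumptions chosen for illustration; they are not the authors' hardware implementation.

```python
import heapq
from collections import Counter

def build_huffman_codes(freqs):
    """Build a prefix-code table from symbol frequencies via a min-heap."""
    # Heap entries: (frequency, tie_breaker, tree), where tree is either a
    # symbol string or a (left, right) pair of subtrees.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: a single symbol gets code "0"
        return {heap[0][2]: "0"}
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):      # internal node: recurse left/right
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                            # leaf: record the accumulated code
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

def compress_bits(bitstring, fragment_len=4):
    """Split a binary string into fixed-length fragments and Huffman-code them."""
    frags = [bitstring[i:i + fragment_len]
             for i in range(0, len(bitstring), fragment_len)]
    codes = build_huffman_codes(Counter(frags))
    encoded = "".join(codes[f] for f in frags)
    return encoded, codes
```

Weight data with many repeated bit patterns (e.g., runs of zeros after quantization and pruning) compresses well under this scheme, since the most frequent fragment is assigned the shortest code; the paper's hardware contribution lies in storing and decompressing such a stream efficiently in device memory.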
© 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
About the journal
Journal: Circuits, Systems, and Signal Processing
Publisher: Birkhäuser
ISSN: 0278-081X