Towards Convolutional Neural Network Acceleration and Compression Based on Simonk-Means
Field | Value |
---|---|
Content Provider | MDPI |
Author | Wei, Mingjie; Zhao, Yunping; Chen, Xiaowen; Li, Chen; Lu, Jianzhuang |
Copyright Year | 2022 |
Description | Convolutional Neural Networks (CNNs) are popular models that are widely used in image classification, target recognition, and other fields. Model compression is a common step in transplanting neural networks into embedded devices, and it is often applied in the retraining stage. However, it requires considerable retraining time on the weight data to recover the loss of precision. Unlike prior designs, we propose a novel model compression approach based on Simonk-means, which is specifically designed to support a hardware acceleration scheme. First, we propose Simonk-means, an extension of simple k-means, and use it to cluster the trained weights of the convolutional and fully connected layers. Second, we reduce the hardware resources consumed by data movement and storage through a data storage and indexing approach. Finally, we provide a hardware implementation of the compressed CNN accelerator. Our evaluations on several classification tasks show that our design achieves 5.27× compression and eliminates 74.3% of the multiply–accumulate (MAC) operations in AlexNet on the FASHION-MNIST dataset. |
Starting Page | 4298 |
e-ISSN | 1424-8220 |
DOI | 10.3390/s22114298 |
Journal | Sensors |
Issue Number | 11 |
Volume Number | 22 |
Language | English |
Publisher | MDPI |
Publisher Date | 2022-06-06 |
Access Restriction | Open |
Subject Keyword | Sensors; Artificial Intelligence; Convolutional Neural Networks; Deep Learning; K-means; Model Compression; Weight Quantization |
Content Type | Text |
Resource Type | Article |
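The description above outlines clustering-based weight quantization: trained weights are grouped into a small codebook of centroids, and each weight is stored as a short index into that codebook. Simonk-means itself is the authors' extension and is not reproduced here; the sketch below, a minimal plain k-means quantizer in NumPy (function and variable names are ours, not from the paper), only illustrates the underlying storage-and-index idea.

```python
import numpy as np

def kmeans_quantize(weights, k=16, iters=20, seed=0):
    """Cluster a weight tensor into k centroids with plain (1-D) k-means.

    Returns (centroids, index tensor). Storing small integer indices plus
    a k-entry codebook, instead of 32-bit floats, is the basic idea behind
    clustering-based model compression.
    """
    rng = np.random.default_rng(seed)
    flat = weights.ravel()
    # Initialize centroids from randomly sampled weight values.
    centroids = rng.choice(flat, size=k, replace=False)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            if np.any(idx == j):
                centroids[j] = flat[idx == j].mean()
    return centroids, idx.reshape(weights.shape)

# Toy example: quantize a random 64x64 "layer" to a 16-entry codebook.
weights = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float32)
centroids, idx = kmeans_quantize(weights, k=16)

# 32-bit floats are replaced by 4-bit indices plus the codebook itself.
bits_before = weights.size * 32
bits_after = weights.size * 4 + centroids.size * 32
print(f"compression ratio: {bits_before / bits_after:.2f}x")
```

With k = 16 the indices fit in 4 bits, so the storage ratio approaches 8× for large layers; the 5.27× figure reported in the abstract additionally reflects the paper's full storage and indexing scheme, which this sketch does not model.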