PFGDF: Pruning Filter via Gaussian Distribution Feature for Deep Neural Networks Acceleration
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2020-06-23 , DOI: arxiv-2006.12963
Jianrong Xu, Chao Li, Bifeng Cui, Kang Yang, Yongjun Xu

Convolutional neural networks contain a large amount of redundant information, which slows their deployment on edge devices. To address this issue, we propose a novel deep learning model compression and acceleration method based on data distribution characteristics, namely Pruning Filter via Gaussian Distribution Feature (PFGDF), which finds a smaller interval of the Gaussian distribution of each convolutional layer's weights that still describes the original layer. Compared with previous state-of-the-art methods, PFGDF compresses the model by removing filters that are insignificant in distribution, without requiring the contribution or sensitivity information of each convolution filter. The pruning process is automated and always ensures that the compressed model can restore the performance of the original model. Notably, on CIFAR-10, PFGDF compresses the convolution filters of VGG-16 by 66.62%, reduces parameters by more than 90%, and reduces FLOPs by 70.27%. On ResNet-32, PFGDF reduces the convolution filters by 21.92%; parameters are reduced to 54.64%, and the FLOPs reduction exceeds 42%.
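The idea of pruning filters that are "insignificant in distribution" can be illustrated with a minimal sketch. This is not the paper's exact procedure: the per-filter statistic (mean absolute weight), the Gaussian-interval half-width `k`, and the function name are all illustrative assumptions.

```python
import numpy as np

def prune_filters_gaussian(filters, k=1.0):
    """Sketch of distribution-based filter pruning (assumptions, not PFGDF's
    exact algorithm): summarize each filter with one scalar statistic, fit a
    Gaussian over those statistics, and mark filters inside the interval
    [mu - k*sigma, mu + k*sigma] as insignificant in distribution."""
    # One scalar statistic per convolution filter (mean absolute weight);
    # filters has shape (out_channels, in_channels, kH, kW).
    stats = np.abs(filters).mean(axis=tuple(range(1, filters.ndim)))
    mu, sigma = stats.mean(), stats.std()
    # Keep only filters lying outside the smaller Gaussian interval;
    # the rest would be pruned, without per-filter sensitivity analysis.
    keep_mask = np.abs(stats - mu) > k * sigma
    return keep_mask

# Example on random weights standing in for one conv layer's 64 filters.
rng = np.random.default_rng(0)
filters = rng.normal(size=(64, 3, 3, 3))
mask = prune_filters_gaussian(filters, k=1.0)
print(mask.shape, int(mask.sum()))
```

A larger `k` widens the interval treated as insignificant, so more filters are pruned; in practice the interval would be chosen so the compressed model can still restore the original accuracy.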

Updated: 2020-06-24