B-VGG16: Binary-Quantized Convolutional Neural Network for Image Classification

Nicolás Urbano Pintos, Héctor Lacomi, Mario Lavorato

Abstract


In this work, a binary-quantized convolutional neural network for image classification is trained and evaluated. Binarized neural networks reduce memory requirements and can be implemented with fewer hardware resources than networks that use real-valued (32-bit floating-point) variables. Networks of this type can be deployed on embedded systems such as FPGAs. Quantization-aware training was performed so that the network could compensate for the errors introduced by the reduced precision of its parameters. The model achieved an accuracy of 88% on the CIFAR-10 evaluation set.
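For readers who want to see the mechanism behind this kind of training, the sketch below illustrates binary quantization-aware training in PyTorch (the framework used in the work, per the references; the actual pipeline relies on the Brevitas library). It is a minimal illustration under our own naming (BinarizeSTE, BinaryConv2d), not the authors' implementation: weights are binarized to {-1, +1} in the forward pass, while the backward pass uses a straight-through estimator so that gradient updates reach the latent full-precision weights and the network can learn to compensate for the precision loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (STE).
    Forward quantizes to {-1, +1}; backward passes the incoming
    gradient through unchanged where |x| <= 1 and blocks it elsewhere."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped identity: sign() has zero gradient almost everywhere,
        # so the STE substitutes this surrogate gradient.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

class BinaryConv2d(nn.Conv2d):
    """Convolution whose weights are binarized on the fly; the
    optimizer still updates the latent full-precision weights."""

    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return F.conv2d(x, w_bin, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

# One VGG-style binarized block: conv -> batch norm -> binary activation.
block = nn.Sequential(
    BinaryConv2d(3, 64, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(64),
)
x = torch.randn(8, 3, 32, 32)    # a batch of CIFAR-10-sized images
y = BinarizeSTE.apply(block(x))  # binary activations for the next layer

At inference time the binarized weights and activations can be packed into bit vectors, reducing each multiply-accumulate to XNOR and popcount operations, which is what makes deployment on FPGAs and other constrained hardware attractive.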

Keywords


Convolutional Neural Networks; Classification; Quantization

Full text:

PDF HTML

References


X. Chen, J. Chen, X. Han, C. Zhao, D. Zhang, K. Zhu, and Y. Su, “A light-weighted CNN model for wafer structural defect detection,” IEEE Access, vol. 8, pp. 24006–24018, 2020, doi: 10.1109/ACCESS.2020.2970461.

H. Gao, B. Cheng, J. Wang, K. Li, J. Zhao, and D. Li, “Object classification using CNN-based fusion of vision and LiDAR in autonomous vehicle environment,” IEEE Transactions on Industrial Informatics, vol. 14, no. 9, pp. 4224–4231, 2018, doi: 10.1109/TII.2018.2822828.

F. Foroughi, Z. Chen, and J. Wang, “A CNN-based system for mobile robot navigation in indoor environments via visual localization with a small dataset,” World Electric Vehicle Journal, vol. 12, no. 3, 2021, doi: 10.3390/wevj12030134. [Online]. Available: https://www.mdpi.com/2032-6653/12/3/134

N. Aloysius and M. Geetha, “A review on deep convolutional neural networks,” Proceedings of the 2017 IEEE International Conference on Communication and Signal Processing, ICCSP 2017, vol. 2018-January, pp. 588–592, 2018, doi: 10.1109/ICCSP.2017.8286426.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 2014, doi: 10.48550/ARXIV.1409.1556. [Online]. Available: https://arxiv.org/abs/1409.1556

A. Shawahna, S. M. Sait, and A. El-Maleh, “FPGA-Based accelerators of deep learning networks for learning and classification: A review,” IEEE Access, vol. 7, pp. 7823–7859, 2019, doi: 10.1109/ACCESS.2018.2890150.

I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, “Quantized neural networks: Training neural networks with low precision weights and activations,” Journal of Machine Learning Research, vol. 18, pp. 1–30, 2018.

M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio, “Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1,” arXiv preprint arXiv:1602.02830, 2016. [Online]. Available: http://arxiv.org/abs/1602.02830

A. Pappalardo, “Xilinx/brevitas,” 2021, doi: 10.5281/zenodo.3333552.

M. Blott, T. B. Preußer, N. J. Fraser, G. Gambardella, K. O’Brien, Y. Umuroglu, M. Leeser, and K. Vissers, “FINN-R: An end-to-end deep-learning framework for fast exploration of quantized neural networks,” ACM Transactions on Reconfigurable Technology and Systems, vol. 11, no. 3, 2018, doi: 10.1145/3242897.

A. Krizhevsky, “Learning multiple layers of features from tiny images,” 2009.

N. Urbano Pintos, H. Lacomi, and M. Lavorato, “B-VGG16 - Entrenamiento en Google Colab,” 2022. [Online]. Available: https://colab.research.google.com/drive/1irvyEzHj7tAvIfV56bFHP50fCNDCHZsu?usp=sharing

NVIDIA, “TensorRT Open Source Software,” 2022. [Online]. Available: https://github.com/NVIDIA/TensorRT

Xilinx, “Vitis AI - Adaptable & Real-Time AI Inference Acceleration,” 2022. [Online]. Available: https://github.com/Xilinx/Vitis-AI

A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “PyTorch: An imperative style, high-performance deep learning library,” Advances in Neural Information Processing Systems, vol. 32, no. NeurIPS, 2019.

A. Gholami, S. Kim, Z. Dong, Z. Yao, M. W. Mahoney, and K. Keutzer, “A Survey of Quantization Methods for Efficient Neural Network Inference,” Low-Power Computer Vision, pp. 291–326, 2022, doi: 10.1201/9781003162810-13.

Y. Bengio, N. Léonard, and A. Courville, “Estimating or propagating gradients through stochastic neurons for conditional computation,” arXiv preprint arXiv:1308.3432, 2013.

M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, “XNOR-Net: ImageNet classification using binary convolutional neural networks,” in European Conference on Computer Vision. Springer, 2016, pp. 525–542.

J. Zhang, Y. Pan, T. Yao, H. Zhao, and T. Mei, “daBNN: A super fast inference framework for binary neural networks on ARM devices,” in Proceedings of the 27th ACM International Conference on Multimedia, 2019, pp. 2272–2275.

Y. Umuroglu, N. J. Fraser, G. Gambardella, M. Blott, P. Leong, M. Jahre, and K. Vissers, “FINN: A framework for fast, scalable binarized neural network inference,” FPGA 2017 - Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, pp. 65–74, 2017, doi: 10.1145/3020078.3021744.

M. Courbariaux, Y. Bengio, and J.-P. David, “BinaryConnect: Training deep neural networks with binary weights during propagations,” Advances in Neural Information Processing Systems, vol. 28, 2015.

L. Hou, Q. Yao, and J. T. Kwok, “Loss-aware binarization of deep networks,” arXiv preprint arXiv:1611.01600, 2016.

D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” 2014, doi: 10.48550/ARXIV.1412.6980. [Online]. Available: https://arxiv.org/abs/1412.6980

A. Mishra and D. Marr, “Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy,” arXiv preprint arXiv:1711.05852, 2017.

Y. Xu, X. Dong, Y. Li, and H. Su, “A main/subsidiary network framework for simplifying binary neural networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 7154–7162.

R. Gong, X. Liu, S. Jiang, T. Li, P. Hu, J. Lin, F. Yu, and J. Yan, “Differentiable soft quantization: Bridging full-precision and low-bit neural networks,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 4852–4861.

ONNX Runtime developers, “ONNX Runtime,” Nov. 2018. [Online]. Available: https://github.com/microsoft/onnxruntime

T. Chen, T. Moreau, Z. Jiang, L. Zheng, E. Yan, H. Shen, M. Cowan, L. Wang, Y. Hu, L. Ceze et al., “TVM: An automated end-to-end optimizing compiler for deep learning,” in 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), 2018, pp. 578–594.

Xilinx, “PyXIR,” Nov. 2019. [Online]. Available: https://github.com/Xilinx/pyxir

S. Liu and W. Deng, “Very deep convolutional neural network based image classification using small training sample size,” Proceedings - 3rd IAPR Asian Conference on Pattern Recognition, ACPR 2015, pp. 730–734, 2016, doi: 10.1109/ACPR.2015.7486599.

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, pp. 1929–1958, 2014.

S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” 2015, doi: 10.48550/ARXIV.1502.03167. [Online]. Available: https://arxiv.org/abs/1502.03167




DOI: https://doi.org/10.37537/rev.elektron.6.2.169.2022



Copyright (c) 2022 Nicolás Urbano Pintos

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Revista elektron,  ISSN-L 2525-0159
Facultad de Ingeniería. Universidad de Buenos Aires 
Paseo Colón 850, 3er piso
C1063ACV - Buenos Aires - Argentina
revista.elektron@fi.uba.ar
+54 (11) 528-50889