Based on Front-End and Back-End Platform and Image Processing Algorithm to Design People Counting Analysis System
Volume 5, Issue 2
Po-Hsiang Liao, De-Ciang Ye, Hung-Pang Lin, Hsuan-Ta Lin
Published online: 26 April 2019
Abstract
This paper presents a people counting method that combines traditional machine learning and deep learning algorithms. The proposed design provides occupancy information over a period of time, such as peak and off-peak hours, in specific public places such as transportation vehicles, hotel lobbies, and bus shelters. In the previous literature, traditional machine learning techniques such as the Support Vector Machine (SVM) were adopted for people counting; however, their pedestrian recognition rate is lower than that of deep learning methods. Hence, a Convolutional Neural Network (CNN) is introduced to overcome this drawback of lower recognition rate, but its computational load is very heavy while the system is running. Therefore, the proposed system is designed as a two-stage architecture that applies these two methods in the front end and the back end, respectively. The first stage, the front end, is mainly used for pedestrian recognition, from which the people count is obtained. The counting result is then classified into two levels, and the back-end stage only needs to perform pedestrian recognition for level-two cases. The experimental results show that pedestrian recognition accuracy is increased and computational complexity is reduced compared with traditional machine learning and deep learning, respectively. The proposed front-end design achieved a detection accuracy of 84.56%, while the proposed back-end architecture achieved a detection accuracy of 93.59%. In addition, the proposed method improves average execution time by 29% compared to related designs. This system could be deployed to save management costs.
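The two-stage flow described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the front end is approximated by OpenCV's standard HOG + linear-SVM pedestrian detector, the level-two cut-off LEVEL_TWO_THRESHOLD is a hypothetical parameter, and run_cnn_detector is a placeholder hook where a CNN-based detector (e.g., YOLO or Faster R-CNN) would be plugged in.

# Sketch of the two-stage people counting pipeline (assumptions noted above).
import cv2

LEVEL_TWO_THRESHOLD = 5  # hypothetical cut-off separating level one from level two

# Front end: cheap HOG features classified by a pre-trained linear SVM.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def front_end_count(frame):
    """Stage 1: fast HOG/SVM detection used for the first-pass people count."""
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return len(rects)

def run_cnn_detector(frame):
    """Placeholder for the back-end CNN pedestrian detector (stage 2)."""
    raise NotImplementedError("plug in a CNN-based detector here")

def count_people(frame):
    """Only 'level two' (crowded/uncertain) frames are re-checked by the heavy CNN."""
    count = front_end_count(frame)
    if count >= LEVEL_TWO_THRESHOLD:      # level two: forward to the back end
        count = run_cnn_detector(frame)   # refine with the slower, more accurate CNN
    return count                          # level one: keep the cheap front-end estimate

if __name__ == "__main__":
    frame = cv2.imread("sample_frame.jpg")  # hypothetical test image
    if frame is not None:
        print("estimated people count:", count_people(frame))

Under this split, most frames terminate at the inexpensive front end, and the costly CNN is invoked only for the level-two subset, which is the mechanism the abstract credits for the reported reduction in execution time.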
To Cite this article
P.-H. Liao, D.-C. Ye, H.-P. Lin, and H.-T. Lin, "Based on front-end and back-end platform and image processing algorithm to design people counting analysis system," International Journal of Technology and Engineering Studies, vol. 5, no. 2, pp. 40-46, 2019. doi: https://dx.doi.org/10.20469/ijtes.5.40002-2