Distracted driver behavior recognition using modified capsule networks
Abstract
Human activity recognition (HAR) is an increasingly active research field within the computer vision community. One HAR application is recognizing driver behavior to ensure safe travel. This study identifies driver behaviors using a capsule network (CapsNet) evaluated with leave-one-subject-out validation. The proposed method consists of two parts, namely an encoder and a decoder. The encoder modifies Sabour’s capsule network architecture by adding a convolution layer before the primary capsule layer. The proposed method is evaluated on a primary dataset with 10 classes and 300 images per class, split using both hold-out validation and leave-one-subject-out validation, and the resulting models are compared with a conventional CNN architecture. The proposed method achieves an accuracy of 97.83% on the dataset split using hold-out validation. However, the accuracy decreases by 53.11% when the dataset is split using leave-one-subject-out validation. This is because the proposed method extracts all features in the input image, including the attributes of each participant, so it does not generalize well under the user-independent (leave-one-subject-out) split; the resulting model tends to overfit.
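The encoder modification can be illustrated with a brief sketch. The following Python/Keras code is a minimal, hypothetical reconstruction based only on the abstract: a Sabour-style convolutional front end with one additional convolution layer inserted before the primary capsule layer. The filter counts, kernel sizes, and input resolution are assumptions rather than values reported in the paper, and the class-capsule routing and the decoder are omitted.

import tensorflow as tf
from tensorflow.keras import layers, models

PRIMARY_CAPS_DIM = 8    # assumed capsule dimension, as in Sabour et al.

def squash(s, axis=-1, eps=1e-7):
    # Squashing non-linearity: keeps capsule vector lengths in [0, 1).
    sq_norm = tf.reduce_sum(tf.square(s), axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * (s / tf.sqrt(sq_norm + eps))

def build_encoder(input_shape=(64, 64, 3)):
    inputs = layers.Input(shape=input_shape)
    # Standard CapsNet convolutional feature extractor.
    x = layers.Conv2D(256, 9, activation="relu")(inputs)
    # Extra convolution layer added before the primary capsules
    # (the modification described in the abstract).
    x = layers.Conv2D(256, 5, activation="relu")(x)
    # Primary capsule layer: convolution, then reshape into 8-D capsules.
    x = layers.Conv2D(32 * PRIMARY_CAPS_DIM, 9, strides=2)(x)
    x = layers.Reshape((-1, PRIMARY_CAPS_DIM))(x)
    primary_caps = layers.Lambda(squash)(x)
    # The 10-class capsule layer with dynamic routing and the reconstruction
    # decoder would follow here; both are omitted from this sketch.
    return models.Model(inputs, primary_caps, name="modified_capsnet_encoder")

encoder = build_encoder()
encoder.summary()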
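Likewise, the leave-one-subject-out split can be sketched with scikit-learn's LeaveOneGroupOut, assuming each image carries the ID of the participant it depicts. The array shapes and the number of participants below are illustrative placeholders, not the paper's actual data pipeline.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_images = 10 * 300                           # 10 classes x 300 images (abstract)
X = rng.random((n_images, 16))                # placeholder for image features
y = np.repeat(np.arange(10), 300)             # behavior class labels
subjects = rng.integers(0, 5, size=n_images)  # assumed participant ID per image

logo = LeaveOneGroupOut()
for fold, (train_idx, test_idx) in enumerate(logo.split(X, y, groups=subjects)):
    # All images of one participant are held out, so the model is evaluated on
    # a person it has never seen; this is the user-independent setting in which
    # the abstract reports the accuracy drop.
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    print(f"fold {fold}: train={len(train_idx)} test={len(test_idx)}")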
References
D. Wu, N. Sharma, and M. Blumenstein, "Recent advances in video-based human action recognition using deep learning: A review," in 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, pp. 2865–2872, 2017.
F. Demrozi, G. Pravadelli, A. Bihorac, and P. Rashidi, “Human Activity Recognition Using Inertial, Physiological and Environmental Sensors: A Comprehensive Survey,” IEEE Access, vol. 8, pp. 210816–210836, 2020.
M. M. Islam, S. Nooruddin, F. Karray, and G. Muhammad, “Human activity recognition using tools of convolutional neural networks: A state of the art review, data sets, challenges, and future prospects,” Comput Biol Med, vol. 149, p. 106060, 2022.
S. Zhang, Z. Wei, J. Nie, L. Huang, S. Wang, and Z. Li, “A Review on Human Activity Recognition Using Vision-Based Method,” Journal of Healthcare Engineering, vol. 2017. Hindawi Limited, 2017.
L. Guarda, J. E. Tapia, E. L. Droguett, and M. Ramos, “A novel Capsule Neural Network based model for drowsiness detection using electroencephalography signals,” Expert Syst Appl, vol. 201, p. 116977, 2022.
C. Jobanputra, J. Bavishi, and N. Doshi, “Human activity recognition: A survey,” in Procedia Computer Science, Elsevier B.V., 2019, pp. 698–703.
S. Indolia, A. K. Goswami, S. P. Mishra, and P. Asopa, “Conceptual Understanding of Convolutional Neural Network-A Deep Learning Approach,” in Procedia Computer Science, Elsevier B.V., 2018, pp. 679–688.
C. Yan, F. Coenen, and B. Zhang, “Driving posture recognition by convolutional neural networks,” IET Computer Vision, vol. 10, no. 2, pp. 103–114, Mar. 2016.
C. Zhang, R. Li, W. Kim, D. Yoon, and P. Patras, “Driver behavior recognition via interwoven deep convolutional neural nets with multi-stream inputs,” IEEE Access, vol. 8, pp. 191138–191151, 2020.
K. A. AlShalfan and M. Zakariah, “Detecting Driver Distraction Using Deep-Learning Approach,” Computers, Materials and Continua, vol. 68, no. 1, pp. 689–704, Mar. 2021.
X. Rao, F. Lin, Z. Chen, and J. Zhao, “Distracted driving recognition method based on deep convolutional neural network,” J Ambient Intell Humaniz Comput, vol. 12, no. 1, pp. 193–200, Jan. 2021.
V. Sarveshwaran, I. T. Joseph, M. M, and K. P, “Investigation on Human Activity Recognition using Deep Learning,” Procedia Comput Sci, vol. 204, pp. 73–80, 2022.
N. Akhtar and U. Ragavendran, “Interpretation of intelligence in CNN-pooling processes: a methodological survey,” Neural Computing and Applications, vol. 32, no. 3. Springer, pp. 879–898, Feb. 01, 2020.
R. Shi and L. Niu, “A brief survey on capsule network,” in Proceedings - 2020 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, WI-IAT 2020, Institute of Electrical and Electronics Engineers Inc., Dec. 2020, pp. 682–686.
M. Sun, Z. Song, X. Jiang, J. Pan, and Y. Pang, “Learning Pooling for Convolutional Neural Network,” Neurocomputing, vol. 224, pp. 96–104, Feb. 2017.
Z. Sun, G. Zhao, R. Scherer, W. Wei, and M. Woźniak, “Overview of Capsule Neural Networks,” Journal of Internet Technology, vol. 23, no. 1. Taiwan Academic Network Management Committee, pp. 33–44, 2022.
G. E. Hinton, A. Krizhevsky, and S. D. Wang, “Transforming Auto-Encoders,” in Artificial Neural Networks and Machine Learning – ICANN 2011, T. Honkela, W. Duch, M. Girolami, and S. Kaski, Eds., Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 44–51.
S. Sabour, N. Frosst, and G. E. Hinton, “Dynamic Routing Between Capsules,” Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 3859–3869, Oct. 2017.
J. Cai, S. Wang, and W. Guo, “Unsupervised embedded feature learning for deep clustering with stacked sparse auto-encoder,” Expert Syst Appl, vol. 186, p. 115729, 2021.
B. Mandal, S. Dubey, S. Ghosh, R. Sarkhel, and N. Das, “Handwritten Indic Character Recognition using Capsule Networks,” in 2018 IEEE Applied Signal Processing Conference (ASPCON), 2018, pp. 304–308.
A. Moudgil, S. Singh, V. Gautam, S. Rani, and S. H. Shah, “Handwritten devanagari manuscript characters recognition using capsnet,” International Journal of Cognitive Computing in Engineering, vol. 4, pp. 47–54, 2023.
M. L. Mekhalfi, M. B. Bejiga, D. Soresina, F. Melgani, and B. Demir, “Capsule networks for object detection in UAV imagery,” Remote Sens (Basel), vol. 11, no. 14, 2019.
F. Kınlı and F. Kıraç, “FashionCapsNet: Clothing Classification with Capsule Networks,” Bilişim Teknolojileri Dergisi, vol. 13, no. 1, pp. 87–96, Jan. 2020.
G. Madhu, A. Govardhan, B. S. Srinivas, S. A. Patel, B. Rohit, and B. L. Bharadwaj, “Capsule Networks for Malaria Parasite Classification: An Application Oriented Model,” in 2020 IEEE International Conference for Innovation in Technology, INOCON 2020, Institute of Electrical and Electronics Engineers Inc., Nov. 2020.
Y. Wu, L. Cen, S. Kan, and Y. Xie, “Multi-layer capsule network with joint dynamic routing for fire recognition,” Image Vis Comput, vol. 139, p. 104825, 2023.
I. Brishtel, S. Krauss, M. Chamseddine, J. R. Rambach, and D. Stricker, “Driving Activity Recognition Using UWB Radar and Deep Neural Networks,” Sensors, vol. 23, no. 2, Jan. 2023.
E. Juralewicz and U. Markowska-Kaczmar, “Capsule Network Versus Convolutional Neural Network in Image Classification: Comparative Analysis,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Science and Business Media Deutschland GmbH, 2021, pp. 17–30.
S. Choudhary, S. Saurav, R. Saini, and S. Singh, “Capsule Networks for Computer Vision Applications: A Comprehensive Review,” Applied Intelligence, vol. 53, no. 19, pp. 21799–21826, Jun. 2023.
E. Goceri, “Analysis of Capsule Networks for Image Classification,” in International Conference Scientific Computing, 2021.
M. K. Patrick, A. F. Adekoya, A. A. Mighty, and B. Y. Edward, “Capsule Networks – A survey,” Journal of King Saud University - Computer and Information Sciences, vol. 34, no. 1. King Saud bin Abdulaziz University, pp. 1295–1310, Jan. 01, 2022.
S. J. Pawan and J. Rajan, “Capsule networks for image classification: A review,” Neurocomputing, vol. 509. Elsevier B.V., pp. 102–120, Oct. 14, 2022.
F. Abdul Manaf and S. Singh, “Computer vision-based survey on Human Activity Recognition system, challenges and applications,” in 2021 3rd International Conference on Signal Processing and Communication, ICPSC 2021, Institute of Electrical and Electronics Engineers Inc., May 2021, pp. 110–114.
O. C. Ann and L. B. Theng, “Human activity recognition: A review,” in 2014 IEEE International Conference on Control System, Computing and Engineering (ICCSCE 2014), 2014, pp. 389–393.
D. R. Beddiar, B. Nini, M. Sabokrou, and A. Hadid, “Vision-based human activity recognition: a survey,” Multimed Tools Appl, vol. 79, no. 41–42, pp. 30509–30555, Nov. 2020.
H. Bragança, J. G. Colonna, H. A. B. F. Oliveira, and E. Souto, “How Validation Methodology Influences Human Activity Recognition Mobile Systems,” Sensors, vol. 22, no. 6, Mar. 2022.
D. Gholamiangonabadi, N. Kiselov, and K. Grolinger, “Deep Neural Networks for Human Activity Recognition with Wearable Sensors: Leave-One-Subject-Out Cross-Validation for Model Selection,” IEEE Access, vol. 8, pp. 133982–133994, 2020.