- [1] S. Edriss, C. Romagnoli, L. Caprioli, A. Zanela, E. Panichi, F. Campoli, E. Padua, G. Annino, and V. Bonaiuto, (2024) “The role of emergent technologies in the dynamic and kinematic assessment of human movement in sport and clinical applications” Applied Sciences 14(3): 1012. DOI: 10.3390/app14031012.
- [2] R. Leib, I. S. Howard, M. Millard, and D. W. Franklin, (2024) “Behavioral motor performance” Comprehensive Physiology 14(1): 5179–5224. DOI: 10.1002/j.2040-4603.2024.tb00286.x.
- [3] A. Jisi, S. Yin, et al., (2021) “A new feature fusion network for student behavior recognition in education” Journal of Applied Science and Engineering 24(2): 133–140. DOI: 10.6180/jase.202104_24(2).0002.
- [4] J. Luo, W. Wang, and H. Qi, (2014) “Spatio-temporal feature extraction and representation for RGB-D human action recognition” Pattern Recognition Letters 50: 139–148. DOI: 10.1016/j.patrec.2014.03.024.
- [5] M. Lovanshi and V. Tiwari, (2024) “Human skeleton pose and spatio-temporal feature-based activity recognition using ST-GCN” Multimedia Tools and Applications 83(5): 12705–12730. DOI: 10.1007/s11042-023-16001-9.
- [6] Z. Wang, H. Lu, J. Jin, and K. Hu, (2022) “Human action recognition based on improved two-stream convolution network” Applied Sciences 12(12): 5784. DOI: 10.3390/app12125784.
- [7] M. Xiao, (2024) “The best angle correction of basketball shooting based on the fusion of time series features and dual CNN” Egyptian Informatics Journal 28: 100579. DOI: 10.1016/j.eij.2024.100579.
- [8] A.-A. Liu, N. Xu, W.-Z. Nie, Y.-T. Su, and Y.-D. Zhang, (2018) “Multi-domain and multi-task learning for human action recognition” IEEE Transactions on Image Processing 28(2): 853–867. DOI: 10.1109/TIP.2018.2872879.
- [9] A. Laghari, H. He, A. Khan, R. Laghari, S. Yin, and J. Wang, (2022) “Crowdsourcing platform for QoE evaluation for cloud multimedia services” Computer Science and Information Systems 19(3): 1305. DOI: 10.2298/CSIS220322038L.
- [10] E. Aksan, M. Kaufmann, P. Cao, and O. Hilliges. “A spatio-temporal transformer for 3D human motion prediction”. In: 2021 International Conference on 3D Vision (3DV). IEEE. 2021, 565–574. DOI: 10.1109/3DV53792.2021.00066.
- [11] K. Ding, A. J. Liang, B. Perozzi, T. Chen, R. Wang, L. Hong, E. H. Chi, H. Liu, and D. Z. Cheng. “HyperFormer: Learning expressive sparse feature representations via hypergraph transformer”. In: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2023, 2062–2066. DOI: 10.1145/3539618.3591999.
- [12] Y. Pang, Q. Ke, H. Rahmani, J. Bailey, and J. Liu. “IGFormer: Interaction graph transformer for skeleton-based human interaction recognition”. In: European Conference on Computer Vision. Springer. 2022, 605–622. DOI: 10.1007/978-3-031-19806-9_35.
- [13] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong, (2002) “TAG: A tiny aggregation service for ad hoc sensor networks” ACM SIGOPS Operating Systems Review 36(SI): 131–146. DOI: 10.1145/844128.844142.
- [14] H. Rao, S. Wang, X. Hu, M. Tan, Y. Guo, J. Cheng, X. Liu, and B. Hu, (2021) “A self-supervised gait encoding approach with locality-awareness for 3D skeleton-based person re-identification” IEEE Transactions on Pattern Analysis and Machine Intelligence 44(10): 6649–6666. DOI: 10.1109/TPAMI.2021.3092833.
- [15] J. Jiang, J. Chen, and Y. Guo. “A dual-masked auto-encoder for robust motion capture with spatial-temporal skeletal token completion”. In: Proceedings of the 30th ACM International Conference on Multimedia. 2022, 5123–5131. DOI: 10.1145/3503161.3547796.
- [16] H.-G. Chi, M. H. Ha, S. Chi, S. W. Lee, Q. Huang, and K. Ramani. “InfoGCN: Representation learning for human skeleton-based action recognition”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022, 20186–20196. DOI: 10.1109/CVPR52688.2022.01955.
- [17] S. Guan, H. Lu, L. Zhu, and G. Fang, (2022) “AFE-CNN: 3D skeleton-based action recognition with action feature enhancement” Neurocomputing 514: 256–267. DOI: 10.1016/j.neucom.2022.10.016.
- [18] J. Liu, A. Shahroudy, M. Perez, G. Wang, L.-Y. Duan, and A. C. Kot, (2019) “NTU RGB+D 120: A large-scale benchmark for 3D human activity understanding” IEEE Transactions on Pattern Analysis and Machine Intelligence 42(10): 2684–2701. DOI: 10.1109/TPAMI.2019.2916873.
- [19] X. Li, X. Zhang, L. Zhang, X. Chen, and P. Zhou, (2023) “A transformer-based multi-task learning framework for myoelectric pattern recognition supporting muscle force estimation” IEEE Transactions on Neural Systems and Rehabilitation Engineering 31: 3255–3264. DOI: 10.1109/TNSRE.2023.3298797.
- [20] W. Xin, R. Liu, Y. Liu, Y. Chen, W. Yu, and Q. Miao, (2023) “Transformer for skeleton-based action recognition: A review of recent advances” Neurocomputing 537: 164–186. DOI: 10.1016/j.neucom.2023.03.001.
- [21] C. Shi and S. Liu, (2024) “Human action recognition with transformer based on convolutional features” Intelligent Decision Technologies 18(2): 881–896. DOI: 10.3233/IDT-240159.
- [22] Y. Xing, Z. Hu, X. Mo, P. Hang, S. Li, Y. Liu, Y. Zhao, and C. Lv, (2024) “Driver steering behaviour modelling based on neuromuscular dynamics and multi-task time series transformer” Automotive Innovation 7(1): 45–58. DOI: 10.1007/s42154-023-00272-x.
- [23] C. Fan, S. Lin, B. Cheng, D. Xu, K. Wang, Y. Peng, and S. Kwong, (2024) “EEG-TransMTL: A transformer-based multi-task learning network for thermal comfort evaluation of railway passenger from EEG” Information Sciences 657: 119908. DOI: 10.1016/j.ins.2023.119908.
- [24] W. Li, N. Zhou, and X. Qu. “Enhancing eye tracking performance through multi-task learning transformer”. In: International Conference on Human-Computer Interaction. Springer. 2024, 31–46. DOI: 10.1007/978-3-031-61572-6_3.