Rui Wang
School of Culture and Media, Yantai University of Science and Technology, Penglai, Shandong, 265600, China
Received: October 25, 2025 Accepted: December 12, 2025 Publication Date: March 27, 2026
Copyright The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.
The rapid development of digital media art (DMA) has created demand for intelligent systems that can understand and respond to users through multiple sensory channels. Current systems, however, frequently rely on single-modal inputs, which limits their ability to infer user states and adapt the experience accordingly. This research develops an intelligent DMA system architecture that fuses multimodal perception to enhance user experience dynamically. A novel Dolphin Swarm Optimized Deep Neural Network (DSO-DeepNet) is proposed, in which DSO dynamically tunes the DeepNet's hyperparameters to maximize multimodal emotion recognition performance, improving user experience adaptation in DMA systems. The system combines physiological inputs (heart rate variability (HRV) and electrodermal activity (EDA)), visual data (facial expressions), and audio cues (speech tone). A custom dataset was created from 250 individuals interacting with digital artworks, with synchronized biometric, visual, and audio recordings. Pre-processing includes signal denoising with band-pass and Wiener filters. Mel Frequency Cepstral Coefficients (MFCCs) and ResNet50 are used for feature extraction, and Deep Canonical Correlation Analysis (DCCA) is employed to align and fuse information across modalities. Experimental results show that the fused multimodal model outperformed the single-modal baselines in emotion recognition accuracy (0.925). User experience evaluations showed significant improvements in emotional involvement, interaction satisfaction, and perceived system responsiveness. Overall, the proposed architecture effectively improves the sensitivity and adaptability of DMA systems through multimodal perception fusion, pointing to a promising route toward more immersive and personalized art experiences.
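The abstract mentions denoising the physiological signals with band-pass filters before feature extraction. As a minimal sketch of that step, the snippet below applies a zero-phase Butterworth band-pass filter to a synthetic 1-D signal; the sampling rate, cutoff frequencies, and filter order are illustrative assumptions, not values taken from the paper.

```python
# Illustrative band-pass denoising of a physiological signal (e.g. EDA).
# All numeric parameters here are assumptions for demonstration only.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_denoise(signal, fs, low_hz, high_hz, order=4):
    """Zero-phase band-pass filter a 1-D signal sampled at fs Hz."""
    nyquist = fs / 2.0
    b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype="band")
    # filtfilt runs the filter forward and backward, avoiding phase distortion
    return filtfilt(b, a, signal)

if __name__ == "__main__":
    fs = 100.0                       # assumed sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    # 1 Hz target component buried under 25 Hz noise
    raw = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 25.0 * t)
    clean = bandpass_denoise(raw, fs, low_hz=0.5, high_hz=5.0)
    print(clean.shape)
```

The same pattern would apply per-channel to HRV and EDA streams before MFCC or ResNet50 feature extraction; Wiener filtering (also named in the abstract) is available separately as `scipy.signal.wiener`.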
Keywords: Digital media art (DMA) system, user experience, emotion recognition, Dolphin Swarm Optimized Deep Neural Network (DSO-DeepNet).
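The core of DSO-DeepNet is using Dolphin Swarm Optimization to tune the network's hyperparameters. The abstract does not give the DSO update rules, so the sketch below shows only the general shape of such a search: a best-guided population search minimizing a toy stand-in for validation loss over a 2-D hyperparameter box. The objective, bounds, and update scheme are illustrative assumptions, not the authors' algorithm.

```python
# Generic population-based hyperparameter search (stand-in for DSO).
# The objective and update rule are hypothetical; a real pipeline would
# evaluate the DeepNet's validation loss at each candidate setting.
import numpy as np

def swarm_search(objective, bounds, n_agents=20, n_iters=50, seed=0):
    """Minimize `objective` over the box `bounds` with a best-guided swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(n_agents, len(lo)))
    best = pop[np.argmin([objective(p) for p in pop])].copy()
    for _ in range(n_iters):
        # pull each agent toward the current best, plus random exploration
        pop += 0.5 * (best - pop) + 0.1 * (hi - lo) * rng.standard_normal(pop.shape)
        pop = np.clip(pop, lo, hi)
        scores = np.array([objective(p) for p in pop])
        if scores.min() < objective(best):
            best = pop[scores.argmin()].copy()
    return best, objective(best)

if __name__ == "__main__":
    # toy "validation loss" over (log10 learning rate, dropout) -- hypothetical
    toy_loss = lambda p: (p[0] + 3.0) ** 2 + (p[1] - 0.5) ** 2
    best, loss = swarm_search(toy_loss, bounds=[(-6.0, -1.0), (0.0, 0.9)])
    print(best, loss)
```

In the paper's setting, `objective` would train or fine-tune the DeepNet at each candidate hyperparameter vector and return its multimodal emotion-recognition error, with DSO's dolphin-specific search phases replacing the simple update shown here.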