Journal of Applied Science and Engineering

Published by Tamkang University Press


Yue Wang

Faculty of Teacher Education, Qilu Normal University, 2 Wenbo Road, Jinan 250200, Shandong, China


 

Received: March 7, 2026
Accepted: March 27, 2026
Publication Date: April 8, 2026

 Copyright The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.


DOI: https://doi.org/10.6180/jase.202609_32.003


With the rapid development of intelligent software engineering and computer science education, automatic programming-quality enhancement and quantitative programming-performance evaluation have become increasingly critical research directions. Traditional evaluation approaches rely mainly on manual scoring, static code-checking tools, and limited test-case execution; they are inefficient, subjective, and unable to capture deep semantic information and long-range logical dependencies in source code. Meanwhile, existing code-optimization methods focus on single tasks such as bug fixing or code summarization and lack a unified framework that supports both code enhancement and comprehensive performance assessment. To address these limitations, this paper proposes a novel end-to-end deep learning framework, CodeProNet, for jointly enhancing programming quality and evaluating programming performance. The model integrates multi-modal feature extraction, semantic-aware graph representation, multi-scale Transformer encoding, and contrastive-learning-based performance prediction. Specifically, we design a semantic-structure fused code representation that combines lexical sequence information, abstract syntax tree (AST) structure, and data-flow graph (DFG) semantics to fully encode the intrinsic characteristics of source code. A multi-scale Transformer encoder is introduced to capture both local syntactic patterns and global logical dependencies. Furthermore, a dual-task learning mechanism is constructed to simultaneously optimize code enhancement and performance evaluation. Extensive experiments are conducted on three representative datasets: CodeSearchNet, HumanEval, and a self-built, enterprise-level annotated programming dataset (Enterprise Programming Dataset, EPD).
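The fusion of lexical and structural views described above can be illustrated with a minimal sketch using Python's standard `tokenize` and `ast` modules. This is an illustrative assumption about what such a representation might look like, not CodeProNet's actual feature extractor:

```python
import ast
import io
import tokenize

def lexical_tokens(source: str) -> list[str]:
    """Lexical view: the raw token strings of the source code."""
    return [
        tok.string
        for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type in (tokenize.NAME, tokenize.OP, tokenize.NUMBER)
    ]

def ast_node_types(source: str) -> list[str]:
    """Structural view: the node-type sequence of the abstract syntax tree."""
    return [type(node).__name__ for node in ast.walk(ast.parse(source))]

source = "def add(a, b):\n    return a + b\n"
# A fused representation would combine both views before encoding.
features = {
    "lexical": lexical_tokens(source),
    "structural": ast_node_types(source),
}
print(features["lexical"])
print(features["structural"])
```

A real model would embed each view separately (e.g. token embeddings and graph embeddings over the AST/DFG) and fuse them before the Transformer encoder; the sketch only shows where the two views come from.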
Quantitative results demonstrate that CodeProNet achieves 92.3% accuracy in programming-performance grading, a 13.7% code error rate, and 85.7% Pass@1 in code functional correctness, significantly outperforming baseline models including CodeBERT, GraphCodeBERT, and CodeT5. Ablation studies verify the effectiveness of each core component. This work provides a unified, scalable, and interpretable solution for intelligent programming education, automated code review, and developer-capability evaluation.
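For context, the Pass@k metric reported above is commonly computed with the unbiased estimator from the Codex/HumanEval evaluation protocol (assumed here; the paper does not state its exact estimator): with n generated samples per problem of which c pass the tests, Pass@k = 1 - C(n-c, k)/C(n, k).

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k: probability that at least one of k samples
    drawn from n generated completions (c of them correct) passes."""
    if n - c < k:
        # Fewer incorrect samples than k: a correct one is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per problem, 3 correct, k = 1
print(round(pass_at_k(10, 3, 1), 2))  # prints 0.3
```

Pass@1 with a single sample per problem reduces to the plain fraction of problems solved on the first attempt.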


Keywords: Deep Neural Networks; Programming Performance; Source Code Representation; Code Enhancement; Contrastive Learning; Intelligent Software Engineering




    



 
