Journal of Applied Science and Engineering

Published by Tamkang University Press


Yongxin Zhou

Zhengzhou University of Science and Technology, Zhengzhou, Henan 450064, China

Received: April 3, 2025
Accepted: May 14, 2025
Publication Date: May 30, 2025

Copyright © The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.


Download Citation: https://doi.org/10.6180/jase.202602_29(2).0006


Carbon neutrality, a fundamental goal of global sustainability, is constrained by the carbon lock-in effect, which slows low-carbon transitions. Ecological compensation mechanisms offer a promising remedy by optimizing land use and enhancing carbon sequestration capacity. However, existing approaches often oversimplify ecological processes, restrict decision-making to discrete actions, and lack robustness against environmental uncertainty. To address these limitations, this paper proposes a reinforcement learning framework based on the deep deterministic policy gradient algorithm (RL-MEC-CL) that represents the dynamic interactions among carbon emissions, carbon sequestration, and land use more precisely. Specifically, RL-MEC-CL operates in a continuous action space and leverages an actor-critic architecture with experience replay and target networks to adaptively optimize compensation strategies, balancing carbon-reduction benefits, sequestration enhancement, and policy costs. Experimental results demonstrate that RL-MEC-CL not only improves the efficiency of ecological compensation strategies but also exhibits strong robustness and adaptability, offering valuable insights for optimizing ecological governance pathways.
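To make the framework's mechanics concrete, the sketch below illustrates a DDPG-style actor-critic update with experience replay and soft-updated target networks, the three ingredients the abstract attributes to RL-MEC-CL. It is a minimal sketch assuming a PyTorch implementation, not the authors' code: the state layout, action dimensions, network sizes, and hyperparameters (STATE_DIM, ACTION_DIM, GAMMA, TAU) are hypothetical placeholders, and episode-termination handling is omitted for brevity.

```python
# Minimal sketch of a DDPG-style update with experience replay and soft
# target updates. The state/action layout, network sizes, and
# hyperparameters below are hypothetical placeholders for illustration.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 6    # hypothetical: e.g., land-use shares plus carbon stock/flux indicators
ACTION_DIM = 3   # hypothetical: continuous compensation intensities per land-use type
GAMMA, TAU = 0.99, 0.005  # discount factor; Polyak coefficient for target updates

def mlp(in_dim, out_dim, out_act=None):
    layers = [nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

# Actor maps a state to a continuous action in [-1, 1]; the critic scores
# (state, action) pairs. Slow-moving target copies stabilize bootstrapping.
actor = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())
critic = mlp(STATE_DIM + ACTION_DIM, 1)
actor_tgt = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())
critic_tgt = mlp(STATE_DIM + ACTION_DIM, 1)
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

# Experience replay buffer of (state, action, reward, next_state) tensors.
replay = deque(maxlen=100_000)

def update(batch_size=64):
    """One DDPG step: TD-regress the critic, then ascend the critic w.r.t. the actor."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = map(torch.stack, zip(*random.sample(replay, batch_size)))
    with torch.no_grad():  # bootstrapped TD target from the target networks
        q_next = critic_tgt(torch.cat([s2, actor_tgt(s2)], dim=1))
        target = r.unsqueeze(1) + GAMMA * q_next
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Deterministic policy gradient: adjust the actor to maximize Q(s, actor(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):  # soft (Polyak) update
        for p, p_t in zip(net.parameters(), tgt.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)
```

In the paper's setting, the buffer would be filled with transitions from a land-use simulation, e.g. replay.append((s, a, torch.tensor(reward, dtype=torch.float32), s2)), where the scalar reward combines carbon-reduction benefits, sequestration gains, and policy costs; how those terms are weighted is specific to the paper and not reproduced here.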


Keywords: ecological compensation; reinforcement learning; carbon lock-in

