Journal of Applied Science and Engineering

Published by Tamkang University Press


Helen K. Joy¹ and Manjunath R. Kounte²

¹School of Electronics and Communication Engineering, REVA University, Bengaluru, 560064, India
²Department of Electronics and Computer Engineering, School of ECE, REVA University, Bengaluru, 560064, India


 

Received: November 2, 2021
Accepted: May 5, 2022
Publication Date: June 3, 2022

Copyright © The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.


Download Citation: https://doi.org/10.6180/jase.202303_26(3).0002


ABSTRACT


Video compression and transmission is an ever-growing area of research, with continuous development in both the software and hardware domains, especially in the medical field. Lung ultrasound (LUS) is identified as one of the best inexpensive and harmless options for identifying various lung disorders, including COVID-19. This paper proposes a model to compress and transfer LUS samples with higher quality and less encoding time than existing models. Deep convolutional neural networks are exploited for this task, as they focus on content more than on individual pixels. Two deep convolutional neural network models, a P (prediction)-net and a B (bi-directional)-net, are proposed; they take the prediction and bi-directional frames of an existing group of pictures (GOP) as input and learn from them. The network is trained on a data set of lung ultrasound samples, and the trained network is validated by predicting the P and B frames of the GOP. The result is evaluated on 23 raw videos and compared with existing video compression techniques. This also shows that deep learning methods might be a worthwhile endeavor not only for COVID-19 but also for lung pathologies in general. The results show that replacing the block-based prediction algorithm of existing video compression with P-net and B-net outperforms the original codecs at lower bit rates.


Keywords: CNN, Motion estimation, COVID-19, P-frame, B-frame
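
The P-net/B-net idea described in the abstract can be illustrated with a minimal sketch. The class name `PNet`, the layer sizes, the choice of PyTorch, and the MSE training objective below are illustrative assumptions rather than the authors' published architecture; the sketch only shows the general idea of a CNN regressing an intermediate P/B frame directly from the reference frames of a GOP.

```python
# Minimal illustrative sketch (assumption: a PyTorch-style formulation).
# PNet, its layer sizes, and the MSE objective are hypothetical stand-ins
# for the P-net/B-net described in the abstract, not the exact architecture.
import torch
import torch.nn as nn

class PNet(nn.Module):
    """Predicts an intermediate frame from two reference frames of a GOP."""
    def __init__(self, channels=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2 * channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, ref_prev, ref_next):
        # Stack the two reference frames along the channel axis and
        # regress the target frame from their content.
        x = torch.cat([ref_prev, ref_next], dim=1)
        return self.decoder(self.encoder(x))

# One training step against a ground-truth frame from an LUS sample.
model = PNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

ref_prev = torch.rand(4, 1, 128, 128)   # previous reference frames (batch)
ref_next = torch.rand(4, 1, 128, 128)   # next reference frames (batch)
target   = torch.rand(4, 1, 128, 128)   # ground-truth P/B frames

optimizer.zero_grad()
loss = criterion(model(ref_prev, ref_next), target)
loss.backward()
optimizer.step()
```

In a full codec, such a learned prediction would stand in for the block-based motion-compensated prediction stage, so that only the residual between the predicted and actual frame needs to be coded, which is where bit-rate savings at low rates would come from.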

