Journal of Applied Science and Engineering

Published by Tamkang University Press

Impact Factor: 1.30
CiteScore: 2.10

Chih-Hsien Hsia1, Wei-Hsuan Chang2 and Jen-Shiun Chiang2

1Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, R.O.C.
2Department of Electrical Engineering, Tamkang University, Tamsui, Taiwan 251, R.O.C.


 

Received: April 25, 2011
Accepted: September 26, 2011
Publication Date: June 1, 2012

DOI: https://doi.org/10.6180/jase.2012.15.2.12


ABSTRACT


Research on autonomous robots has become one of the most important challenges in recent years, and among the many robotics research topics, the humanoid robot soccer competition is especially popular. Robot soccer players rely heavily on their vision systems when operating in unpredictable and dynamic environments. This work proposes a simple, real-time object recognition system that complies with the RoboCup Soccer Humanoid League rules for the 2009 competition. The vision system helps the robot collect information about its surroundings, which serves as the input data for robot localization, tactics, obstacle avoidance, and other functions. The proposed approach, the adaptive resolution method (ARM), reduces computational complexity by recognizing the critical objects on the contest field through object features that can be obtained easily. The experimental results indicate that the proposed approach achieves accurate recognition while meeting real-time requirements.
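
The abstract describes ARM only at a high level. The following Python/NumPy sketch illustrates one way a coarse-to-fine, adaptive-resolution search can cut computation: a heavily downsampled frame is scanned first, and only a promising candidate window is re-examined at full resolution. The colour thresholds, the downsampling step, and the minimum blob size below are illustrative assumptions, and the simple colour test stands in for whatever object features the authors actually use; this is a sketch of the general idea, not the paper's implementation.

# Minimal coarse-to-fine colour-blob search illustrating the adaptive-
# resolution idea. All numeric parameters are illustrative assumptions,
# not values taken from the paper.

import numpy as np


def colour_mask(rgb, lo, hi):
    """Boolean mask of pixels whose RGB values fall inside [lo, hi]."""
    return np.all((rgb >= lo) & (rgb <= hi), axis=-1)


def detect_coarse_to_fine(frame, lo, hi, step=8, min_hits=4):
    """Return a full-resolution bounding box (y0, y1, x0, x1), or None.

    Coarse pass: subsample the frame by `step` and threshold it.
    Fine pass: if enough coarse pixels match, threshold only the
    corresponding full-resolution window and tighten the box.
    """
    coarse = colour_mask(frame[::step, ::step], lo, hi)
    ys, xs = np.nonzero(coarse)
    if ys.size < min_hits:
        return None  # nothing promising at low resolution; skip the fine pass
    # Map the coarse hits back to full resolution with a one-cell margin.
    y0 = max(int(ys.min()) - 1, 0) * step
    y1 = min((int(ys.max()) + 2) * step, frame.shape[0])
    x0 = max(int(xs.min()) - 1, 0) * step
    x1 = min((int(xs.max()) + 2) * step, frame.shape[1])
    fine = colour_mask(frame[y0:y1, x0:x1], lo, hi)
    fy, fx = np.nonzero(fine)
    if fy.size == 0:
        return None
    return (y0 + int(fy.min()), y0 + int(fy.max()) + 1,
            x0 + int(fx.min()), x0 + int(fx.max()) + 1)


if __name__ == "__main__":
    # Synthetic 480 x 640 frame with an orange patch standing in for the ball.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    frame[200:240, 300:340] = (230, 120, 30)
    print(detect_coarse_to_fine(frame, lo=(200, 80, 0), hi=(255, 160, 80)))
    # Prints (200, 240, 300, 340) for this synthetic frame.

The coarse pass touches only about 1/step^2 of the pixels, so frames that contain no candidate of the target colour are rejected cheaply, and only frames with a plausible candidate pay for full-resolution processing; this is the kind of complexity reduction the abstract attributes to ARM.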


Keywords: Robot, RoboCup, Adaptive Resolution Method, Object Recognition, Real-Time

