EMBEDDED LEARNING ROBOT USING FUZZY Q-LEARNING FOR OBSTACLE AVOIDANCE BEHAVIOR
Khairul Anam1, Prihastono2,4, Handy Wicaksono3,4, Rusdhianto Effendi4, Indra Adji S5, Son Kuswadi5, Achmad Jazidie4, Mitsuji Sampei6
1 Department of Electrical Engineering, University of Jember, Jember, Indonesia
(Tel: +62-0331-484977; E-mail: kh.anam.sk@gmail.com)
2 Department of Electrical Engineering, University of Bhayangkara, Surabaya, Indonesia
(Tel: +62-031-8285602; E-mail: prihtn@yahoo.com)
3 Department of Electrical Engineering, Petra Christian University, Surabaya, Indonesia
(Tel: +62-031-8439040; E-mail: handy@petra.ac.id)
4 Department of Electrical Engineering, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
(Tel: +62-031-599-4251; E-mail: ditto@ee.its.ac.id, jazidie@ee.its.ac.id)
5 Electronics Engineering Polytechnic Institute of Surabaya, Surabaya, Indonesia
(Tel: +62-031-5947280; E-mail: indra@eepis-its.edu, sonk@eepis-its.edu)
6 Department of Mechanical and Control Engineering, Tokyo Institute of Technology, Tokyo, Japan
(Tel: +81-3-5734-2552; E-mail: sampei@ctrl.titech.ac.jp)
Abstract: Fuzzy Q-learning is an extension of the Q-learning algorithm that uses a fuzzy inference system to enable Q-learning to handle continuous states and actions. This learning method has been implemented in various robot learning applications, such as obstacle avoidance and target searching. However, most of these have not been realized on an embedded robot. This paper presents an implementation of fuzzy Q-learning for obstacle avoidance navigation on an embedded mobile robot. The experimental results demonstrate that fuzzy Q-learning enables the robot to learn the right policy, i.e. to avoid obstacles.
Keywords: fuzzy Q-learning, obstacle avoidance
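The fuzzy Q-learning scheme summarized in the abstract can be illustrated with a minimal sketch: sensor readings are fuzzified into rule firing strengths, each fuzzy rule keeps its own q-values over a discrete action set, and the fired rules' chosen actions are blended into a continuous command. All membership functions, actions, and hyperparameters below are illustrative assumptions, not the paper's actual design.

```python
import random

# Triangular memberships for a normalized distance reading in [0, 1]:
# "near", "medium", "far". Breakpoints are illustrative assumptions.
def memberships(d):
    near = max(0.0, min(1.0, (0.5 - d) / 0.5))
    far = max(0.0, min(1.0, (d - 0.5) / 0.5))
    medium = max(0.0, 1.0 - near - far)
    return [near, medium, far]

ACTIONS = [-1.0, 0.0, 1.0]          # steer left, straight, right (hypothetical)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

# One row of q-values per fuzzy rule (one rule per membership label here).
q = [[0.0] * len(ACTIONS) for _ in range(3)]

def choose(phi):
    """Pick one discrete action per fired rule (epsilon-greedy), then
    blend the picks by firing strength into one continuous action."""
    picks = []
    for i in range(len(phi)):
        if random.random() < EPS:
            picks.append(random.randrange(len(ACTIONS)))
        else:
            picks.append(max(range(len(ACTIONS)), key=lambda a: q[i][a]))
    action = sum(p * ACTIONS[a] for p, a in zip(phi, picks))
    return action, picks

def update(phi, picks, reward, phi_next):
    """Distribute the TD error over the fired rules, weighted by
    each rule's firing strength."""
    q_sa = sum(p * q[i][a] for i, (p, a) in enumerate(zip(phi, picks)))
    v_next = sum(p * max(q[i]) for i, p in enumerate(phi_next))
    delta = reward + GAMMA * v_next - q_sa
    for i, (p, a) in enumerate(zip(phi, picks)):
        q[i][a] += ALPHA * delta * p
```

In a control loop, the robot would call `memberships` on each sensor reading, apply the blended action, observe a reward (e.g. negative when close to an obstacle), and call `update`; over many steps the per-rule q-values converge toward an avoidance policy.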