Suziati Bt Salleh 1, Alaaaldin abdulrahman mohamed 2
Mechatronics Division, Faculty of Engineering
University Selangor, Bestari Jaya Campus, Batang Bejuntai, Selangor Darul Ehsan, Malaysia
1suziati83@yahoo.com, 2alaa-oo7@hotmail.com
Abstract— Voice recognition systems play a major role in many modern machine-controlled technologies. They ease communication between the user and the system because the user does not need to make any physical movement. In this research, a Visual Basic program was used as the foundation and central command unit for the vacuum mobile system. A set of six commands was created and installed in the Visual Basic program and is displayed in the Visual Basic window. The user can either speak a command directly into the computer's microphone or press the corresponding command button in the Visual Basic window. All pre-installed commands were successfully sent from the program and received by the mobile robot. In the absence of the central command unit, the Visual Basic program, a manual switch was provided so that the mobile mechanism can operate automatically. This feature allows the mobile robot to function autonomously.
II. THE OVERALL MOBILE ROBOT DESIGN
Once a command is entered verbally through the Visual Basic program, Visual Basic checks it against the grammar text file. If the command is spelled correctly and matches one of the words in the grammar text file, a code is sent to the PIC16F873A through the MAX232. The command is then translated and passed to the PT2262 to be encoded and transmitted over the TX/RX module to the PT2272-M4, where it is decoded. The decoded command is then sent to the receiving PIC16F873A, which translates it according to its pre-installed program. Based on the command, the PIC drives the servo motors via the L298N to operate either in the forward,
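The PC-side step of this chain, matching a recognised word against the grammar file and emitting a single code byte for the PIC, can be sketched as follows. This is a hedged illustration only: the paper's actual program is written in Visual Basic, and the six command words and their code bytes below are assumptions, not the authors' actual values.

```python
# Sketch of the PC-side command dispatch, assuming a six-word grammar.
# The command names and code bytes are hypothetical placeholders; the
# real mapping lives in the Visual Basic program and its grammar file.

GRAMMAR = {
    "forward": b"F",
    "reverse": b"B",
    "left":    b"L",
    "right":   b"R",
    "start":   b"S",
    "stop":    b"P",
}

def dispatch(spoken):
    """Return the serial code byte for a recognised command, else None.

    Mirrors the check against the grammar text file: only an exact,
    correctly spelled match produces a code for the PIC16F873A.
    """
    word = spoken.strip().lower()
    return GRAMMAR.get(word)
```

In the real system the returned byte would then be written to the serial port (reaching the PIC through the MAX232 level shifter), for example with a serial library's write call; an unrecognised word produces no transmission at all.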