Introduction
Implementation of a Map Following Vehicle (MFV) is a step towards a ground vehicle that can operate on its own: one that can make decisions and carry out the tasks we require of it. Such a vehicle processes data in real time and takes decisions on the basis of that processed data. The project is based on the integration of different hardware and software modules and their interaction to produce an efficient outcome.
1.1 Motivation
The basic idea of this project is to design a map following vehicle that is autonomous, meaning it can go to places where sending a human would be life-threatening and carry out the tasks we specify. It takes decisions on its own, using a camera as the input device to acquire video of the path. This video is sent to the onboard computer system, which processes the desired frame (image) of the video in real time and decides, on the basis of that processed image, whether the space in front of the vehicle is a clear way or a hurdle. If the system finds a hurdle, it changes the route of the vehicle and switches to a way where the track is clear for the vehicle to move forward, and so on.
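The clear-way-versus-hurdle decision described above can be sketched as a simple region-of-interest test. The fragment below is an illustrative C sketch only (the project's actual image processing is written in MATLAB); it assumes the frame has already been converted to an 8-bit grayscale array, that the lower half of the frame corresponds to the ground directly ahead, and that pixels darker than a tunable threshold are treated as obstacle pixels. The function name and parameters are hypothetical.

```c
#include <stddef.h>

/* Decide whether the region in front of the vehicle is clear.
 * frame: grayscale image, row-major, width*height bytes.
 * The lower half of the frame is taken as the region of interest,
 * since it corresponds to the ground directly ahead (assumption).
 * threshold and max_ratio are illustrative tuning parameters. */
int path_is_clear(const unsigned char *frame, size_t width, size_t height,
                  unsigned char threshold, double max_ratio)
{
    size_t dark = 0, total = 0;
    for (size_t y = height / 2; y < height; ++y) {
        for (size_t x = 0; x < width; ++x) {
            if (frame[y * width + x] < threshold)
                ++dark;
            ++total;
        }
    }
    /* Clear if the fraction of "obstacle" pixels is small enough. */
    return (double)dark / (double)total <= max_ratio;
}
```

In practice the thresholding and region-of-interest choices would be tuned to the camera mounting and lighting conditions; the point of the sketch is only the shape of the real-time decision: one frame in, one clear/hurdle verdict out.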
The vehicle is built on a simple toy car approximately 2 x 2 feet in size, carrying an onboard computer system with a camera, a pair of batteries, microcontroller circuitry, motor driver circuitry, and motors to drive the car.
The project requires programming skills both for the image processing and for the microcontroller. The Digital Image Processing (DIP) programs are written in MATLAB, while the C language and the VMLAB software are used to program the microcontroller. The project also requires circuit-design skills with electronic components, to design the biasing circuit of the microcontroller and the motor driver circuitry.
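On the microcontroller side, the C program's core job is to translate the vision system's verdict into signals for the motor driver. The sketch below is a hypothetical fragment, assuming an H-bridge motor driver controlled through four digital inputs (IN1..IN4); the pin patterns, constant names, and the choice to always turn left on an obstacle are illustrative assumptions, not the project's actual wiring or strategy.

```c
/* Illustrative mapping from the vision decision to motor commands.
 * A command is a 4-bit pattern for the H-bridge inputs IN1..IN4
 * (assumed wiring: bit 0 = IN1 ... bit 3 = IN4). */
enum {
    CMD_FORWARD   = 0x05, /* 0101: both motors forward */
    CMD_TURN_LEFT = 0x06, /* 0110: right forward, left reverse */
    CMD_STOP      = 0x00  /* all inputs low */
};

/* path_clear: result of the image-processing step (1 = clear). */
unsigned char motor_command(int path_clear)
{
    /* Drive straight while the path is clear; otherwise turn
     * toward an open route (left, in this sketch). */
    return path_clear ? CMD_FORWARD : CMD_TURN_LEFT;
}
```

On the actual hardware this return value would be written to an output port of the microcontroller each control cycle; keeping the mapping in one small function makes it easy to change the evasive maneuver without touching the vision code.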