Andrew I. Comport, Éric Marchand, François Chaumette
IRISA - INRIA Rennes
Campus de Beaulieu, 35042 Rennes, France
E-mail: Firstname.Lastname@irisa.fr
Abstract
Augmented reality has now progressed to the point where real-time applications are being considered and needed. At the same time, it is important that synthetic elements are rendered and aligned in the scene in an accurate and visually acceptable way. To address these issues, a real-time, robust and efficient 3D model-based tracking algorithm is proposed for a 'video see-through' monocular vision system. Tracking the objects in the scene amounts to computing the pose between the camera and the objects; virtual objects can then be projected into the scene using this pose. Here, non-linear pose computation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different features, including lines, circles, cylinders and spheres. A local moving-edges tracker is used to provide real-time tracking of points along the normals to the object contours. A method is proposed for combining local position uncertainty and global pose uncertainty in an efficient and accurate way by propagating uncertainty. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively re-weighted least-squares implementation. The method presented in this paper has been validated on several complex image sequences, including outdoor environments. Results show the method to be robust to occlusion, changes in illumination and mistracking.
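The robust estimation scheme mentioned above — an M-estimator embedded in the minimization via iteratively re-weighted least squares (IRLS) — can be illustrated on a generic linear fitting problem. The sketch below is not the paper's pose-estimation code; it assumes the Huber influence function as one common M-estimator choice and uses a MAD-based scale estimate, both illustrative assumptions.

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Huber M-estimator weights: 1 inside the threshold k, k/|r| outside.

    Note: Huber is an illustrative choice; any M-estimator weight
    function could be plugged in here.
    """
    r = np.abs(residuals)
    w = np.ones_like(r)
    mask = r > k
    w[mask] = k / r[mask]
    return w

def irls(A, b, n_iters=20, k=1.345):
    """Solve A x ~ b robustly by iteratively re-weighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary LS initialization
    for _ in range(n_iters):
        r = A @ x - b
        # Scale residuals by a robust spread estimate (median absolute
        # deviation); the epsilon guards against a degenerate zero scale.
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = huber_weights(r / scale, k)
        # Weighted normal equations: (A^T W A) x = A^T W b
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return x

# Fit a line y = a*x + c to data containing one gross outlier,
# mimicking a mistracked edge point.
xs = np.linspace(0.0, 1.0, 20)
ys = 2.0 * xs + 1.0
ys[5] += 10.0                                  # simulated outlier
A = np.column_stack([xs, np.ones_like(xs)])
a, c = irls(A, ys)
```

Because the Huber weights shrink the influence of large residuals at each iteration, the recovered slope and intercept stay close to (2, 1) despite the outlier, whereas ordinary least squares would be pulled off.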
focus on the registration techniques that allow alignment of real and virtual worlds using images acquired in real-time by a moving camera. In such systems, AR is mainly a pose (or viewpoint) computation issue. In this paper, a markerless model-based algorithm is used for the tracking of 3D objects in monocular image sequences.
Proceedings of the Second IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR ’03) 0-7695-2006-5/03 $17.00 © 2003 IEEE