Multimodal Information Spaces for Content-based Image Retrieval
Abstract Content-based image retrieval is currently a research problem of great interest in academia and industry, due to the large collections of images available in different contexts. One of the main challenges in developing effective image retrieval systems is the automatic identification of semantic image contents. This research proposal aims to design an image retrieval model able to take advantage of different data sources, i.e., multimodal information, to improve the response of an image retrieval system. In particular, two data modalities associated with the contents and the context of images are considered in this proposal: visual features and unstructured text annotations. The proposed framework is based on kernel methods, which provide two important advantages over traditional multimodal approaches: first, the structure of each modality is preserved in a high-dimensional feature space, and second, they provide natural ways to fuse feature spaces into a single information space. This document presents the research agenda to build a Multimodal Information Space for searching images by content.
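To make the kernel-fusion idea concrete, the following is a minimal sketch (not the proposal's actual framework) of how one kernel per modality can be combined into a single multimodal information space. The feature dimensions, the RBF and cosine kernel choices, and the weight alpha are illustrative assumptions, not values taken from this proposal.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, cosine_similarity

rng = np.random.default_rng(0)

# Toy representations for 5 images: visual descriptors and text (e.g. tf-idf) vectors.
visual_features = rng.random((5, 128))   # e.g. bag-of-visual-words histograms
text_features = rng.random((5, 300))     # e.g. tf-idf vectors of the annotations

# One kernel per modality: the structure of each modality is modeled separately.
K_visual = rbf_kernel(visual_features, gamma=0.01)
K_text = cosine_similarity(text_features)

# Fusion: a convex combination of positive semi-definite kernels is itself a kernel,
# so it defines a single multimodal space in which images can be ranked or learned on.
alpha = 0.5  # illustrative mixing weight
K_multimodal = alpha * K_visual + (1.0 - alpha) * K_text

# Rank the images against the first one (acting as the query) in the fused space.
query_index = 0
ranking = np.argsort(-K_multimodal[query_index])
print(ranking)

This sketch only illustrates that fusing modalities at the kernel level avoids concatenating heterogeneous feature vectors directly; each modality keeps its own similarity structure.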
Presented by
Juan Carlos Caicedo Rueda
Research Advisor
Prof. Fabio A. González O., Ph.D.
Area
Computer Science
Research Fields
Information Retrieval and Machine Learning.
1 INTRODUCTION
Content-Based Image Retrieval (CBIR) is an active research discipline focused on computational strategies to search for relevant images based on visual content analysis. In this proposal, multimodal analysis is considered for developing CBIR systems, especially for image collections in which some text is associated with the images. Multimodality in Information Retrieval sometimes refers to the interaction mechanisms and devices used to query the system. However, from the Multimedia Information Retrieval perspective, multimodality refers to the different data sources that describe a document, such as the visual content of an image and the unstructured text annotations attached to it.
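As a minimal sketch of this notion of multimodality, the same image can be described both by a visual descriptor computed from its pixels (contents) and by a bag-of-words encoding of its annotations (context). The gray-level histogram, the example annotations, and the bag-of-words encoding below are illustrative assumptions, not the specific representations used in this proposal.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# Contents: a visual descriptor computed from the pixels (here, a gray-level histogram).
image_pixels = np.random.default_rng(1).integers(0, 256, size=(64, 64))
visual_descriptor, _ = np.histogram(image_pixels, bins=32, range=(0, 256))

# Context: unstructured text annotations attached to the image.
annotations = ["a brown dog playing in the park", "dog, park, outdoor scene"]
vectorizer = CountVectorizer()
text_descriptor = vectorizer.fit_transform(annotations).sum(axis=0)

# A multimodal retrieval system reasons over both representations instead of only one.
print(visual_descriptor.shape, text_descriptor.shape)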