MVSS: Mobile Visual Search Based on Saliency

Abstract

With the development of content-based image retrieval (CBIR), mobile visual search (MVS) has become a promising application. In a typical MVS system, similar images are retrieved from a database maintained by the server, given a query image taken on a mobile device. Unlike general CBIR, MVS must also account for transmission latency. Existing work proposes progressive transmission, which reduces this latency by minimizing the amount of data transmitted, using low-dimensional feature descriptors and compression coding. Although these methods improve retrieval speed, they decrease retrieval accuracy because of the information they discard. To address this problem, this paper proposes a novel MVS framework that consists of a new progressive transmission model based on image saliency (MVSS) and a new distance metric matched to that model. In our framework, images are represented by SIFT descriptors, which preserve more information than low-dimensional descriptors or compression coding. Although SIFT is a high-dimensional descriptor, we transmit only the SIFT descriptors in the salient regions of an image, which keeps transmission latency low. We evaluate our framework on the Stanford image set, and the results demonstrate that it not only reduces transmission latency but also achieves better retrieval accuracy.
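The saliency-gated transmission step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not name a specific saliency detector, so the sketch uses the classic spectral-residual method (Hou & Zhang) to score pixels, then keeps only the keypoints whose descriptors would be transmitted. The function names and the threshold value are illustrative assumptions.

```python
import numpy as np

def spectral_residual_saliency(gray):
    """Spectral-residual saliency map (Hou & Zhang) for a 2-D grayscale array.

    Returns a map normalized to [0, 1]; higher values mark salient regions.
    """
    f = np.fft.fft2(gray.astype(float))
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Smooth the log-amplitude with a 3x3 box filter; the residual
    # (log-amplitude minus its local average) marks salient structure.
    pad = np.pad(log_amp, 1, mode="edge")
    h, w = log_amp.shape
    smooth = sum(pad[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    residual = log_amp - smooth
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / (sal.max() + 1e-12)

def salient_keypoints(keypoints, saliency, thresh=0.2):
    """Keep only (x, y) keypoints that fall in salient regions.

    In an MVSS-style pipeline, only the SIFT descriptors of these
    surviving keypoints would be transmitted to the server.
    """
    return [(x, y) for (x, y) in keypoints
            if saliency[int(y), int(x)] >= thresh]
```

In use, the client would run its SIFT detector as usual, apply `salient_keypoints` to the detected locations, and send only the corresponding descriptors, trading a cheap saliency computation for a smaller upload.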

Publication
2013 IEEE 10th International Conference on High Performance Computing and Communications (HPCC)
Yegang Du
Assistant Professor

My research interests include intelligent systems, HCI, AIoT, and pervasive computing.