Gong, Y., Cosma, G. and Finke, A., 2024. VITR: Augmenting Vision Transformers with Relation-Focused Learning for Cross-modal Information Retrieval. ACM Transactions on Knowledge Discovery from Data, 18 (9), 220.
Full text available as:
PDF (open access article): VITR Augmenting.pdf - Published Version, available under License Creative Commons Attribution, 6MB.
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
DOI: 10.1145/3686805
Abstract
The relations expressed in user queries are vital for cross-modal information retrieval. Relation-focused cross-modal retrieval aims to retrieve information that corresponds to these relations, enabling effective retrieval across different modalities. Pre-trained networks, such as Contrastive Language-Image Pre-training (CLIP) networks, have gained significant attention and acclaim for their exceptional performance in various cross-modal learning tasks. However, the Vision Transformer (ViT) used in these networks is limited in its ability to focus on image region relations: ViT is trained to match images with relevant descriptions at the global level, without considering the alignment between image regions and descriptions. This article introduces VITR, a novel network that enhances ViT by extracting and reasoning about image region relations based on a local encoder. VITR comprises two key components. First, it extends ViT-based cross-modal networks by enabling them to extract and reason with the region relations present in images. Second, it incorporates a fusion module that combines the reasoned results with global knowledge to predict similarity scores between images and descriptions. The proposed VITR network was evaluated in experiments on relation-focused cross-modal information retrieval tasks. Results on the Flickr30K, MS-COCO, RefCOCOg, and CLEVR datasets demonstrate that VITR consistently outperforms state-of-the-art networks in image-to-text and text-to-image retrieval.
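The abstract describes a two-part design: a local encoder extracts and reasons over image region relations, and a fusion module combines the reasoned result with global ViT/CLIP knowledge to score image-description pairs. The following PyTorch sketch illustrates that idea only; the module names, dimensions, attention-based relation reasoning, and additive fusion are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationReasoner(nn.Module):
    """Illustrative relation reasoning over image region features (assumption:
    self-attention among regions, then cross-attention with description tokens)."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.region_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, region_feats, text_feats):
        # Reason about relations between image regions.
        regions, _ = self.region_attn(region_feats, region_feats, region_feats)
        # Align the reasoned region features with the description tokens.
        aligned, _ = self.cross_attn(text_feats, regions, regions)
        return aligned.mean(dim=1)  # one pooled vector per image-text pair

class FusionScorer(nn.Module):
    """Fuses the local relation-reasoned score with the global ViT/CLIP similarity."""
    def __init__(self, dim=512):
        super().__init__()
        self.reasoner = RelationReasoner(dim)
        self.local_head = nn.Linear(dim, 1)

    def forward(self, region_feats, text_feats, global_img, global_txt):
        local_score = self.local_head(self.reasoner(region_feats, text_feats)).squeeze(-1)
        global_score = F.cosine_similarity(global_img, global_txt, dim=-1)
        return local_score + global_score  # fused similarity (hypothetical fusion rule)

# Toy usage with random tensors standing in for region-encoder and ViT/CLIP features.
if __name__ == "__main__":
    B, R, T, D = 4, 36, 20, 512  # batch, regions, text tokens, feature dim
    scorer = FusionScorer(D)
    sims = scorer(torch.randn(B, R, D), torch.randn(B, T, D),
                  torch.randn(B, D), torch.randn(B, D))
    print(sims.shape)  # torch.Size([4])
```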
| Item Type: | Article |
|---|---|
| ISSN: | 1556-4681 |
| Uncontrolled Keywords: | General and reference → Cross-modal retrieval; Multimedia and multimodal retrieval; Computing methodologies → Reasoning; Machine learning approaches |
| Group: | Faculty of Science & Technology |
| ID Code: | 40585 |
| Deposited By: | Symplectic RT2 |
| Deposited On: | 09 Dec 2024 15:34 |
| Last Modified: | 09 Dec 2024 15:34 |