
Language-Led Visual Grounding and Future Possibilities.

Sui, Z., Zhou, M., Feng, Z., Stefanidis, A. and Jiang, N., 2023. Language-Led Visual Grounding and Future Possibilities. Electronics, 12 (14), 3142.

Full text available as:

PDF (OPEN ACCESS ARTICLE)
electronics-12-03142.pdf - Published Version
Available under License Creative Commons Attribution.

1MB

DOI: 10.3390/electronics12143142

Abstract

In recent years, the rapid development of computer vision technology, the popularity of intelligent hardware, and the growing demand for human–machine interaction in intelligent products have made visual grounding increasingly important: it helps machines and humans recognize and locate objects, thereby supporting human–machine interaction and intelligent manufacturing. At the same time, human–machine interaction itself continues to evolve, becoming more intelligent, humanized, and efficient. In this article, a new visual grounding model is proposed, along with a language validation module that treats language information as the primary signal in order to increase the model's interactivity. In addition, we outline future possibilities for visual grounding and present two examples that explore the application and optimization of visual grounding and human–machine interaction technology in practical scenarios, providing reference and guidance for relevant researchers and promoting the development and application of these technologies.

Item Type: Article
ISSN: 2079-9292
Uncontrolled Keywords: visual grounding; human–computer interaction; intelligent systems; user experience; interaction design
Group: Faculty of Science & Technology
ID Code: 38909
Deposited By: Symplectic RT2
Deposited On: 18 Aug 2023 12:46
Last Modified: 18 Aug 2023 12:46
