
Surgical Instruction Generation with Transformers

Zhang, J., Nie, Y., Chang, J. and Zhang, J., 2021. Surgical Instruction Generation with Transformers. In: MICCAI 2021: International Conference on Medical Image Computing and Computer-Assisted Intervention, 27 September - 1 October 2021, Strasbourg, France, pp. 290-299.

Full text available as:

PDF: 2107.06964.pdf - Accepted Version (1MB)
Available under License Creative Commons Attribution Non-commercial.
DOI: 10.1007/978-3-030-87202-1_28

Abstract

Automatic surgical instruction generation is a prerequisite for intra-operative, context-aware surgical assistance. However, generating instructions from surgical scenes is challenging, as it requires jointly understanding the surgical activity in the current view and modelling the relationships between visual information and textual description. Inspired by neural machine translation and image captioning in the open domain, we introduce a transformer-backboned encoder-decoder network with self-critical reinforcement learning to generate instructions from surgical images. We evaluate the effectiveness of our method on the DAISI dataset, which includes 290 procedures from various medical disciplines. Our approach outperforms the existing baseline on all caption evaluation metrics. The results demonstrate the benefits of a transformer-backboned encoder-decoder structure in handling multimodal context.
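
For readers who want a concrete picture of the approach the abstract describes, below is a minimal PyTorch sketch of a transformer encoder-decoder over pre-extracted image features, together with a self-critical reinforcement learning loss that uses the greedy decode's reward as a baseline. This is an illustration under stated assumptions, not the authors' implementation: the feature dimensions, layer counts, learned positional embeddings, and the choice of sentence-level reward (e.g. CIDEr) are all assumptions; consult the paper at the DOI above for the actual configuration.

    import torch
    import torch.nn as nn


    class SurgicalCaptioner(nn.Module):
        """Transformer encoder-decoder over pre-extracted image region features."""

        def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=4,
                     feat_dim=2048, max_len=64):
            super().__init__()
            self.feat_proj = nn.Linear(feat_dim, d_model)    # image features -> model dim
            self.tok_embed = nn.Embedding(vocab_size, d_model)
            self.pos_embed = nn.Embedding(max_len, d_model)  # learned positions (assumption)
            self.transformer = nn.Transformer(
                d_model=d_model, nhead=nhead,
                num_encoder_layers=num_layers, num_decoder_layers=num_layers,
                batch_first=True)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, img_feats, captions):
            # img_feats: (B, R, feat_dim) region features; captions: (B, T) token ids
            memory_in = self.feat_proj(img_feats)
            pos = torch.arange(captions.size(1), device=captions.device)
            tgt = self.tok_embed(captions) + self.pos_embed(pos)
            # Causal mask so each target position attends only to earlier tokens.
            tgt_mask = self.transformer.generate_square_subsequent_mask(
                captions.size(1)).to(captions.device)
            hidden = self.transformer(memory_in, tgt, tgt_mask=tgt_mask)
            return self.out(hidden)  # (B, T, vocab_size) next-token logits


    def self_critical_loss(sample_logprobs, sample_reward, greedy_reward):
        """REINFORCE with the greedy decode's reward as baseline (self-critical RL)."""
        # sample_logprobs: (B,) summed log-probs of the sampled captions
        # *_reward: (B,) sentence-level scores (e.g. CIDEr) of sampled/greedy captions
        advantage = (sample_reward - greedy_reward).detach()
        return -(advantage * sample_logprobs).mean()


    if __name__ == "__main__":
        model = SurgicalCaptioner(vocab_size=1000)
        feats = torch.randn(2, 36, 2048)        # 2 images, 36 regions each
        caps = torch.randint(0, 1000, (2, 12))  # 2 captions of 12 tokens
        print(model(feats, caps).shape)         # torch.Size([2, 12, 1000])

During self-critical training, one caption is sampled from the decoder and one is decoded greedily; scoring both with the caption metric and feeding the results to self_critical_loss optimizes the evaluation metric directly, since only samples that beat the greedy baseline receive a positive learning signal.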

Item Type: Conference or Workshop Item (UNSPECIFIED)
ISSN: 0302-9743
Group: Faculty of Media & Communication
ID Code: 36118
Deposited By: Symplectic RT2
Deposited On: 19 Oct 2021 10:49
Last Modified: 14 Mar 2022 14:29
