Qu, J., Huang, D., Shi, Y., Liu, J. and Tang, W., 2025. Diffusion-driven multi-modality medical image fusion. Medical & Biological Engineering & Computing, 63, pp. 2105-2118.
Full text available as:

PDF: Diffusion-driven multi-modality medical image fusion.pdf - Accepted Version, 8MB. Restricted to Repository staff only until 11 February 2026. Available under License Creative Commons Attribution Non-commercial.
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
DOI: 10.1007/s11517-025-03300-6
Abstract

Multi-modality medical image fusion (MMIF) technology exploits the complementarity of different modalities to provide more comprehensive diagnostic insights for clinical practice. Existing deep learning-based methods often focus on extracting the primary information from individual modalities while ignoring the correlation of information distribution across modalities, which leads to insufficient fusion of image details and color information. To address this problem, a diffusion-driven MMIF method is proposed to leverage the information distribution relationship among multi-modality images in the latent space. To better preserve the complementary information from different modalities, a local and global network (LAGN) is introduced. Additionally, a loss strategy is designed to establish robust constraints among diffusion-generated images, original images, and fused images; this strategy supervises the training process and prevents information loss in the fused images. Experimental results demonstrate that the proposed method surpasses state-of-the-art image fusion methods in terms of unsupervised metrics on three datasets: MRI/CT, MRI/PET, and MRI/SPECT. The proposed method successfully captures rich detail and color information. Furthermore, 16 doctors and medical students were invited to evaluate the effectiveness of the method in assisting clinical diagnosis and treatment.
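This record reproduces only the abstract, not the paper's loss formulation. As a rough illustration of the kind of constraint strategy the abstract describes (joint constraints among diffusion-generated, original, and fused images), the following is a minimal PyTorch sketch. The function names, the gradient-based detail proxy, and the weights `w_int`, `w_grad`, and `w_diff` are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def gradient_magnitude(img: torch.Tensor) -> torch.Tensor:
    """Finite-difference gradient magnitude, used as a proxy for image detail.

    Expects a (B, C, H, W) tensor; zero-pads so the output keeps the input shape.
    """
    dx = (img[..., :, 1:] - img[..., :, :-1]).abs()
    dy = (img[..., 1:, :] - img[..., :-1, :]).abs()
    return F.pad(dx, (0, 1, 0, 0)) + F.pad(dy, (0, 0, 0, 1))


def fusion_loss(fused, src_a, src_b, diffusion_out,
                w_int=1.0, w_grad=1.0, w_diff=0.1):
    """Hypothetical composite loss tying the fused image to both source
    modalities and to a diffusion-generated reference (weights illustrative)."""
    # Intensity term: the fused image should track the stronger source response.
    loss_int = F.l1_loss(fused, torch.maximum(src_a, src_b))
    # Detail term: fused gradients should cover the sharper source gradients.
    loss_grad = F.l1_loss(gradient_magnitude(fused),
                          torch.maximum(gradient_magnitude(src_a),
                                        gradient_magnitude(src_b)))
    # Diffusion constraint: keep the fused output consistent with the
    # diffusion-generated image, discouraging information loss.
    loss_diff = F.l1_loss(fused, diffusion_out)
    return w_int * loss_int + w_grad * loss_grad + w_diff * loss_diff
```

In a training loop this would be called per batch, e.g. `loss = fusion_loss(fused, mri, ct, diffusion_out)` for single-channel MRI/CT slices; for color modalities such as PET or SPECT, fusion methods typically apply such constraints in a luminance/chrominance space rather than directly in RGB.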
Item Type: Article
ISSN: 0140-0118
Uncontrolled Keywords: Deep learning; Diffusion; Local and global fusion; Medical image fusion; Humans; Multimodal Imaging; Image Processing, Computer-Assisted; Deep Learning; Magnetic Resonance Imaging; Algorithms; Tomography, X-Ray Computed; Tomography, Emission-Computed, Single-Photon
Group: Faculty of Media & Communication
ID Code: 41387
Deposited By: Symplectic RT2
Deposited On: 22 Sep 2025 14:14
Last Modified: 22 Sep 2025 14:14