
Explainable Approach Using Semantic-Guided Alignment for Radiology Imaging Diagnosis.

Cheddi, F., Habbani, A. and Nait-Charif, H., 2025. Explainable Approach Using Semantic-Guided Alignment for Radiology Imaging Diagnosis. International Journal of Advanced Computer Science and Applications, 16 (7), 629-639.

Full text available as:

PDF (open access article, Published Version): Paper_61-Explainable_Approach_Using_Semantic_Guided_Alignment.pdf (1 MB)
Available under License Creative Commons Attribution.
DOI: 10.14569/IJACSA.2025.0160761

Abstract

The increasing success of deep learning in radiology imaging has significantly advanced automated diagnosis and report generation, with the aim of enhancing diagnostic precision and clinical decision-making. However, existing methods often struggle to produce detailed morphological descriptions, yielding reports that convey only general information without precise clinical specifics and thus fail to meet the stringent interpretability requirements of medical diagnosis. Moreover, the critical need for transparency in automated clinical systems has catalyzed the emergence of explainable artificial intelligence (XAI) as an essential research frontier. To address these limitations, we propose an explainable report-generation system that leverages semantic-guided alignment and interpretable multimodal deep learning. Our model combines hierarchical semantic feature extraction from medical reports with fine-grained visual features that guide the model to focus on lesion-relevant regions, and it uses Concept Activation Vectors (CAVs) to explain how radiological concepts affect report generation. A contrastive multimodal fusion module aligns the textual and visual modalities through hierarchical attention and contrastive learning. Finally, an integrated concept activation system provides transparent explanations by quantifying how individual radiological concepts influence the generated reports. Comparative validation against existing methods shows improved report quality in terms of the clinical accuracy of the descriptions, lesion localization, and contextual consistency, positioning our framework as a robust tool for generating more accurate and reliable medical reports.
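The abstract does not spell out the contrastive multimodal fusion objective, but a minimal sketch of the kind of symmetric contrastive (InfoNCE-style) loss typically used to align image and report embeddings is given below. The function name, feature shapes, and temperature value are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of contrastive image-report alignment, assuming a
# CLIP-style symmetric InfoNCE objective; the paper's exact formulation
# may differ.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(image_feats: torch.Tensor,
                               text_feats: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """Pull matched image/report pairs together, push mismatched apart.

    image_feats, text_feats: (batch, dim) embeddings from the visual and
    textual encoders, already projected into a shared space.
    """
    # L2-normalise so similarity reduces to a cosine (dot product).
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(text_feats, dim=-1)

    # Pairwise similarity matrix; diagonal entries are the matched pairs.
    logits = img @ txt.t() / temperature
    targets = torch.arange(img.size(0), device=img.device)

    # Contrast in both directions: image -> report and report -> image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)
```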
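Likewise, the Concept Activation Vectors cited in the abstract follow, in the general case, the TCAV recipe of Kim et al. (2018): fit a linear probe separating a layer's activations on concept examples from activations on random examples, take the probe's normal as the CAV, and test the model's directional sensitivity along it. The sketch below assumes that standard recipe; the layer choice, probe, and helper names are hypothetical rather than the paper's own implementation.

```python
# A minimal sketch of CAV learning and concept attribution in the style of
# TCAV (Kim et al., 2018); not the paper's own implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Fit a linear probe separating concept vs. random activations.

    The CAV is the unit-normalised normal of the decision boundary,
    pointing towards the concept class.
    """
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)),
                        np.zeros(len(random_acts))])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    cav = probe.coef_.ravel()
    return cav / np.linalg.norm(cav)

def tcav_score(gradients: np.ndarray, cav: np.ndarray) -> float:
    """Fraction of inputs whose output gradient at the chosen layer has a
    positive directional derivative along the CAV, i.e. for which the
    radiological concept pushes the prediction up."""
    return float(np.mean(gradients @ cav > 0))
```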

Item Type: Article
ISSN: 2158-107X
Uncontrolled Keywords: Automated report generation; explainable AI; cross-modal fusion; contrastive learning; semantic-guided alignment
Group: Faculty of Science & Technology
ID Code: 41529
Deposited By: Symplectic RT2
Deposited On: 20 Nov 2025 12:29
Last Modified: 20 Nov 2025 12:29
