A novel solution based on scale invariant feature transform descriptors and deep learning for the detection of suspicious regions in mammogram images.

Bruno, A., Ardizzone, E., Vitabile, S. and Midiri, M., 2020. A novel solution based on scale invariant feature transform descriptors and deep learning for the detection of suspicious regions in mammogram images. Journal of Medical Signals and Sensors, 10 (3), 158-173.

Full text available as:

PDF: JMedSignalsSens103158-5638449_153944.pdf - Published Version (3MB)
Available under License Creative Commons Attribution Non-commercial Share Alike.

DOI: 10.4103/jmss.JMSS_31_19

Abstract

Background: Deep learning methods have become popular for their high performance in the classification and detection of events in computer vision tasks. The transfer learning paradigm is widely adopted to apply pretrained convolutional neural networks (CNNs) to medical domains, overcoming the scarcity of public datasets. Investigations into the knowledge inference abilities of transfer learning in the context of mammogram screening, and into possible combinations with unsupervised techniques, are in progress. Methods: We propose a novel technique for the detection of suspicious regions in mammograms that consists of the combination of two approaches: one based on scale invariant feature transform (SIFT) keypoints, and one based on transfer learning with pretrained CNNs such as PyramidNet and AlexNet fine-tuned on digital mammograms generated by different mammography devices. Preprocessing, feature extraction, and selection steps characterize the SIFT-based method, while the deep learning network validates the candidate suspicious regions detected by the SIFT method. Results: The experiments conducted on both the mini-MIAS dataset and our new public dataset Suspicious Region Detection on Mammogram from PP (SuReMaPP) of 384 digital mammograms exhibit high performance compared with several state-of-the-art methods. Our solution reaches 98% sensitivity and 90% specificity on SuReMaPP, and 94% sensitivity and 91% specificity on mini-MIAS. Conclusions: The experimental sessions conducted so far prompt us to investigate further the power of transfer learning across different CNNs and possible combinations with unsupervised techniques. Transfer learning accuracy may decrease when the training and testing images come from mammography devices with different properties.
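
The abstract describes a two-stage pipeline: SIFT keypoints propose candidate regions, and a pretrained CNN (e.g., AlexNet) fine-tuned via transfer learning validates each candidate. The sketch below illustrates that idea with OpenCV and torchvision; the patch size, decision threshold, binary classification head, and file names are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of the two-stage idea described in the abstract:
# 1) SIFT keypoints propose candidate regions on a mammogram,
# 2) a pretrained CNN (torchvision's AlexNet here), with its last layer
#    replaced for a binary suspicious/normal decision, validates each patch.
# Patch size, threshold, and paths are illustrative assumptions.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

PATCH = 224  # AlexNet input size


def sift_candidates(gray):
    """Return (x, y) centres of SIFT keypoints as candidate regions."""
    sift = cv2.SIFT_create()
    keypoints = sift.detect(gray, None)
    return [kp.pt for kp in keypoints]


def build_validator(num_classes=2):
    """Pretrained AlexNet with its final layer swapped for fine-tuning."""
    net = models.alexnet(weights="DEFAULT")
    net.classifier[6] = nn.Linear(4096, num_classes)
    return net


to_tensor = transforms.Compose([
    transforms.ToTensor(),               # HxWxC uint8 -> CxHxW float in [0, 1]
    transforms.Resize((PATCH, PATCH)),   # normalise patch size for the CNN
])


def validate_regions(gray, net, threshold=0.5):
    """Crop a patch around each SIFT keypoint and keep those the CNN flags."""
    net.eval()
    suspicious = []
    h, w = gray.shape
    for x, y in sift_candidates(gray):
        # Clamp the patch window to the image borders.
        x0 = int(min(max(x - PATCH // 2, 0), max(w - PATCH, 0)))
        y0 = int(min(max(y - PATCH // 2, 0), max(h - PATCH, 0)))
        patch = gray[y0:y0 + PATCH, x0:x0 + PATCH]
        rgb = cv2.cvtColor(patch, cv2.COLOR_GRAY2RGB)  # CNN expects 3 channels
        inp = to_tensor(rgb).unsqueeze(0)
        with torch.no_grad():
            prob = torch.softmax(net(inp), dim=1)[0, 1].item()
        if prob >= threshold:
            suspicious.append(((x0, y0), prob))
    return suspicious


if __name__ == "__main__":
    # Placeholder image path; fine-tune the validator on labelled patches
    # before using it on real mammograms.
    image = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)
    model = build_validator()
    print(validate_regions(image, model))
```

The CNN acts purely as a patch-level validator in this sketch, mirroring the abstract's division of labour: SIFT handles candidate generation, while the fine-tuned network filters false positives.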

Item Type: Article
ISSN: 2228-7477
Uncontrolled Keywords: classification; computer-assisted image processing; computing methodologies; deep learning; digital mammography
Group: Faculty of Media & Communication
ID Code: 34245
Deposited By: Symplectic RT2
Deposited On: 06 Jul 2020 09:34
Last Modified: 14 Mar 2022 14:23
