Cross-View Self-Similarity Using Shared Dictionary Learning for Cervical Cancer Staging

Nesrine Bnouni (Lead / Corresponding author), Islem Rekik, Mohamed Salah Rhim, Najoua Essoukri Ben Amara

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)
185 Downloads (Pure)

Abstract

Dictionary learning (DL) has gained wide popularity for solving computer vision and medical imaging problems. However, to the best of our knowledge, it has not been used for cervical tumor staging. More importantly, very little work has addressed how to aggregate interactions across data views using DL. As a contribution, we propose a novel cross-view self-similarity low-rank shared dictionary learning (CVSS-LRSDL) framework, which introduces three major contributions to medical image-based cervical cancer staging: (1) leveraging the complementarity of axial and sagittal T2w magnetic resonance (MR) views for cervical cancer diagnosis, (2) introducing self-similarity (SS) patches for DL training, which capture the unidirectional interaction from a source view to a target view, and (3) extracting both features shared across tumor grades and grade-specific features using the CVSS-LRSDL learning approach. For the first and second contributions, given an input patch in the source view (axial T2w-MR images), we generate its SS patches within a fixed neighborhood in the target view (sagittal T2w-MR images). Specifically, we produce a unidirectional patch-wise SS mapping from the source to the target view, based on the mutual and complementary information between the two views. As for the third contribution, we represent each individual subject by the weighted distance matrix between views, which is used to train our DL-based classifier to predict the label of a new testing subject. Overall, our framework outperformed several DL-based multi-label classification methods trained using (i) patch intensities, (ii) SS single-view patches, and (iii) weighted SS single-view patches. We evaluated our CVSS-LRSDL framework on 15 T2w-MRI sequences with axial and sagittal views. Our CVSS-LRSDL significantly (p < 0.05) outperformed several comparison methods and achieved an average accuracy of 81.73% for cervical cancer staging.
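As an illustration of the cross-view SS patch generation described above, the sketch below takes a patch from the axial (source) view and collects its k most similar patches within a fixed search neighborhood of the spatially corresponding sagittal (target) slice. It is a minimal sketch only: the function name extract_ss_patches, the neighborhood radius, and the use of Euclidean patch distance are our assumptions, not the authors' implementation.

    # Minimal sketch of unidirectional cross-view self-similarity (SS)
    # patch extraction. Assumptions (not from the paper): the two views
    # are spatially aligned, patches are compared with Euclidean
    # distance, and the k nearest target-view patches are kept.
    import numpy as np

    def extract_ss_patches(source_patch, target_slice, center,
                           search_radius=8, k=5):
        """Return the k target-view patches most similar to source_patch.

        source_patch : (p, p) patch from the axial (source) view
        target_slice : 2-D sagittal (target) slice
        center       : (row, col) of the corresponding location in target_slice
        """
        p = source_patch.shape[0]
        half = p // 2
        r0, c0 = center
        candidates = []
        for dr in range(-search_radius, search_radius + 1):
            for dc in range(-search_radius, search_radius + 1):
                r, c = r0 + dr, c0 + dc
                # Skip candidate centers whose patch falls outside the slice.
                if (r - half < 0 or c - half < 0 or
                        r + half + 1 > target_slice.shape[0] or
                        c + half + 1 > target_slice.shape[1]):
                    continue
                cand = target_slice[r - half:r + half + 1,
                                    c - half:c + half + 1]
                dist = np.linalg.norm(cand - source_patch)
                candidates.append((dist, cand))
        # Keep the k most similar target-view patches.
        candidates.sort(key=lambda t: t[0])
        return [cand for _, cand in candidates[:k]]

The patch distances computed here are also the kind of quantity from which a subject's weighted cross-view distance matrix could be assembled.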
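For the third contribution, models in the low-rank shared dictionary learning family on which CVSS-LRSDL builds typically learn a shared dictionary D_0, capturing structure common to all tumor grades, alongside grade-specific dictionaries stacked in D. A simplified objective of this form, written in our own notation and not necessarily the paper's exact formulation, is

    \min_{D,\, D_0,\, X,\, X_0} \tfrac{1}{2}\, \| Y - D X - D_0 X_0 \|_F^2
        + \lambda_1 \left( \| X \|_1 + \| X_0 \|_1 \right) + \eta\, \| D_0 \|_*

where Y stacks the training subjects' features (here, their weighted cross-view distance representations), the \ell_1 terms enforce sparse codes X and X_0, and the nuclear norm \| D_0 \|_* keeps the shared dictionary low-rank so that grade-shared and grade-specific features remain separated. In classifiers of this kind, a new testing subject is typically coded over the learned dictionaries and assigned the grade whose dictionary yields the smallest reconstruction residual.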

Original language: English
Article number: 8658198
Pages (from-to): 30079-30088
Number of pages: 10
Journal: IEEE Access
Volume: 7
DOIs
Publication status: Published - 5 Mar 2019

Keywords

  • Shared dictionary learning
  • cervical cancer stage
  • cross-view
  • low-rank models
  • self-similarity

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering
