A Hybrid Fusion Approach for Skin Cancer Detection using Deep Learning on Clinical Images and Machine Learning on Patient Metadata

  • Aya Saber Abd El Aziz Omran, Faculty of Computers and Artificial Intelligence, Fayoum University, Fayoum, Egypt
  • Mohamed Mohamed El-Gazzar
  • Mary Monir Saeid
Keywords: Skin Cancer Classification, Machine Learning, Multimodal Fusion, Skin Cancer, Patient Metadata, Medical Diagnostics

Abstract

Skin cancer remains a global health concern, and early detection is critical for improving treatment outcomes. While deep learning models such as convolutional neural networks (CNNs) have proven highly effective at classifying skin cancer from dermatological images, they often disregard valuable patient metadata that can improve diagnostic accuracy. In this study, we introduce a multimodal late fusion framework that integrates both skin cancer images and patient metadata. The approach leverages the Inception-ResNet-v2 (IRv2) model to extract image features and a stacking ensemble of Extra Trees and Random Forest classifiers to process the patient metadata. A final voting classifier with a soft voting strategy, which aggregates class probabilities from Logistic Regression and Random Forest base voters, performs the fusion and achieves an accuracy of 95.9% on the HAM10000 dataset. Our results highlight the potential of multimodal approaches in healthcare applications.
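
Below is a minimal sketch of the late-fusion pipeline described in the abstract, assuming scikit-learn estimators. The synthetic feature matrices, their dimensions, and the exact way the two branches are combined (concatenating image features with metadata-branch class probabilities before the soft voting stage) are illustrative assumptions standing in for the Inception-ResNet-v2 embeddings and encoded HAM10000 metadata used in the paper.

```python
# Hedged sketch of the late-fusion pipeline; synthetic data stands in for
# IRv2 image embeddings and encoded metadata (age, sex, lesion site).
import numpy as np
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              StackingClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_classes = 1000, 7                    # HAM10000 covers 7 lesion classes
img_feats = rng.normal(size=(n_samples, 1536))    # stand-in for IRv2 pooled image features
meta_feats = rng.normal(size=(n_samples, 10))     # stand-in for encoded patient metadata
y = rng.integers(0, n_classes, size=n_samples)

idx_tr, idx_te = train_test_split(np.arange(n_samples), test_size=0.2, random_state=0)

# Metadata branch: stacking ensemble of Extra Trees and Random Forest.
meta_stack = StackingClassifier(
    estimators=[("et", ExtraTreesClassifier(n_estimators=200, random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
meta_stack.fit(meta_feats[idx_tr], y[idx_tr])

# Late fusion: concatenate image features with metadata-branch class probabilities.
fused_tr = np.hstack([img_feats[idx_tr], meta_stack.predict_proba(meta_feats[idx_tr])])
fused_te = np.hstack([img_feats[idx_te], meta_stack.predict_proba(meta_feats[idx_te])])

# Final classifier: soft voting over Logistic Regression and Random Forest,
# aggregating their predicted class probabilities.
fusion = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    voting="soft",
)
fusion.fit(fused_tr, y[idx_tr])
print("held-out accuracy:", fusion.score(fused_te, y[idx_te]))
```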
Published
2025-08-02
How to Cite
Abd El Aziz Omran, A. S., Mohamed El-Gazzar, M., & Monir Saeid, M. (2025). A Hybrid Fusion Approach for Skin Cancer Detection using Deep Learning on Clinical Images and Machine Learning on Patient Metadata. Statistics, Optimization & Information Computing. https://doi.org/10.19139/soic-2310-5070-2811
Section
Research Articles