Tabular-to-Image Encoding Methods for Melanoma Detection: A Proof-of-Concept

Deep Learning (DL) models have demonstrated strong performance in dermatological applications, particularly when trained on dermoscopic images. In contrast, tabular clinical data—such as patient metadata and lesion-level descriptors—are difficult to integrate into DL-based pipelines due to their heterogeneous, non-spatial, and often low-dimensional nature. As a result, these data are commonly handled using separate classical machine learning (ML) models.
In this work, we present a proof-of-concept study investigating whether dermatological tabular data can be transformed into two-dimensional image representations to enable convolutional neural network (CNN)-based learning. To this end, we employ the Low Mixed-Image Generator for Tabular Data (LM-IGTD), a framework designed to map low-dimensional, heterogeneous tabular data to two-dimensional images through type-aware encoding and controlled feature augmentation. Using this approach, we encode low-dimensional clinical metadata, high-dimensional lesion-level statistical features extracted from dermoscopic images, and their feature-level fusion into grayscale image representations. These images serve as input to CNNs, whose performance is compared with that of classical ML models trained directly on the tabular data. Experiments on the Derm7pt and PH2 datasets show that traditional ML models generally achieve the highest Area Under the Curve (AUC) values, while LM-IGTD-based representations deliver comparable performance and enable the use of CNNs on the structured clinical data common in dermatology.
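The core idea can be sketched in a few lines: normalize each sample's feature vector and arrange it on a pixel grid so that a standard CNN can consume it. The Python sketch below is only illustrative; it uses a naive reshape-based encoding and a stand-in scikit-learn dataset rather than LM-IGTD's type-aware pixel assignment or the Derm7pt/PH2 data, and the toy CNN, training settings, and random-forest baseline are all assumed choices, not the study's configuration.

```python
# Minimal sketch of tabular-to-image encoding (NOT the LM-IGTD implementation):
# each normalized feature vector is reshaped into a zero-padded square
# grayscale image, then a small CNN is compared against a classical ML
# baseline via AUC. Dataset and model choices are placeholders.
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import load_breast_cancer  # stand-in tabular dataset
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

def tabular_to_grayscale(X):
    """Map a (n_samples, n_features) matrix to (n, 1, side, side) images,
    zero-padding the tail of each feature vector to fill the square grid."""
    n, d = X.shape
    side = int(np.ceil(np.sqrt(d)))
    imgs = np.zeros((n, side * side), dtype=np.float32)
    imgs[:, :d] = X
    return imgs.reshape(n, 1, side, side)  # NCHW layout for the CNN

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = MinMaxScaler().fit(X_tr)          # scale features to [0, 1] on train
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Classical ML baseline trained directly on the tabular data.
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("RF  AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))

# Tiny CNN trained on the image-encoded version of the same data.
side = int(np.ceil(np.sqrt(X_tr.shape[1])))
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * side * side, 1),
)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
xb = torch.from_numpy(tabular_to_grayscale(X_tr))
yb = torch.from_numpy(y_tr.astype(np.float32)).unsqueeze(1)
for _ in range(200):                       # brief full-batch training loop
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(cnn(xb), yb)
    loss.backward()
    opt.step()
with torch.no_grad():
    scores = torch.sigmoid(cnn(torch.from_numpy(tabular_to_grayscale(X_te))))
print("CNN AUC:", roc_auc_score(y_te, scores.numpy().ravel()))
```

The reshape here is deliberately simplistic: neighboring pixels carry no similarity structure, which is precisely the gap that IGTD-style methods address by optimizing the feature-to-pixel assignment before training the CNN.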
