Geometry-Aware Deep Congruence Networks for Manifold Learning in Cross-Subject Motor Imagery

arXiv:2511.18940v2 Announce Type: replace-cross
Abstract: Cross-subject motor-imagery decoding remains a major challenge in EEG-based brain-computer interfaces. To mitigate strong inter-subject variability, recent work has emphasized manifold-based approaches operating on covariance representations. Yet dispersion scaling and orientation alignment remain largely unaddressed in existing methods. In this paper, we address both issues through congruence transforms and introduce three complementary geometry-aware models: (i) Discriminative Congruence Transform (DCT), (ii) Deep Linear DCT (DLDCT), and (iii) Deep DCT-UNet (DDCT-UNet). These models are evaluated both as pre-alignment modules for downstream classifiers and as end-to-end discriminative systems trained via cross-entropy backpropagation with a custom logistic-regression head. Across challenging motor-imagery benchmarks, the proposed framework improves transductive cross-subject accuracy by 2-3%, demonstrating the value of geometry-aware congruence learning.
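To make the core operation concrete: a congruence transform maps a covariance matrix C to W^T C W, which can rescale dispersion and rotate orientation while preserving symmetry and positive definiteness. The paper's DCT, DLDCT, and DDCT-UNet architectures are not detailed in this abstract, so the sketch below is only a generic, illustrative learnable congruence layer under assumed shapes and names (CongruenceLayer, n_channels), not the authors' implementation.

```python
# Illustrative sketch only: a learnable congruence transform C -> W^T C W
# on batches of SPD covariance matrices, with hypothetical names/shapes.
import torch
import torch.nn as nn


class CongruenceLayer(nn.Module):
    """Applies a learnable congruence transform W^T C W to SPD inputs."""

    def __init__(self, n_channels: int):
        super().__init__()
        # Initialize W near the identity so the transform starts close to a no-op.
        self.W = nn.Parameter(
            torch.eye(n_channels) + 0.01 * torch.randn(n_channels, n_channels)
        )

    def forward(self, C: torch.Tensor) -> torch.Tensor:
        # C: (batch, n_channels, n_channels) SPD covariance matrices.
        # Broadcasting applies the same W to every matrix in the batch.
        return self.W.transpose(0, 1) @ C @ self.W


# Toy usage: covariances from random "EEG" trials (8 channels, 256 samples).
trials = torch.randn(4, 8, 256)
covs = trials @ trials.transpose(1, 2) / trials.shape[-1]
aligned = CongruenceLayer(8)(covs)
print(aligned.shape)  # torch.Size([4, 8, 8])
```

Such a layer could in principle be used either as a pre-alignment step before a downstream Riemannian classifier or trained end-to-end with a classification head, mirroring the two evaluation settings described in the abstract.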
