Federated Learning, Mobile Emotion Recognition, and Client-Side Data Quality: A Survey and Research Agenda
The combination of federated learning (FL), mobile edge computing, and facial emotion recognition (FER) promises privacy-preserving affective computing on personal devices. Instead of uploading raw images to the cloud, models are trained collaboratively across distributed clients while inference increasingly happens on-device. However, when systems move from carefully curated research datasets to user-generated mobile data, issues such as label noise, inconsistent annotations, and heterogeneous client data quality become central bottlenecks. These factors affect FL convergence, generalization, and downstream trust in emotion-aware applications. This survey consolidates literature from four major strands: (i) FER from classical handcrafted approaches to deep and mobile models, (ii) FL foundations and its use in vision and affective computing, (iii) robust FL under label noise and unreliable clients, and (iv) crowdsourcing and AI-assisted data-labeling quality assurance. Building on these strands, we argue that client-side data validation pipelines on mobile devices are a promising but underexplored direction. We outline an architectural blueprint for such pipelines, highlight open challenges around human–AI interaction, multimodal context, privacy, and fairness in FL-based FER systems, and provide a comparative analysis of past, present, and future directions in this domain.
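To make the collaborative-training setting concrete, the sketch below shows the core aggregation step of federated averaging (FedAvg), the canonical FL algorithm the abstract alludes to. This is an illustrative toy example, not a method proposed by the survey: the function name `fedavg`, the parameter vectors, and the client sizes are all hypothetical, and real deployments aggregate full model state dictionaries over a secure channel.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_weights: list of parameter vectors (np.ndarray), one per client,
        each the result of local training on that client's private data.
    client_sizes: number of local training samples per client, used as
        aggregation weights so larger clients contribute proportionally more.
    """
    total = sum(client_sizes)
    # Server never sees raw data -- only these locally trained parameters.
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical round: three mobile clients with unequal local dataset sizes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
global_params = fedavg(clients, sizes)  # -> array([4., 5.])
```

Heterogeneous client data quality, as discussed in the abstract, matters precisely because a client with noisy labels but many samples receives a large aggregation weight, degrading the global model.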