TransTCNet: Transformer-Based Temporal-Contextual Network for Low-Latency Typing Interfaces on Edge Devices

A distinct typing interface using surface electromyography (sEMG) can enable silent, hands-free typing by mapping muscle activity to specific keystrokes. Character-level recognition is more challenging than recognizing coarse gestures because it demands sensitivity to subtle temporal variations and to the blending of muscle dynamics. Temporal features are vital: when typing, people differ in irrelevant ways in how they press keys, and concurrent body movements add further variability. This paper proposes TransTCNet, a two-stage deep neural network that combines a causal convolutional layer, which learns local features, with a transformer-based component, which learns long-range temporal interactions. We evaluated the network on a publicly available 26-class typing sEMG dataset acquired from 19 individuals. The model achieved a validation accuracy of 96.53%, exceeding the baseline models, and generalized across participants, with AUC values above 0.994 for all classes. The model was also reliable, displaying high prediction confidence (>0.9), and reached a training accuracy of 97.86%, supporting real-time filtering decisions. Its efficient architecture and low inference cost make TransTCNet suitable for wearable and edge devices, and its ability to consistently decode fine-grained neuromuscular signals across users makes it a strong candidate for real-time applications such as adaptive user interfaces, virtual and augmented reality, prosthetic control, and communication systems.
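The two-stage design described above (causal convolutions for local features, followed by a transformer for long-range temporal interactions) can be sketched in PyTorch as below. This is a minimal illustration, not the authors' implementation: the channel counts, kernel sizes, number of heads/layers, and pooling strategy are assumptions chosen for clarity.

```python
# Hedged sketch of a TransTCNet-style two-stage model.
# Assumed (not from the paper): 8 sEMG channels, d_model=64,
# kernel sizes, 2 encoder layers, and mean pooling over time.
import torch
import torch.nn as nn

class CausalConv1d(nn.Conv1d):
    """1D convolution with left-only padding, so each output depends
    only on present and past samples (causal)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation)
        self.left_pad = (kernel_size - 1) * dilation

    def forward(self, x):
        x = nn.functional.pad(x, (self.left_pad, 0))  # pad the past only
        return super().forward(x)

class TransTCNetSketch(nn.Module):
    def __init__(self, n_channels=8, n_classes=26, d_model=64):
        super().__init__()
        # Stage 1: causal convolutions extract local temporal features.
        self.conv = nn.Sequential(
            CausalConv1d(n_channels, d_model, kernel_size=5),
            nn.ReLU(),
            CausalConv1d(d_model, d_model, kernel_size=5, dilation=2),
            nn.ReLU(),
        )
        # Stage 2: transformer encoder models long-range dependencies.
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.conv(x)                   # (batch, d_model, time)
        h = h.transpose(1, 2)              # (batch, time, d_model)
        h = self.encoder(h)
        return self.head(h.mean(dim=1))    # logits pooled over time

# Usage: a batch of 4 sEMG windows, 8 channels, 200 samples each.
model = TransTCNetSketch()
logits = model(torch.randn(4, 8, 200))
print(logits.shape)  # torch.Size([4, 26])
```

Class probabilities for the 26 characters would follow from a softmax over the logits; the paper's per-prediction confidence threshold (>0.9) could then gate which predictions are emitted in real time.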
