MRCPs-and-ERS/D-Oscillations-Driven Deep Learning Models for Decoding Unimanual and Bimanual Movements


Bibliographic Details
Main Authors: Jiarong Wang (Author), Luzheng Bi (Author), Aberham Genetu Feleke (Author), Weijie Fei (Author)
Format: Book
Published: IEEE, 2023.
Subjects:
Online Access: Connect to this object online.
Description
Summary: A motor brain-computer interface (BCI) aims to restore or compensate for central nervous system functionality. Within motor-BCIs, motor execution (ME), which relies on patients' residual or intact movement functions, is a more intuitive and natural paradigm. Based on the ME paradigm, voluntary hand movement intentions can be decoded from electroencephalography (EEG) signals. Numerous studies have investigated EEG-based unimanual movement decoding, and some have explored bimanual movement decoding, since bimanual coordination is important in daily-life assistance and bilateral neurorehabilitation therapy. However, multi-class classification of unimanual and bimanual movements shows weak performance. To address this problem, we propose a neurophysiological-signatures-driven deep learning model that, for the first time, exploits movement-related cortical potentials (MRCPs) and event-related synchronization/desynchronization (ERS/D) oscillations, inspired by the finding that brain signals encode motor-related information in both evoked potentials and oscillatory components during ME. The proposed model consists of a feature representation module, an attention-based channel-weighting module, and a shallow convolutional neural network module. Results show that the proposed model outperforms the baseline methods, achieving a six-class classification accuracy of 80.3% for unimanual and bimanual movements. Moreover, each feature module contributes to the overall performance. This work is the first to fuse the MRCPs and ERS/D oscillations of ME in deep learning to enhance multi-class decoding of unimanual and bimanual movements, and it can facilitate the neural decoding of unimanual and bimanual movements for neurorehabilitation and assistance.
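The abstract describes a pipeline of a feature representation module (MRCP and ERS/D components) followed by attention-based channel weighting before a shallow CNN. The paper's actual implementation is not given here; the following is a minimal NumPy sketch of that idea under loose assumptions: a moving-average stand-in for MRCP low-pass filtering, squared high-frequency residual as an ERS/D band-power proxy, and a softmax-over-channel-energy stand-in for the attention module. All function names and parameter choices are hypothetical.

```python
import numpy as np

def extract_features(eeg):
    """Hypothetical feature representation module.

    eeg: array of shape (channels, samples).
    Returns a crude MRCP-like low-frequency component and an
    ERS/D-like band-power proxy (squared high-frequency residual).
    """
    # Placeholder low-pass filter: 25-sample moving average per channel.
    kernel = np.ones(25) / 25.0
    mrcp = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 1, eeg
    )
    # Oscillatory power proxy: energy of what the low-pass removed.
    osc = (eeg - mrcp) ** 2
    return mrcp, osc

def channel_attention(features):
    """Attention-style channel weighting (assumed form, not the paper's):
    softmax over per-channel mean energy, applied as channel weights."""
    energy = features.mean(axis=1)            # one score per channel
    w = np.exp(energy - energy.max())         # numerically stable softmax
    w /= w.sum()
    return features * w[:, None]              # reweight each channel

# Toy example: 8 channels x 500 samples of synthetic EEG.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 500))
mrcp, osc = extract_features(eeg)
# Stack both feature streams before the (omitted) shallow CNN stage.
weighted = channel_attention(np.concatenate([mrcp, osc], axis=0))
print(weighted.shape)  # (16, 500)
```

In the paper the weighted feature maps would then feed a shallow convolutional network for six-class classification; that stage is omitted here since its architecture is not specified in the record.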
ISSN: 1558-0210
DOI: 10.1109/TNSRE.2023.3245617