Please use this identifier to cite or link to this item: https://accedacris.ulpgc.es/jspui/handle/10553/133373
Title: A Multi-view Spatio-Temporal EEG Feature Learning for Cross-Subject Motor Imagery Classification
Authors: Hameed, Adel
Fourati, Rahma
Ammar, Boudour
Sanchez-Medina, Javier J. 
Ltifi, Hela
UNESCO Classification: 3314 Medical technology
Keywords: Electroencephalography
Focal Modulation Networks
Motor Imagery
Multi-View Representation
Issue Date: 2024
Journal: Communications in Computer and Information Science 
Conference: 16th International Conference on Computational Collective Intelligence (ICCCI 2024)
Abstract: This study introduces MV-FocalNet, a novel approach for classifying motor imagery from electroencephalography (EEG) signals. MV-FocalNet leverages multi-view representation learning and spatial-temporal modeling to extract diverse properties from multiple frequency bands of EEG data. By integrating information from multiple perspectives, MV-FocalNet captures both local and global features, significantly enhancing the accuracy of motor imagery task classification. Experimental results on two EEG datasets, 2a and 2b, show that MV-FocalNet accurately categorizes various motor movements, including left- and right-hand activities, foot motions, and tongue actions. The proposed method outperforms existing state-of-the-art models, achieving substantial improvements in classification accuracy.
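The following is a minimal illustrative sketch, not the authors' implementation, of how the multi-view input described in the abstract could be constructed: each EEG trial is band-pass filtered into several canonical frequency bands, and the band-limited copies are stacked as views for a downstream spatio-temporal network such as MV-FocalNet. The band edges, the 250 Hz sampling rate, the 22-channel trial shape, and the helper names bandpass and multi_view are assumptions made for demonstration only.

# Hedged sketch of a multi-view (per-frequency-band) EEG input, in Python.
# Band edges, sampling rate, and tensor layout are illustrative assumptions,
# not taken from the paper.
import numpy as np
from scipy.signal import butter, filtfilt

# Canonical EEG rhythms often used as separate "views" (assumed band edges, Hz).
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def bandpass(signal, low, high, fs, order=4):
    # Zero-phase Butterworth band-pass along the time (last) axis.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal, axis=-1)

def multi_view(trial, fs=250):
    # Turn one EEG trial (channels x samples) into a stack of band-limited
    # views with shape (n_views x channels x samples).
    return np.stack([bandpass(trial, lo, hi, fs) for lo, hi in BANDS.values()])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trial = rng.standard_normal((22, 1000))   # e.g. 22 channels, 4 s at 250 Hz
    views = multi_view(trial)
    print(views.shape)                        # (4, 22, 1000)

Each view would then be passed to a shared spatio-temporal encoder (in the paper, focal modulation blocks) before the per-band features are fused for classification; that fusion step is outside the scope of this sketch.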
URI: https://accedacris.ulpgc.es/handle/10553/133373
ISBN: 978-3-031-70258-7
ISSN: 1865-0929
DOI: 10.1007/978-3-031-70259-4_30
Source: Communications in Computer and Information Science [ISSN 1865-0929], v. 2166 CCIS, p. 393-405 (January 2024)
Appears in Collections: Actas de congresos (Conference proceedings)
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.