Persistent identifier for citing or linking this item:
http://hdl.handle.net/10553/133373
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Hameed, Adel | en_US |
dc.contributor.author | Fourati, Rahma | en_US |
dc.contributor.author | Ammar, Boudour | en_US |
dc.contributor.author | Sanchez-Medina, Javier J. | en_US |
dc.contributor.author | Ltifi, Hela | en_US |
dc.date.accessioned | 2024-10-03T08:00:20Z | - |
dc.date.available | 2024-10-03T08:00:20Z | - |
dc.date.issued | 2024 | en_US |
dc.identifier.isbn | 9783031702587 | en_US |
dc.identifier.issn | 1865-0929 | en_US |
dc.identifier.other | Scopus | - |
dc.identifier.uri | http://hdl.handle.net/10553/133373 | - |
dc.description.abstract | This study introduces MV-FocalNet, a novel approach for classifying motor imagery from electroencephalography (EEG) signals. MV-FocalNet leverages multi-view representation learning and spatial-temporal modeling to extract diverse properties from multiple frequency bands of EEG data. By integrating information from multiple perspectives, MV-FocalNet captures both local and global features, significantly enhancing the accuracy of motor imagery task classification. Experimental results on two EEG datasets, 2a and 2b, show that MV-FocalNet accurately categorizes various motor movements, including left- and right-hand activities, foot motions, and tongue actions. The proposed method outperforms existing state-of-the-art models, achieving substantial improvements in classification accuracy. | en_US |
dc.language | eng | en_US |
dc.relation.ispartof | Communications in Computer and Information Science | en_US |
dc.source | Communications in Computer and Information Science [ISSN 1865-0929], v. 2166 CCIS, p. 393-405 (January 2024) | en_US |
dc.subject | 3314 Medical technology | en_US |
dc.subject.other | Electroencephalography | en_US |
dc.subject.other | Focal Modulation Networks | en_US |
dc.subject.other | Motor Imagery | en_US |
dc.subject.other | Multi-View Representation | en_US |
dc.title | A Multi-view Spatio-Temporal EEG Feature Learning for Cross-Subject Motor Imagery Classification | en_US |
dc.type | info:eu-repo/semantics/conferenceObject | en_US |
dc.type | ConferenceObject | en_US |
dc.relation.conference | 16th International Conference on Computational Collective Intelligence (ICCCI 2024) | en_US |
dc.identifier.doi | 10.1007/978-3-031-70259-4_30 | en_US |
dc.identifier.scopus | 85204560281 | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.authorscopusid | 58559836000 | - |
dc.contributor.authorscopusid | 44961198800 | - |
dc.contributor.authorscopusid | 23974208100 | - |
dc.contributor.authorscopusid | 26421466600 | - |
dc.contributor.authorscopusid | 35092982000 | - |
dc.identifier.eissn | 1865-0937 | - |
dc.description.lastpage | 405 | en_US |
dc.description.firstpage | 393 | en_US |
dc.relation.volume | 2166 CCIS | en_US |
dc.investigacion | Engineering and Architecture | en_US |
dc.type2 | Conference proceedings | en_US |
dc.utils.revision | Yes | en_US |
dc.date.coverdate | January 2024 | en_US |
dc.identifier.conferenceid | events155448 | - |
dc.identifier.ulpgc | Yes | en_US |
dc.contributor.buulpgc | BU-INF | en_US |
dc.description.sjr | 0.203 | |
dc.description.sjrq | Q4 | |
dc.description.miaricds | 9.6 | |
item.grantfulltext | none | - |
item.fulltext | No full text | - |
crisitem.author.dept | GIR IUCES: Centro de Innovación para la Empresa, el Turismo, la Internacionalización y la Sostenibilidad | - |
crisitem.author.dept | IU de Cibernética, Empresa y Sociedad (IUCES) | - |
crisitem.author.dept | Departamento de Informática y Sistemas | - |
crisitem.author.orcid | 0000-0003-2530-3182 | - |
crisitem.author.parentorg | IU de Cibernética, Empresa y Sociedad (IUCES) | - |
crisitem.author.fullName | Sánchez Medina, Javier Jesús | - |
Collection: | Conference proceedings
Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.
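
The abstract describes decomposing EEG into multiple frequency-band "views" before spatial-temporal feature learning. As a rough, hedged illustration of that multi-view preprocessing step only (the paper's actual MV-FocalNet architecture, band choices, and parameters are not reproduced here; all names and values below are assumptions), a minimal sketch in Python:

```python
# Minimal sketch of a multi-view EEG decomposition, as hinted at in the
# abstract: band-pass filter each trial into several frequency-band views
# that a downstream classifier would consume. Band edges, sampling rate,
# and filter order are illustrative assumptions, not the paper's settings.
import numpy as np
from scipy.signal import butter, filtfilt

# Hypothetical bands (Hz) commonly used in motor-imagery work.
BANDS = [(4, 8), (8, 13), (13, 30)]  # theta, alpha/mu, beta

def multi_view_eeg(eeg: np.ndarray, fs: float = 250.0, order: int = 4) -> np.ndarray:
    """Band-pass one EEG trial (channels x samples) into one view per band.

    Returns an array of shape (n_views, channels, samples).
    """
    views = []
    for low, high in BANDS:
        b, a = butter(order, [low, high], btype="bandpass", fs=fs)
        views.append(filtfilt(b, a, eeg, axis=-1))  # zero-phase filtering
    return np.stack(views)

# Example: one 4-second, 22-channel trial at 250 Hz (layout only, random data).
trial = np.random.randn(22, 1000)
print(multi_view_eeg(trial).shape)  # (3, 22, 1000)
```

Stacking the filtered copies along a new leading axis gives each frequency band its own channel-by-time view, which is one common way to feed "multiple perspectives" of the same trial to a spatial-temporal network.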