Persistent identifier to cite or link this item:
http://hdl.handle.net/10553/134753
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chaudhari, Aayushi | en_US |
dc.contributor.author | Bhatt, Chintan | en_US |
dc.contributor.author | Krishna, Achyut | en_US |
dc.contributor.author | Travieso González, Carlos Manuel | en_US |
dc.date.accessioned | 2024-11-19T19:34:26Z | - |
dc.date.available | 2024-11-19T19:34:26Z | - |
dc.date.issued | 2023 | en_US |
dc.identifier.issn | 2079-9292 | en_US |
dc.identifier.uri | http://hdl.handle.net/10553/134753 | - |
dc.description.abstract | Emotion recognition is a challenging research field due to its complexity: individual differences in cognitive–emotional cues are expressed in a wide variety of ways, including language, facial expressions, and speech. Using video as the input provides a wealth of data for analyzing human emotions. In this research, we combine text, audio (speech), and visual modalities using features derived from separately pretrained self-supervised learning models. The fusion of features and representations is the biggest challenge in multimodal emotion classification research. Because of the large dimensionality of self-supervised learning features, we present a novel transformer- and attention-based fusion method for incorporating multimodal self-supervised learning features, which achieved an accuracy of 86.40% for multimodal emotion classification. | en_US |
dc.language | eng | en_US |
dc.relation.ispartof | Electronics (Switzerland) | en_US |
dc.source | Electronics (Switzerland) [ISSN 2079-9292], v. 12 (2), 288, (January 2023) | en_US |
dc.subject | 120325 Diseño de sistemas sensores | en_US |
dc.subject | 610603 Emoción | en_US |
dc.subject.other | Computer vision | en_US |
dc.subject.other | Contextual emotion recognition | en_US |
dc.subject.other | Depth of emotional dimensionality | en_US |
dc.subject.other | Inter-modality attention transformer | en_US |
dc.subject.other | Multimodality | en_US |
dc.subject.other | Real-time application | en_US |
dc.subject.other | Self-attention transformer | en_US |
dc.title | Facial Emotion Recognition with Inter-Modality-Attention-Transformer-Based Self-Supervised Learning | en_US |
dc.type | info:eu-repo/semantics/article | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.3390/electronics12020288 | en_US |
dc.identifier.scopus | 2-s2.0-85146763720 | - |
dc.contributor.orcid | 0000-0003-0342-8806 | - |
dc.contributor.orcid | 0000-0002-0423-0159 | - |
dc.contributor.orcid | 0000-0003-4504-9636 | - |
dc.contributor.orcid | 0000-0002-4621-2768 | - |
dc.identifier.issue | 2 | - |
dc.relation.volume | 12 | en_US |
dc.investigacion | Engineering and Architecture | en_US |
dc.type2 | Article | en_US |
dc.description.numberofpages | 15 | en_US |
dc.utils.revision | Yes | en_US |
dc.date.coverdate | January 2023 | en_US |
dc.identifier.ulpgc | Yes | en_US |
dc.contributor.buulpgc | BU-TEL | en_US |
dc.description.sjr | 0.644 | |
dc.description.jcr | 2.9 | |
dc.description.sjrq | Q2 | |
dc.description.jcrq | Q2 | |
dc.description.scie | SCIE | |
dc.description.miaricds | 10.5 | |
item.grantfulltext | open | - |
item.fulltext | With full text | - |
crisitem.author.dept | GIR IDeTIC: División de Procesado Digital de Señales | - |
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | - |
crisitem.author.dept | Departamento de Señales y Comunicaciones | - |
crisitem.author.orcid | 0000-0002-4621-2768 | - |
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | - |
crisitem.author.fullName | Travieso González, Carlos Manuel | - |
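The abstract above describes a transformer- and attention-based fusion of text, audio, and visual self-supervised features. As a purely illustrative sketch (not the paper's implementation; all shapes, names, and the choice of text as the query modality are assumptions), inter-modality fusion via scaled dot-product attention can be outlined as follows:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Standard scaled dot-product attention.
    q: (n_q, d) queries; k, v: (n_k, d) keys/values."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v

def fuse_modalities(text, audio, visual):
    """Inter-modality attention sketch: text features act as queries
    over the concatenated audio+visual features, and the attended
    result is concatenated back onto the text representation."""
    kv = np.concatenate([audio, visual], axis=0)
    attended = scaled_dot_product_attention(text, kv, kv)
    return np.concatenate([text, attended], axis=-1)  # (n_text, 2*d)

# Toy example with random features of dimension d = 8.
rng = np.random.default_rng(0)
d = 8
fused = fuse_modalities(rng.normal(size=(4, d)),   # 4 text tokens
                        rng.normal(size=(6, d)),   # 6 audio frames
                        rng.normal(size=(5, d)))   # 5 video frames
print(fused.shape)  # (4, 16)
```

In practice the paper's method operates on high-dimensional pretrained self-supervised features and uses learned projections inside transformer blocks; this sketch only shows the attention mechanism that such a fusion relies on.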
Collection: | Articles |
SCOPUS™ citations: 18 (updated 24 Nov 2024)
Web of Science™ citations: 9 (updated 24 Nov 2024)
Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.