Persistent identifier to cite or link to this item: http://hdl.handle.net/10553/134753
DC Field | Value | Language
dc.contributor.author | Chaudhari, Aayushi | en_US
dc.contributor.author | Bhatt, Chintan | en_US
dc.contributor.author | Krishna, Achyut | en_US
dc.contributor.author | Travieso González, Carlos Manuel | en_US
dc.date.accessioned | 2024-11-19T19:34:26Z | -
dc.date.available | 2024-11-19T19:34:26Z | -
dc.date.issued | 2023 | en_US
dc.identifier.issn | 2079-9292 | en_US
dc.identifier.uri | http://hdl.handle.net/10553/134753 | -
dc.description.abstract | Emotion recognition is a challenging research field because individual differences in cognitive–emotional cues are expressed in many ways, including language, facial expressions, and speech. Video input provides a wealth of data for analyzing human emotions. In this research, we combine the text, audio (speech), and visual modalities using features derived from separately pretrained self-supervised learning models. Fusing these features and representations is the central challenge in multimodal emotion classification research. To handle the high dimensionality of self-supervised features, we present a novel transformer- and attention-based fusion method that achieved an accuracy of 86.40% for multimodal emotion classification. (An illustrative sketch of such a fusion module follows the metadata fields below.) | en_US
dc.language | eng | en_US
dc.relation.ispartof | Electronics (Switzerland) | en_US
dc.source | Electronics (Switzerland) [ISSN 2079-9292], v. 12 (2), 288, (January 2023) | en_US
dc.subject | 120325 Sensor systems design | en_US
dc.subject | 610603 Emotion | en_US
dc.subject.other | Computer vision | en_US
dc.subject.other | Contextual emotion recognition | en_US
dc.subject.other | Depth of emotional dimensionality | en_US
dc.subject.other | Inter-modality attention transformer | en_US
dc.subject.other | Multimodality | en_US
dc.subject.other | Real-time application | en_US
dc.subject.other | Self-attention transformer | en_US
dc.title | Facial Emotion Recognition with Inter-Modality-Attention-Transformer-Based Self-Supervised Learning | en_US
dc.type | info:eu-repo/semantics/article | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.3390/electronics12020288 | en_US
dc.identifier.scopus | 2-s2.0-85146763720 | -
dc.contributor.orcid | 0000-0003-0342-8806 | -
dc.contributor.orcid | 0000-0002-0423-0159 | -
dc.contributor.orcid | 0000-0003-4504-9636 | -
dc.contributor.orcid | 0000-0002-4621-2768 | -
dc.identifier.issue | 2 | -
dc.relation.volume | 12 | en_US
dc.investigacion | Engineering and Architecture | en_US
dc.type2 | Article | en_US
dc.description.numberofpages | 15 | en_US
dc.utils.revision |  | en_US
dc.date.coverdate | January 2023 | en_US
dc.identifier.ulpgc |  | en_US
dc.contributor.buulpgc | BU-TEL | en_US
dc.description.sjr | 0.644
dc.description.jcr | 2.9
dc.description.sjrq | Q2
dc.description.jcrq | Q2
dc.description.scie | SCIE
dc.description.miaricds | 10.5
item.grantfulltext | open | -
item.fulltext | With full text | -
crisitem.author.dept | GIR IDeTIC: División de Procesado Digital de Señales | -
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.dept | Departamento de Señales y Comunicaciones | -
crisitem.author.orcid | 0000-0002-4621-2768 | -
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.fullName | Travieso González, Carlos Manuel | -
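
The abstract above describes fusing text, audio, and visual features from separately pretrained self-supervised models via inter-modality attention and self-attention transformers. This record does not include the authors' code, so the following is a minimal PyTorch sketch of that general idea under stated assumptions: the class name, feature dimensions (768-d text/audio and 512-d visual as stand-ins for BERT-, wav2vec2-, and ViT-style encoder outputs), head counts, and seven emotion classes are all illustrative, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class InterModalityAttentionFusion(nn.Module):
    """Hypothetical fusion module; dimensions and layout are assumptions."""

    def __init__(self, text_dim=768, audio_dim=768, visual_dim=512,
                 d_model=256, n_heads=4, n_classes=7):
        super().__init__()
        # Project each modality's self-supervised features to a shared width,
        # taming the high dimensionality the abstract mentions.
        self.proj = nn.ModuleDict({
            "text": nn.Linear(text_dim, d_model),
            "audio": nn.Linear(audio_dim, d_model),
            "visual": nn.Linear(visual_dim, d_model),
        })
        # Inter-modality attention: each modality token attends to the others.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Self-attention transformer layer over the fused three-token sequence.
        self.self_attn = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, text, audio, visual):
        # Inputs: utterance-level pooled features, shape (batch, modality_dim).
        tokens = torch.stack([
            self.proj["text"](text),
            self.proj["audio"](audio),
            self.proj["visual"](visual),
        ], dim=1)                                   # (batch, 3, d_model)
        fused, _ = self.cross_attn(tokens, tokens, tokens)
        fused = self.self_attn(fused)               # (batch, 3, d_model)
        return self.classifier(fused.mean(dim=1))   # (batch, n_classes)

# Toy usage with random tensors standing in for real encoder outputs.
model = InterModalityAttentionFusion()
logits = model(torch.randn(2, 768), torch.randn(2, 768), torch.randn(2, 512))
print(logits.shape)  # torch.Size([2, 7])
```

Projecting each modality into a shared low-dimensional space before attention is one plausible answer to the dimensionality concern raised in the abstract; the actual fusion order, pooling, and hyperparameters are specified in the paper itself (DOI above).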
Collection: Articles
Adobe PDF (3.77 MB)

Scopus™ citations: 18 (updated 24 Nov 2024)

Web of Science™ citations: 9 (updated 24 Nov 2024)

Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.