Persistent identifier for citing or linking this item: http://hdl.handle.net/10553/35406
DC Field | Value | Language
dc.contributor.author | Alonso, Jesús B. | en_US
dc.contributor.author | Cabrera Cruz, Josué Jacob | en_US
dc.contributor.author | Travieso González, Carlos Manuel | en_US
dc.contributor.author | López-de-Ipiña, Karmele | en_US
dc.contributor.author | Sánchez-Medina, Agustín | en_US
dc.contributor.other | Lopez-de-Ipina, Karmele | -
dc.contributor.other | Alonso-Hernandez, Jesus B. | -
dc.date.accessioned | 2018-04-16T14:06:24Z | -
dc.date.available | 2018-04-16T14:06:24Z | -
dc.date.issued | 2017 | en_US
dc.identifier.issn | 0925-2312 | en_US
dc.identifier.uri | http://hdl.handle.net/10553/35406 | -
dc.description.abstract | Speech emotion recognition has great potential in human-computer interaction applications in fields such as psychology, psychiatry and affective computing. The great majority of research on speech emotion recognition has been based on repositories of short sentences recorded under laboratory conditions. In this work, we investigate the use of the Emotional Temperature strategy for continuous tracking in long-term speech samples that contain emotional changes during the speech. Emotional Temperature uses a small set of prosodic and paralinguistic features obtained from a temporal segmentation of the speech signal. The simplicity and compactness of this set, previously validated under laboratory conditions, make it suitable for real conditions, where spontaneous speech is continuous and emotions are expressed at certain moments of the dialogue, producing emotional turns. The strategy is robust, has low computational cost, can detect emotional changes, and improves on the performance of segmentation based on linguistic aspects. The German corpus EMO-DB (Berlin Database of Emotional Speech), the English corpus LDC (Emotional Prosody Speech and Transcripts), the Polish Emotional Speech Database and the RECOLA (Remote Collaborative and Affective Interactions) database are used to validate the continuous-tracking system for emotional speech. Two experimental conditions are analyzed, dependence and independence with respect to language and gender, using acted and spontaneous speech respectively. Under acted conditions the approach obtained accuracies of 67-97%, while under spontaneous conditions, compared with annotations by human judges, it obtained accuracies of 41-50%. Compared with previous studies in continuous emotion recognition, the approach improves on existing results by 9% accuracy on average. The approach therefore offers good performance at low complexity for real-time applications or continuous emotional speech tracking. [An illustrative code sketch of this segmentation-and-scoring flow follows the metadata record below.] | en_US
dc.language | eng | en_US
dc.relation.ispartof | Neurocomputing | en_US
dc.source | Neurocomputing [ISSN 0925-2312], v. 255, p. 17-25 | en_US
dc.subject | 33 Technological sciences | en_US
dc.subject | 530602 Technological innovation | en_US
dc.subject.other | Emotional speech recognition | en_US
dc.subject.other | Pattern recognition | en_US
dc.subject.other | Continuous tracking | en_US
dc.title | Continuous tracking of the emotion temperature | en_US
dc.type | info:eu-repo/semantics/Article | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1016/j.neucom.2016.06.093 | en_US
dc.identifier.scopus | 85017133041 | -
dc.identifier.isi | 000404313500003 | -
dcterms.isPartOf | Neurocomputing | -
dcterms.source | Neurocomputing [ISSN 0925-2312], v. 255, p. 17-25 | -
dc.contributor.authorscopusid | 24774957200 | -
dc.contributor.authorscopusid | 56501436400 | -
dc.contributor.authorscopusid | 6602376272 | -
dc.contributor.authorscopusid | 56263484400 | -
dc.contributor.authorscopusid | 25638866100 | -
dc.identifier.eissn | 1872-8286 | -
dc.description.lastpage | 25 | en_US
dc.description.firstpage | 17 | en_US
dc.relation.volume | 255 | en_US
dc.investigacion | Engineering and Architecture | en_US
dc.type2 | Article | en_US
dc.identifier.wos | WOS:000404313500003 | -
dc.contributor.daisngid | 418703 | -
dc.contributor.daisngid | 4468790 | -
dc.contributor.daisngid | 265761 | -
dc.contributor.daisngid | 1399740 | -
dc.contributor.daisngid | 33577461 | -
dc.contributor.daisngid | 4569672 | -
dc.identifier.investigatorRID | K-4379-2013 | -
dc.identifier.investigatorRID | N-5977-2014 | -
dc.utils.revision |  | en_US
dc.contributor.wosstandard | WOS:Alonso, JB | -
dc.contributor.wosstandard | WOS:Cabrera, J | -
dc.contributor.wosstandard | WOS:Travieso, CM | -
dc.contributor.wosstandard | WOS:Lopez-de-Ipina, K | -
dc.contributor.wosstandard | WOS:Sanchez-Medina, A | -
dc.date.coverdate | September 2017 | en_US
dc.identifier.ulpgc |  | en_US
dc.description.sjr | 1.073
dc.description.jcr | 3.241
dc.description.sjrq | Q1
dc.description.jcrq | Q1
dc.description.scie | SCIE
item.grantfulltext | none | -
item.fulltext | No full text | -
crisitem.author.dept | GIR IDeTIC: División de Procesado Digital de Señales | -
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.dept | Departamento de Señales y Comunicaciones | -
crisitem.author.dept | GIR IDeTIC: División de Procesado Digital de Señales | -
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.dept | Departamento de Señales y Comunicaciones | -
crisitem.author.dept | GIR IUCES: Centro de Innovación para la Empresa, el Turismo, la Internacionalización y la Sostenibilidad | -
crisitem.author.dept | IU de Cibernética, Empresa y Sociedad (IUCES) | -
crisitem.author.dept | Departamento de Economía y Dirección de Empresas | -
crisitem.author.orcid | 0000-0002-7866-585X | -
crisitem.author.orcid | 0000-0002-4621-2768 | -
crisitem.author.orcid | 0000-0002-7569-3556 | -
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.parentorg | IU de Cibernética, Empresa y Sociedad (IUCES) | -
crisitem.author.fullName | Alonso Hernández, Jesús Bernardino | -
crisitem.author.fullName | Cabrera Cruz, Josué Jacob | -
crisitem.author.fullName | Travieso González, Carlos Manuel | -
crisitem.author.fullName | Sánchez Medina, Agustín Jesús | -
Collection: Articles
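The abstract above outlines a pipeline: temporally segment the speech signal, extract a small set of prosodic and paralinguistic features per segment, and score each segment to obtain a continuous "emotional temperature" track. The minimal Python sketch below illustrates only that general flow. The window sizes, the features (log energy and zero-crossing rate) and the sigmoid scoring function are hypothetical stand-ins for illustration; they are not the paper's validated feature set or trained classifier.

```python
# Minimal, illustrative sketch of continuous "emotional temperature" tracking.
# NOTE: features, window sizes and the scoring function are hypothetical
# stand-ins; the published method uses a validated prosodic/paralinguistic
# feature set and a trained classifier.
import numpy as np

def frame_signal(x, sr, win_s=0.5, hop_s=0.25):
    """Split mono signal x (sampling rate sr) into overlapping frames."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    n = 1 + (len(x) - win) // hop
    return np.stack([x[i * hop : i * hop + win] for i in range(n)])

def prosodic_features(frames):
    """Crude per-frame prosodic descriptors: log energy and zero-crossing rate."""
    energy = np.log(np.mean(frames ** 2, axis=1) + 1e-12)
    signs = np.signbit(frames).astype(np.int8)
    zcr = np.mean(np.abs(np.diff(signs, axis=1)), axis=1)
    return np.column_stack([energy, zcr])

def emotional_temperature(feats):
    """Map features to a 0-100 'temperature' per segment (hypothetical scoring:
    z-normalize each feature, average, squash through a sigmoid)."""
    z = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-12)
    return 100.0 / (1.0 + np.exp(-z.mean(axis=1)))

if __name__ == "__main__":
    sr = 16000
    t = np.linspace(0, 10, 10 * sr, endpoint=False)
    # Synthetic stand-in for speech: amplitude rises at t = 5 s,
    # mimicking an emotional turn mid-utterance.
    x = np.sin(2 * np.pi * 220 * t) * (0.2 + 0.8 * (t > 5))
    track = emotional_temperature(prosodic_features(frame_signal(x, sr)))
    print(track.round(1))  # one value per 0.25 s hop: the continuous track
```

Running the demo prints a track whose values jump after the amplitude step at t = 5 s, which is the kind of emotional-turn detection over long-term, continuous speech that the abstract describes.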

SCOPUS citations: 8 (updated 14 Apr 2024)
Web of Science citations: 8 (updated 25 Feb 2024)
Visits: 69 (updated 28 Oct 2023)

Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.