Persistent identifier to cite or link this item: http://hdl.handle.net/10553/55746
DC Field | Value | Language
dc.contributor.author | Alonso Hernández, Jesús Bernardino | en_US
dc.contributor.author | Cabrera, Josué | en_US
dc.contributor.author | Medina Molina, Manuel Martín | en_US
dc.contributor.author | Travieso, Carlos M. | en_US
dc.contributor.other | Alonso-Hernandez, Jesus B. | -
dc.contributor.other | Travieso-Gonzalez, Carlos M. | -
dc.date.accessioned | 2019-06-10T20:43:51Z | -
dc.date.available | 2019-06-10T20:43:51Z | -
dc.date.issued | 2015 | en_US
dc.identifier.issn | 0957-4174 | en_US
dc.identifier.uri | http://hdl.handle.net/10553/55746 | -
dc.description.abstract | Automatic speech emotion recognition has great potential for applications in fields such as psychology, psychiatry and affective computing. Spontaneous speech is continuous, with emotions expressed at particular moments of the dialogue as emotional turns. Real-time applications must therefore be able to detect changes in the speaker's affective state. In this paper, we focus on recognizing activation from speech using a small feature set obtained from a temporal segmentation of the speech signal, across different languages: German, English and Polish. The feature set comprises two prosodic features and four paralinguistic features related to pitch and spectral energy balance. This segmentation and feature set are suitable for real-time emotion applications because they allow changes in the emotional state to be detected with very low processing times. The German corpus EMO-DB (Berlin Database of Emotional Speech), the English corpus LDC (Emotional Prosody Speech and Transcripts database) and the Polish Emotional Speech Database are used to train a Support Vector Machine (SVM) classifier for gender-dependent activation recognition. The results are analyzed separately for each emotion and gender, with accuracies of 94.9%, 88.32% and 90% for the EMO-DB, LDC and Polish databases, respectively. This new approach provides comparable performance with lower complexity than other approaches for real-time applications, making it an appealing alternative that may assist the future development of automatic speech emotion recognition systems with continuous tracking. | en_US
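The pipeline summarized in the abstract (temporal segmentation of the signal, a small prosodic/paralinguistic feature set, and a classifier for activation) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: frame energy and zero-crossing rate are toy stand-ins for the paper's pitch and spectral-energy-balance features, and a nearest-centroid rule replaces the SVM so the example stays dependency-free.

```python
import math
import random

def features(signal, frame=320):
    """Per-utterance feature vector: mean and standard deviation of frame
    energy and of zero-crossing rate, computed over fixed-length frames
    (illustrative stand-ins for the paper's actual feature set)."""
    energies, zcrs = [], []
    for i in range(0, len(signal) - frame + 1, frame):
        f = signal[i:i + frame]
        energies.append(sum(x * x for x in f) / frame)
        zcrs.append(sum(1 for a, b in zip(f, f[1:]) if a * b < 0) / frame)
    def stats(v):
        m = sum(v) / len(v)
        return m, math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
    return [*stats(energies), *stats(zcrs)]

def train_centroids(labelled):
    """Nearest-centroid trainer, a lightweight stand-in for the SVM."""
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in labelled.items()}

def classify(centroids, vec):
    """Assign the label whose centroid is closest in squared distance."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(centroids[lab], vec)))

# Synthetic "utterances": high activation modelled as a louder,
# higher-pitched tone; low activation as a quieter, lower-pitched one.
random.seed(0)
def utterance(amp, freq, sr=16000, n=3200):
    return [amp * math.sin(2 * math.pi * freq * t / sr)
            + random.gauss(0, 0.01) for t in range(n)]

data = {"high": [features(utterance(1.0, 300)) for _ in range(5)],
        "low":  [features(utterance(0.2, 120)) for _ in range(5)]}
centroids = train_centroids(data)
print(classify(centroids, features(utterance(0.9, 280))))  # prints "high"
```

Because the whole feature vector is only a handful of statistics per utterance, classification is cheap enough for the continuous, real-time tracking the abstract emphasizes; in the paper itself, this role is played by the six prosodic/paralinguistic features and the SVM.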
dc.language | eng | en_US
dc.relation.ispartof | Expert Systems with Applications | en_US
dc.source | Expert Systems with Applications [ISSN 0957-4174], v. 42 (24), p. 9554-9564 | en_US
dc.subject | 3307 Electronic technology | en_US
dc.subject.other | Emotional speech recognition | en_US
dc.subject.other | Pattern recognition | en_US
dc.subject.other | Emotional intensity | en_US
dc.subject.other | Emotional temperature | en_US
dc.title | New approach in quantification of emotional intensity from the speech signal: emotional temperature | en_US
dc.type | info:eu-repo/semantics/article | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1016/j.eswa.2015.07.062 | en_US
dc.identifier.scopus | 2-s2.0-84942364867 | -
dc.identifier.isi | 000362857500015 | -
dc.contributor.orcid | #NODATA# | -
dc.contributor.orcid | #NODATA# | -
dc.contributor.orcid | #NODATA# | -
dc.contributor.orcid | #NODATA# | -
dcterms.isPartOf | Expert Systems With Applications | -
dcterms.source | Expert Systems With Applications [ISSN 0957-4174], v. 42 (24), p. 9554-9564 | -
dc.contributor.authorscopusid | 24774957200 | -
dc.contributor.authorscopusid | 56501436400 | -
dc.contributor.authorscopusid | 56797487500 | -
dc.contributor.authorscopusid | 6602376272 | -
dc.description.lastpage | 9564 | -
dc.identifier.issue | 24 | -
dc.description.firstpage | 9554 | -
dc.relation.volume | 42 | -
dc.investigacion | Engineering and Architecture | en_US
dc.type2 | Article | en_US
dc.identifier.wos | WOS:000362857500015 | -
dc.contributor.daisngid | 418703 | -
dc.contributor.daisngid | 4468790 | -
dc.contributor.daisngid | 2742483 | -
dc.contributor.daisngid | 265761 | -
dc.identifier.investigatorRID | N-5977-2014 | -
dc.identifier.investigatorRID | No ID | -
dc.identifier.ulpgc |  | es
dc.description.sjr | 1,561 |
dc.description.jcr | 2,981 |
dc.description.sjrq | Q1 |
dc.description.jcrq | Q1 |
dc.description.scie | SCIE |
item.grantfulltext | none | -
item.fulltext | No full text | -
crisitem.author.dept | GIR IDeTIC: División de Procesado Digital de Señales | -
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.dept | Departamento de Señales y Comunicaciones | -
crisitem.author.dept | Departamento de Señales y Comunicaciones | -
crisitem.author.dept | GIR IDeTIC: División de Procesado Digital de Señales | -
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.dept | Departamento de Señales y Comunicaciones | -
crisitem.author.orcid | 0000-0002-7866-585X | -
crisitem.author.orcid | 0000-0001-5961-3782 | -
crisitem.author.orcid | 0000-0002-4621-2768 | -
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.fullName | Alonso Hernández, Jesús Bernardino | -
crisitem.author.fullName | Medina Molina, Manuel Martín | -
crisitem.author.fullName | Travieso González, Carlos Manuel | -
Collection: Articles

SCOPUS™ citations: 60 (updated 10 Nov 2024)

Web of Science™ citations: 53 (updated 10 Nov 2024)

Visits: 97 (updated 4 May 2024)

Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.