Persistent identifier to cite or link this item: http://hdl.handle.net/10553/53701
DC Field	Value	Language
dc.contributor.author	Llombart, Jorge	en_US
dc.contributor.author	Miguel, Antonio	en_US
dc.contributor.author	Lleida, Eduardo	en_US
dc.contributor.other	Hernandez-Perez, Eduardo	-
dc.contributor.other	Lleida, Eduardo	-
dc.contributor.other	Miguel, Antonio	-
dc.contributor.other	Ortega, Alfonso	-
dc.date.accessioned	2019-02-04T17:51:14Z	-
dc.date.available	2019-02-04T17:51:14Z	-
dc.date.issued	2014	en_US
dc.identifier.isbn	978-3-319-13622-6	en_US
dc.identifier.issn	0302-9743	en_US
dc.identifier.uri	http://hdl.handle.net/10553/53701	-
dc.description.abstract	The speech signal carries a great amount of information, yet current speech recognizers do not exploit all of it. In this paper, articulatory information is extracted from speech and fused with a standard acoustic model to obtain a hybrid acoustic model that improves speech recognition. The paper also studies the best input signal for the system, in terms of the type of speech features and the time resolution, in order to obtain a better articulatory information extractor. This information is then fused with a standard neural-network acoustic model to perform speech recognition, achieving better results.	en_US
dc.language	eng	en_US
dc.publisher	Springer	en_US
dc.relation.ispartof	Lecture Notes in Computer Science	en_US
dc.source	Advances in Speech and Language Technologies for Iberian Languages. Lecture Notes in Computer Science, v. 8854 LNCS, p. 138-147	en_US
dc.subject	220990 Digital processing. Images	en_US
dc.subject.other	Articulatory features	en_US
dc.subject.other	Neural network	en_US
dc.subject.other	Hybrid models	en_US
dc.title	Articulatory Feature Extraction from Voice and Their Impact on Hybrid Acoustic Models	en_US
dc.type	info:eu-repo/semantics/bookPart	en_US
dc.type	Book part	en_US
dc.identifier.doi	10.1007/978-3-319-13623-3_15	en_US
dc.identifier.isi	000360168400015	-
dcterms.isPartOf	Advances In Speech And Language Technologies For Iberian Languages, Iberspeech 2014	-
dcterms.source	Advances In Speech And Language Technologies For Iberian Languages, Iberspeech 2014 [ISSN 0302-9743], v. 8854, p. 138-147	-
dc.description.lastpage	147	en_US
dc.description.firstpage	138	en_US
dc.relation.volume	8854 LNCS	en_US
dc.investigacion	Engineering and Architecture	en_US
dc.type2	Book chapter	en_US
dc.identifier.wos	WOS:000360168400015	-
dc.contributor.daisngid	10719464	-
dc.contributor.daisngid	187210	-
dc.contributor.daisngid	382172	-
dc.identifier.investigatorRID	L-3413-2017	-
dc.identifier.investigatorRID	K-8974-2014	-
dc.identifier.investigatorRID	B-6044-2017	-
dc.identifier.investigatorRID	J-6280-2014	-
dc.identifier.eisbn	978-3-319-13623-3	-
dc.utils.revision	No	en_US
dc.identifier.supplement	0302-9743	-
dc.identifier.ulpgc	No	en_US
dc.identifier.ulpgc	No	en_US
dc.identifier.ulpgc	No	en_US
dc.identifier.ulpgc	No	en_US
dc.description.sjr	0.325	-
dc.description.sjrq	Q3	-
dc.description.spiq	Q1	-
item.grantfulltext	none	-
item.fulltext	No full text	-
Collection: Book chapter
Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.