Persistent identifier to cite or link this item: http://hdl.handle.net/10553/70579
DC Field: Value (Language)
dc.contributor.author: Marras, Mirko (en_US)
dc.contributor.author: Marín-Reyes, Pedro A. (en_US)
dc.contributor.author: Lorenzo-Navarro, Javier (en_US)
dc.contributor.author: Castrillón-Santana, Modesto (en_US)
dc.contributor.author: Fenu, Gianni (en_US)
dc.date.accessioned: 2020-02-29T06:04:01Z
dc.date.available: 2020-02-29T06:04:01Z
dc.date.issued: 2020 (en_US)
dc.identifier.isbn: 978-3-030-40013-2 (en_US)
dc.identifier.issn: 0302-9743 (en_US)
dc.identifier.other: Scopus
dc.identifier.uri: http://hdl.handle.net/10553/70579
dc.description.abstract: From border controls to personal devices, from online exam proctoring to human-robot interaction, biometric technologies are empowering individuals and organizations with convenient and secure authentication and identification services. However, most biometric systems leverage only a single modality and may face challenges related to acquisition distance, environmental conditions, data quality, and computational resources. Combining evidence from multiple sources at a certain level (e.g., sensor, feature, score, or decision) of the recognition pipeline may mitigate some limitations of common uni-biometric systems. Such fusion has rarely been investigated at the intermediate level, i.e., when uni-biometric model parameters are jointly optimized during training. In this chapter, we propose a multi-biometric model training strategy that digests face and voice traits in parallel, and we explore how it helps to improve recognition performance in re-identification and verification scenarios. To this end, we design a neural architecture for jointly embedding face and voice data, and we experiment with several training losses and audio-visual datasets. The idea is to exploit the relation between voice characteristics and facial morphology, so that face and voice uni-biometric models help each other to recognize people when trained jointly. Extensive experiments on four real-world datasets show that the feature representation of a jointly trained uni-biometric model performs better than that computed by the same uni-biometric model trained alone. Moreover, the recognition results are further improved by embedding face and voice data into a single shared representation of the two modalities. The proposed fusion strategy generalizes well to unseen and unheard users and should be considered a feasible solution for improving model performance. We expect that this chapter will help the biometric community shape research on deep audio-visual fusion in real-world contexts. (en_US)
dc.language: eng (en_US)
dc.publisher: Springer (en_US)
dc.relation: Identificación Automática de Oradores en Sesiones Parlamentarias Usando Características Audiovisuales. (en_US)
dc.relation.ispartof: Lecture Notes in Computer Science (en_US)
dc.source: Pattern Recognition Applications and Methods. ICPRAM 2019. Lecture Notes in Computer Science, v. 11996, p. 136-157 (en_US)
dc.subject: 120304 Artificial intelligence (en_US)
dc.subject.other: Audio-visual learning (en_US)
dc.subject.other: Cross-modal biometrics (en_US)
dc.subject.other: Deep biometric fusion (en_US)
dc.subject.other: Multi-biometric system (en_US)
dc.subject.other: Re-identification (en_US)
dc.subject.other: Verification (en_US)
dc.title: Deep multi-biometric fusion for audio-visual user re-identification and verification (en_US)
dc.type: info:eu-repo/semantics/bookPart (en_US)
dc.type: Book part (en_US)
dc.relation.conference: 8th International Conference on Pattern Recognition Applications and Methods, ICPRAM 2019
dc.identifier.doi: 10.1007/978-3-030-40014-9_7 (en_US)
dc.identifier.scopus: 85079549512
dc.contributor.authorscopusid: 9233842500
dc.contributor.authorscopusid: 57191274555
dc.contributor.authorscopusid: 15042453800
dc.contributor.authorscopusid: 57198776493
dc.contributor.authorscopusid: 24469552000
dc.description.lastpage: 157 (en_US)
dc.description.firstpage: 136 (en_US)
dc.relation.volume: 11996 (en_US)
dc.investigacion: Engineering and Architecture (en_US)
dc.type2: Book chapter (en_US)
dc.identifier.eisbn: 978-3-030-40014-9
dc.utils.revision: (en_US)
dc.identifier.supplement: 0302-9743
dc.identifier.conferenceid: events121650
dc.identifier.ulpgc: (en_US)
dc.contributor.buulpgc: BU-INF (en_US)
dc.description.sjr: 0.249
dc.description.sjrq: Q3
dc.description.spiq: Q1
item.grantfulltext: none
item.fulltext: No full text
crisitem.project.principalinvestigator: Castrillón Santana, Modesto Fernando
crisitem.author.dept: GIR SIANI: Inteligencia Artificial, Robótica y Oceanografía Computacional
crisitem.author.dept: IU Sistemas Inteligentes y Aplicaciones Numéricas
crisitem.author.dept: Departamento de Informática y Sistemas
crisitem.author.dept: GIR SIANI: Inteligencia Artificial, Robótica y Oceanografía Computacional
crisitem.author.dept: IU Sistemas Inteligentes y Aplicaciones Numéricas
crisitem.author.dept: Departamento de Informática y Sistemas
crisitem.author.orcid: 0000-0002-2834-2067
crisitem.author.orcid: 0000-0002-8673-2725
crisitem.author.parentorg: IU Sistemas Inteligentes y Aplicaciones Numéricas
crisitem.author.parentorg: IU Sistemas Inteligentes y Aplicaciones Numéricas
crisitem.author.fullName: Marín Reyes, Pedro Antonio
crisitem.author.fullName: Lorenzo Navarro, José Javier
crisitem.author.fullName: Castrillón Santana, Modesto Fernando
crisitem.event.eventsstartdate: 19-02-2019
crisitem.event.eventsenddate: 21-02-2019
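
The abstract above outlines the chapter's core idea: face and voice encoders trained jointly, with both modalities fused into a single shared embedding used for re-identification and verification. The following PyTorch sketch is purely illustrative of that idea under assumed input dimensions and a simple concatenation-based fusion layer; it is not the authors' actual architecture, training losses, or datasets (see DOI 10.1007/978-3-030-40014-9_7 for those).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointAudioVisualEmbedder(nn.Module):
        """Hypothetical joint face+voice embedder; all dimensions are assumptions."""

        def __init__(self, face_dim=512, voice_dim=512, embed_dim=256):
            super().__init__()
            # Stand-ins for the uni-biometric face and voice feature extractors.
            self.face_encoder = nn.Sequential(nn.Linear(face_dim, embed_dim), nn.ReLU())
            self.voice_encoder = nn.Sequential(nn.Linear(voice_dim, embed_dim), nn.ReLU())
            # Fusion layer mapping the concatenated modalities into one shared space.
            self.fusion = nn.Linear(2 * embed_dim, embed_dim)

        def forward(self, face_feats, voice_feats):
            f = self.face_encoder(face_feats)    # face branch
            v = self.voice_encoder(voice_feats)  # voice branch
            joint = self.fusion(torch.cat([f, v], dim=-1))
            # L2-normalize so cosine similarity can score verification pairs.
            return F.normalize(joint, dim=-1)

    model = JointAudioVisualEmbedder()
    emb = model(torch.randn(4, 512), torch.randn(4, 512))  # batch of 4 users
    print(emb.shape)  # torch.Size([4, 256])

Training both branches jointly, e.g. with a shared classification or metric-learning loss on the fused embedding, is what lets each modality regularize the other; that is the effect the abstract reports across the four evaluation datasets.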
Collection: Book chapter

SCOPUS™ citations: 9 (updated 14-Apr-2024)
Visits: 131 (updated 01-Oct-2022)

Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.