Please use this identifier to cite or link to this item: http://hdl.handle.net/10553/45487
DC Field | Value | Language
dc.contributor.author | Diaz, Moises | en_US
dc.contributor.author | Fischer, Andreas | en_US
dc.contributor.author | Plamondon, Rejean | en_US
dc.contributor.author | Ferrer, Miguel A. | en_US
dc.date.accessioned | 2018-11-22T10:14:06Z | -
dc.date.available | 2018-11-22T10:14:06Z | -
dc.date.issued | 2015 | en_US
dc.identifier.isbn | 9781479918058 | en_US
dc.identifier.issn | 1520-5363 | en_US
dc.identifier.uri | http://hdl.handle.net/10553/45487 | -
dc.description.abstract | What can be done with only one enrolled real handwritten signature in Automatic Signature Verification (ASV)? Training with 5 or 10 signatures is the most common case when evaluating ASV. For the scarcely addressed case in which only one signature is available for training, we propose to use modified duplicates. Our novel technique relies on a fully neuromuscular representation of the signatures based on the Kinematic Theory of rapid human movements and its Sigma-Lognormal model. In this way, a real on-line signature is converted into the Sigma-Lognormal model domain. The model parameters are then varied to generate new duplicated signatures. | en_US
dc.language | eng | en_US
dc.relation.ispartof | Proceedings of the International Conference on Document Analysis and Recognition, ICDAR | en_US
dc.source | Proceedings of the International Conference on Document Analysis and Recognition, ICDAR [ISSN 1520-5363], v. 2015-November (7333838), p. 631-635 | en_US
dc.subject | 3307 Tecnología electrónica | en_US
dc.subject.other | Hidden Markov models | en_US
dc.subject.other | Handwriting recognition | en_US
dc.subject.other | Integrated circuit modeling | en_US
dc.subject.other | Atmospheric modeling | en_US
dc.subject.other | Protocols | en_US
dc.title | Towards an automatic on-line signature verifier using only one reference per signer | en_US
dc.type | info:eu-repo/semantics/conferenceObject | en_US
dc.type | ConferenceObject | en_US
dc.relation.conference | 13th IAPR International Conference on Document Analysis and Recognition (ICDAR) | -
dc.relation.conference | 13th International Conference on Document Analysis and Recognition, ICDAR 2015 | -
dc.identifier.doi | 10.1109/ICDAR.2015.7333838 | -
dc.identifier.scopus | 84962537735 | -
dc.identifier.isi | 000381461400125 | -
dc.contributor.authorscopusid | 36760594500 | -
dc.contributor.authorscopusid | 57192656275 | -
dc.contributor.authorscopusid | 7004878474 | -
dc.contributor.authorscopusid | 55636321172 | -
dc.description.lastpage | 635 | -
dc.identifier.issue | 7333838 | -
dc.description.firstpage | 631 | -
dc.relation.volume | 2015-November | -
dc.investigacion | Ingeniería y Arquitectura | en_US
dc.type2 | Actas de congresos | en_US
dc.contributor.daisngid | 29956019 | -
dc.contributor.daisngid | 2554250 | -
dc.contributor.daisngid | 196367 | -
dc.contributor.daisngid | 233119 | -
dc.utils.revision | | en_US
dc.contributor.wosstandard | WOS:Diaz, M | -
dc.contributor.wosstandard | WOS:Fischer, A | -
dc.contributor.wosstandard | WOS:Plamondon, R | -
dc.contributor.wosstandard | WOS:Ferrer, MA | -
dc.date.coverdate | November 2015 | -
dc.identifier.conferenceid | events120988 | -
dc.identifier.ulpgc | | es
item.grantfulltext | none | -
item.fulltext | Sin texto completo | -
crisitem.author.dept | GIR IDeTIC: División de Procesado Digital de Señales | -
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.dept | Departamento de Física | -
crisitem.author.dept | GIR IDeTIC: División de Procesado Digital de Señales | -
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.dept | Departamento de Señales y Comunicaciones | -
crisitem.author.orcid | 0000-0003-3878-3867 | -
crisitem.author.orcid | 0000-0002-2924-1225 | -
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.fullName | Díaz Cabrera, Moisés | -
crisitem.author.fullName | Ferrer Ballester, Miguel Ángel | -
crisitem.event.eventsstartdate | 23-08-2015 | -
crisitem.event.eventsstartdate | 23-08-2015 | -
crisitem.event.eventsenddate | 26-08-2015 | -
crisitem.event.eventsenddate | 26-08-2015 | -
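
The abstract above describes converting a single on-line signature into the Sigma-Lognormal domain of the Kinematic Theory and then varying the extracted parameters to synthesize duplicates. The following Python sketch only illustrates that general idea: the lognormal speed and angle equations follow the standard Sigma-Lognormal formulation, but the function names, the stroke tuple layout, and the 5% jitter level are assumptions made for this sketch, not the authors' actual implementation.

# Minimal sketch: a signature as a sum of Sigma-Lognormal strokes,
# duplicated by slightly perturbing the stroke parameters.
# All names and noise levels are illustrative assumptions.
import numpy as np
from scipy.special import erf

def stroke_speed(t, D, t0, mu, sigma):
    """Lognormal speed profile |v_i(t)| of one stroke (zero for t <= t0)."""
    s = np.zeros_like(t)
    m = t > t0
    x = t[m] - t0
    s[m] = D / (sigma * np.sqrt(2 * np.pi) * x) * np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2))
    return s

def stroke_angle(t, t0, mu, sigma, theta_s, theta_e):
    """Direction phi_i(t): moves from theta_s to theta_e following the lognormal CDF."""
    phi = np.full_like(t, theta_s)
    m = t > t0
    cdf = 0.5 * (1 + erf((np.log(t[m] - t0) - mu) / (sigma * np.sqrt(2))))
    phi[m] = theta_s + (theta_e - theta_s) * cdf
    return phi

def reconstruct(t, strokes):
    """Velocity (vx, vy) of the signature as the vector sum of its strokes."""
    vx, vy = np.zeros_like(t), np.zeros_like(t)
    for D, t0, mu, sigma, th_s, th_e in strokes:
        sp = stroke_speed(t, D, t0, mu, sigma)
        ph = stroke_angle(t, t0, mu, sigma, th_s, th_e)
        vx += sp * np.cos(ph)
        vy += sp * np.sin(ph)
    return vx, vy

def duplicate(strokes, rng, rel_noise=0.05):
    """One duplicate via multiplicative jitter on every stroke parameter
    (the 5% level is an assumption for this sketch)."""
    return [tuple(p * (1 + rng.normal(0, rel_noise)) for p in s) for s in strokes]

# Usage: one extracted signature (two made-up strokes) expanded into ten duplicates.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.5, 300)
signature = [(5.0, 0.05, -1.6, 0.25, 0.1, 1.2),
             (3.0, 0.35, -1.4, 0.30, 1.2, 0.4)]
duplicates = [duplicate(signature, rng) for _ in range(10)]
vx, vy = reconstruct(t, duplicates[0])

Each stroke is a tuple (D, t0, mu, sigma, theta_s, theta_e); integrating the reconstructed velocity would give the duplicated trajectory, which could then be fed to a verifier as additional training material.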
Appears in Collections: Actas de congresos
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.