Please use this identifier to cite or link to this item: http://hdl.handle.net/10553/53647
DC Field | Value | Language
dc.contributor.author | Schmidt, Joachim | en_US
dc.contributor.author | Castrillon-Santana, Modesto | en_US
dc.contributor.other | Castrillon-Santana, Modesto | -
dc.date.accessioned | 2019-02-04T17:34:52Z | -
dc.date.available | 2019-02-04T17:34:52Z | -
dc.date.issued | 2008 | en_US
dc.identifier.isbn | 978-989-8111-21-0 | en_US
dc.identifier.uri | http://hdl.handle.net/10553/53647 | -
dc.description.abstract | Social robots require the ability to communicate and to recognize the intention of a human interaction partner. Humans commonly use gestures for interactive purposes, so gesture recognition is a necessary skill for a social robot. As a common intermediate step, the pose of an individual is tracked over time using a body model. Acquiring a suitable body model, i.e. self-starting the tracker, is however a complex and challenging task. This paper presents an approach that facilitates acquisition of the body model during interaction. A robust face detection algorithm enables automatic, markerless acquisition of a 3D body model from a monocular color camera. For the given human-robot interaction scenario, a prototype has been developed for a single-user configuration; it provides automatic initialization and failure recovery of a 3D body tracker based on head and hand detection information, delivering promising results. (An illustrative code sketch follows this table.) | en_US
dc.language | eng | en_US
dc.relation | Técnicas Para El Robustecimiento de Procesos en Visión Artificial Para la Interacción | en_US
dc.source | VISAPP 2008: Proceedings of the Third International Conference on Computer Vision Theory and Applications, Vol. 2, p. 535-542 | en_US
dc.subject | 120304 Inteligencia artificial | en_US
dc.subject.other | Human-robot interaction | en_US
dc.subject.other | Face detection | en_US
dc.subject.other | Model acquisition | en_US
dc.subject.other | Automatic initialization | en_US
dc.subject.other | Human body tracking | en_US
dc.title | Automatic initialization for body tracking: using appearance to learn a model for tracking human upper body motions | en_US
dc.type | info:eu-repo/semantics/conferenceObject | en_US
dc.type | ConferenceObject | en_US
dc.relation.conference | Proceedings of the Third International Conference on Computer Vision Theory and Applications (VISAPP 2008) | en_US
dc.identifier.doi | 10.5220/0001071005350542 | en_US
dc.identifier.scopus | 57549099722 | -
dc.identifier.isi | 000256791600084 | -
dcterms.isPartOf | VISAPP 2008: Proceedings of the Third International Conference on Computer Vision Theory and Applications, Vol. 2 | -
dcterms.source | VISAPP 2008: Proceedings of the Third International Conference on Computer Vision Theory and Applications, Vol. 2, p. 535-542 | -
dc.contributor.authorscopusid | 57198415382 | -
dc.contributor.authorscopusid | 22333278500 | -
dc.description.lastpage | 542 | en_US
dc.description.firstpage | 535 | en_US
dc.relation.volume | 2 | en_US
dc.investigacion | Ingeniería y Arquitectura | en_US
dc.type2 | Actas de congresos | en_US
dc.identifier.wos | WOS:000256791600084 | -
dc.contributor.daisngid | 6661505 | -
dc.contributor.daisngid | 1060138 | -
dc.identifier.investigatorRID | K-9040-2014 | -
dc.identifier.ulpgc | | es
dc.contributor.buulpgc | BU-INF | en_US
item.grantfulltext | none | -
item.fulltext | Sin texto completo | -
crisitem.event.eventsstartdate | 22-01-2008 | -
crisitem.event.eventsenddate | 25-01-2008 | -
crisitem.author.dept | GIR SIANI: Inteligencia Artificial, Robótica y Oceanografía Computacional | -
crisitem.author.dept | IU Sistemas Inteligentes y Aplicaciones Numéricas | -
crisitem.author.dept | Departamento de Informática y Sistemas | -
crisitem.author.orcid | 0000-0002-8673-2725 | -
crisitem.author.parentorg | IU Sistemas Inteligentes y Aplicaciones Numéricas | -
crisitem.author.fullName | Castrillón Santana, Modesto Fernando | -
crisitem.project.principalinvestigator | Lorenzo Navarro, José Javier | -
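The abstract above describes a procedural idea: a robust face detector both self-starts a 3D upper-body tracker from a monocular color camera and re-acquires the body model after tracking failure. The sketch below (Python with OpenCV) is a minimal, hypothetical illustration of that initialization-and-recovery loop, not the authors' system: the Haar-cascade detector, the anthropometric proportions, and the helper names `init_body_model` and `track` are all assumptions made here for illustration.

```python
import cv2

# Stock OpenCV frontal-face Haar cascade; the paper's actual detector is
# not specified in this record, so this is a stand-in assumption.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def init_body_model(frame):
    """Derive a rough upper-body model from the largest detected face.

    Returns None when no face is found, so the caller simply retries on
    the next frame (the "self-starting" behaviour the abstract describes).
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face box
    # Illustrative anthropometric guesses (not from the paper): the torso
    # starts one head-height below the face and spans about three head
    # widths; the head height anchors later limb-size estimates.
    return {
        "head": (x, y, w, h),
        "torso": (x - w, y + h, 3 * w, 3 * h),
        "scale": h,
    }

def track(capture):
    """Automatic initialization and failure recovery around a pose tracker."""
    model = None
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if model is None:
            model = init_body_model(frame)  # automatic (re-)initialization
            continue
        # A real pose tracker would update `model` here; on divergence it
        # would set model = None so the face detector re-acquires it
        # (the failure-recovery behaviour the abstract mentions).

# Usage (assumption: default webcam): track(cv2.VideoCapture(0))
```

The design point mirrors the abstract: because face detection is comparatively reliable, it can serve both as the bootstrap for a markerless 3D body model and as the fallback whenever the tracker loses the user.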
Appears in Collections:Actas de congresos

SCOPUS™ Citations: 1 (checked on Nov 24, 2024)
Page view(s): 62 (checked on Jun 15, 2024)

Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.