Please use this identifier to cite or link to this item: http://hdl.handle.net/10553/54978
Title: Ego-motion classification for body-worn videos
Authors: Meng, Zhaoyi; Morel, Jean-Michel; Bertozzi, Andrea L.; Brantingham, P. Jeffrey
UNESCO Classification: 220990 Digital processing. Images
Issue Date: 2018
Publisher: Springer, Cham
Journal: Mathematics and Visualization
Abstract: Portable cameras record dynamic first-person video footage, and these videos contain information on the motion of the individual on whom the camera is mounted, defined as ego. We address the task of discovering ego-motion from the video itself, without other external calibration information. We investigate the use of similarity transformations between successive video frames to extract signals reflecting ego-motions and their frequencies. We use novel graph-based unsupervised and semi-supervised learning algorithms to segment the video frames into different ego-motion categories. Our method gives very accurate results on both choreographed test videos and ego-motion videos provided by the Los Angeles Police Department.
URI: http://hdl.handle.net/10553/54978
ISSN: 1612-3786
DOI: 10.1007/978-3-319-91274-5_10
Source: Tai X.-C., Bae E., Lysaker M. (eds.), Imaging, Vision and Learning Based on Optimization and PDEs. IVLOPDE 2016. Mathematics and Visualization. Springer, Cham.
Appears in Collections: Actas de Congresos
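The core step described in the abstract is estimating a similarity transformation between successive frames and reading ego-motion signals off its parameters. Below is a minimal sketch of that general idea using OpenCV (cv2.goodFeaturesToTrack, cv2.calcOpticalFlowPyrLK, cv2.estimateAffinePartial2D); it is an illustration under these assumptions, not the authors' implementation, and the function similarity_signals is hypothetical.

```python
# Sketch: per-frame-pair similarity-transform parameters (rotation, scale,
# translation) from a body-worn video. Illustrative only, not the paper's code.
import cv2
import numpy as np

def similarity_signals(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return []
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    signals = []  # one (angle, log_scale, tx, ty) tuple per frame pair
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Track corner features from the previous frame into the current one.
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=8)
        if pts is not None:
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good = status.ravel() == 1
            if good.sum() >= 3:
                # Fit a 4-DOF similarity transform (rotation + uniform scale +
                # translation) to the tracked point correspondences.
                M, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good],
                                                   method=cv2.RANSAC)
                if M is not None:
                    a, _, tx = M[0]
                    c, _, ty = M[1]
                    angle = np.arctan2(c, a)   # inter-frame rotation
                    scale = np.hypot(a, c)     # uniform scale (forward motion / zoom)
                    signals.append((angle, np.log(scale), tx, ty))
        prev_gray = gray
    cap.release()
    return signals
```

The resulting per-frame signals (rotation, log-scale, translation) form time series whose frequency content could then be fed to a clustering or semi-supervised segmentation step, which is the role the graph-based learning algorithms play in the paper.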