Please use this identifier to cite or link to this item: http://hdl.handle.net/10553/124220
DC Field | Value | Language
dc.contributor.author | Gómez-Cárdenes, Óscar | en_US
dc.contributor.author | Marichal-Hernández, José Gil | en_US
dc.contributor.author | Son, Jung Young | en_US
dc.contributor.author | Pérez Jiménez, Rafael | en_US
dc.contributor.author | Rodríguez-Ramos, José Manuel | en_US
dc.date.accessioned | 2023-08-31T10:33:37Z | -
dc.date.available | 2023-08-31T10:33:37Z | -
dc.date.issued | 2023 | en_US
dc.identifier.other | Scopus | -
dc.identifier.uri | http://hdl.handle.net/10553/124220 | -
dc.description.abstract | In this work, two methods are proposed for solving the problem of one-dimensional barcode segmentation in images, with an emphasis on augmented reality (AR) applications. These methods take the partial discrete Radon transform as a building block. The first proposed method uses overlapping tiles to obtain good angle precision while maintaining good spatial precision. The second uses an encoder-decoder structure inspired by state-of-the-art convolutional neural networks for segmentation while maintaining a classical processing framework, thus requiring no training. It is shown that the second method's processing time is lower than the video acquisition time with a 1024 × 1024 input on a CPU, which had not been previously achieved. Its accuracy on datasets widely used by the scientific community was almost on par with that of the most recent state-of-the-art deep-learning methods. Beyond the challenges of those datasets, the proposed method is particularly well suited to image sequences taken with short exposure and exhibiting motion blur and lens blur, as expected in a real-world AR scenario. Two implementations of the proposed methods are made available to the scientific community: one for easy prototyping and one optimised for parallel execution, which can be run on desktop and mobile phone CPUs. | en_US
dc.language | eng | en_US
dc.relation.ispartof | Sensors (Basel, Switzerland) | en_US
dc.source | Sensors (Basel, Switzerland) [EISSN 1424-8220], v. 23 (13), (July 2023) | en_US
dc.subject | 3325 Tecnología de las telecomunicaciones | en_US
dc.subject.other | Barcodes | en_US
dc.subject.other | Classical Signal Processing | en_US
dc.subject.other | Encoder–Decoder | en_US
dc.subject.other | Multiscale DRT | en_US
dc.subject.other | Pixelwise Segmentation | en_US
dc.subject.other | Radon Transform | en_US
dc.subject.other | Scale-Space Methods | en_US
dc.title | An Encoder-Decoder Architecture within a Classical Signal-Processing Framework for Real-Time Barcode Segmentation | en_US
dc.type | info:eu-repo/semantics/Article | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.3390/s23136109 | en_US
dc.identifier.scopus | 85164844713 | -
dc.contributor.orcid | 0000-0002-7951-982X | -
dc.contributor.orcid | 0000-0003-2297-8483 | -
dc.contributor.orcid | 0000-0001-6099-0577 | -
dc.contributor.orcid | 0000-0002-8849-592X | -
dc.contributor.orcid | NO DATA | -
dc.contributor.authorscopusid | 56205585100 | -
dc.contributor.authorscopusid | 8252032600 | -
dc.contributor.authorscopusid | 58488327400 | -
dc.contributor.authorscopusid | 56044417600 | -
dc.contributor.authorscopusid | 8252032700 | -
dc.identifier.eissn | 1424-8220 | -
dc.identifier.issue | 13 | -
dc.relation.volume | 23 | en_US
dc.investigacion | Ingeniería y Arquitectura | en_US
dc.type2 | Artículo | en_US
dc.utils.revision | | en_US
dc.date.coverdate | July 2023 | en_US
dc.identifier.ulpgc | | en_US
dc.contributor.buulpgc | BU-TEL | en_US
item.grantfulltext | open | -
item.fulltext | Con texto completo | -
crisitem.author.dept | GIR IDeTIC: División de Fotónica y Comunicaciones | -
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.dept | Departamento de Señales y Comunicaciones | -
crisitem.author.orcid | 0000-0002-8849-592X | -
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.fullName | Pérez Jiménez, Rafael | -
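The abstract names the partial discrete Radon transform (DRT) as the building block for finding barcode orientation. As a rough illustration of why that works, here is a naive, educational sketch (not the paper's optimised multiscale implementation): sums along digital lines of a given slope vary most when the slope is aligned with the bars. The function name, the wrapping-offset convention, and the synthetic stripe image are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def naive_partial_drt(img, slopes):
    """Naive partial discrete Radon transform sketch (illustrative only):
    for each slope s, sum pixel values along the digital lines
    y = (offset + round(s * x)) mod height, one sum per offset."""
    h, w = img.shape
    xs = np.arange(w)
    out = np.empty((len(slopes), h))
    for i, s in enumerate(slopes):
        shift = np.round(s * xs).astype(int)
        for off in range(h):
            ys = (off + shift) % h  # wrap rows to keep the sketch simple
            out[i, off] = img[ys, xs].sum()
    return out

# A synthetic 16x16 "barcode": projections aligned with the stripes keep
# full contrast (high variance across offsets); oblique ones wash it out.
img = np.tile(np.array([0.0, 1.0] * 8), (16, 1))  # alternating columns
drt = naive_partial_drt(img.T, slopes=[0.0, 0.5, 1.0])
variances = drt.var(axis=1)  # largest at the slope aligned with the bars
```

Picking the slope with the largest projection variance recovers the bar direction; the paper's methods build on the same principle, but with a fast multiscale DRT and per-tile (or encoder-decoder) spatial localisation.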
Appears in Collections: Artículos
Adobe PDF (7,65 MB)
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.