Persistent identifier to cite or link this item:
http://hdl.handle.net/10553/71000
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jaiswar, Lalita | en_US |
dc.contributor.author | Yadav, Anjali | en_US |
dc.contributor.author | Dutta, Malay Kishore | en_US |
dc.contributor.author | Travieso-González, Carlos | en_US |
dc.contributor.author | Esteban-Hernández, Luis | en_US |
dc.date.accessioned | 2020-03-21T06:05:58Z | - |
dc.date.available | 2020-03-21T06:05:58Z | - |
dc.date.issued | 2020 | en_US |
dc.identifier.isbn | 978-1-4503-7630-3 | en_US |
dc.identifier.other | Scopus | - |
dc.identifier.uri | http://hdl.handle.net/10553/71000 | - |
dc.description.abstract | Visually impaired people face several problems in their daily life. One of the biggest is visiting unfamiliar places and identifying public amenities such as pharmacies, restrooms, and pedestrian signs on roads. Although some conventional methods are available to aid visually impaired people, they are inefficient to use without assistance. The proposed method presents a framework that helps visually impaired people identify common public amenities when visiting unfamiliar places. The method uses deep learning to recognize frequently used places. For this purpose, the VGG16 model is used to extract features from the images and train a sequential model. The model has been tested on varied images of the different classes present in the database. The developed algorithm achieves an accuracy of 95.88%. The obtained results show that the developed model is an efficient method for assisting visually impaired people in real-time applications. (A minimal, illustrative transfer-learning sketch follows the metadata table below.) | en_US |
dc.language | eng | en_US |
dc.publisher | Association for Computing Machinery | en_US |
dc.source | APPIS 2020: Proceedings of the 3rd International Conference on Applications of Intelligent Systems. January 2020, article n. 19, p. 1–6 | en_US |
dc.subject | 33 Ciencias tecnológicas | en_US |
dc.subject.other | Features Extraction | en_US |
dc.subject.other | Object Recognition | en_US |
dc.subject.other | Pedestrians Signs | en_US |
dc.subject.other | Public Places | en_US |
dc.subject.other | Transfer Learning | en_US |
dc.subject.other | Vgg16 | en_US |
dc.subject.other | Visually Impaired | en_US |
dc.title | Transfer Learning based Computer Vision Technology for Assisting Visually Impaired for detection of Common Places | en_US |
dc.type | info:eu-repo/semantics/conferenceObject | en_US |
dc.type | ConferenceObject | en_US |
dc.relation.conference | International Conference on Applications of Intelligent Systems (APPIS 2020) | en_US |
dc.identifier.doi | 10.1145/3378184.3378215 | en_US |
dc.identifier.scopus | 85081094089 | - |
dc.contributor.authorscopusid | 57215532998 | - |
dc.contributor.authorscopusid | 57195513394 | - |
dc.contributor.authorscopusid | 35291803600 | - |
dc.contributor.authorscopusid | 57201316633 | - |
dc.contributor.authorscopusid | 57215532908 | - |
dc.investigacion | Ingeniería y Arquitectura | en_US |
dc.type2 | Actas de congresos | en_US |
dc.utils.revision | Sí | en_US |
dc.identifier.conferenceid | events121681 | - |
dc.identifier.ulpgc | Sí | es |
dc.contributor.buulpgc | BU-TEL | en_US |
item.grantfulltext | none | - |
item.fulltext | Sin texto completo | - |
crisitem.author.dept | GIR IDeTIC: División de Procesado Digital de Señales | - |
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | - |
crisitem.author.dept | Departamento de Señales y Comunicaciones | - |
crisitem.author.orcid | 0000-0002-4621-2768 | - |
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | - |
crisitem.author.fullName | Travieso González, Carlos Manuel | - |
crisitem.event.eventsstartdate | 07-01-2020 | - |
crisitem.event.eventsenddate | 09-01-2020 | - |
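The abstract describes a transfer-learning pipeline in which VGG16 serves as a feature extractor for a small sequential classifier. The sketch below illustrates that general approach only; it is not the authors' code. The class count, input size, dataset paths, classifier head, and training settings are assumptions chosen for illustration, and a standard Keras/TensorFlow environment is assumed.

```python
# Illustrative transfer-learning sketch (not the paper's implementation):
# VGG16 with ImageNet weights as a frozen feature extractor, followed by a
# small sequential classification head for public-place categories.
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Dropout, Flatten

NUM_CLASSES = 6          # assumed number of public-place categories
IMG_SIZE = (224, 224)    # VGG16's standard input resolution

# Load VGG16 without its ImageNet classification head and freeze it so that
# only the newly added classifier layers are trained (transfer learning).
base = VGG16(weights="imagenet", include_top=False,
             input_shape=(*IMG_SIZE, 3))
base.trainable = False

model = Sequential([
    base,
    Flatten(),
    Dense(256, activation="relu"),
    Dropout(0.5),
    Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical dataset layout: one sub-folder per place category.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "places_dataset/train", image_size=IMG_SIZE, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "places_dataset/val", image_size=IMG_SIZE, label_mode="categorical")

model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Freezing the convolutional base and training only the dense head is the usual way to reuse ImageNet features on a small place-image dataset; whether the paper fine-tunes any VGG16 layers is not stated in this record.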
Collection: | Actas de congresos
SCOPUS™ citations: 4 (updated 15-Dec-2024)
Visits: 93 (updated 11-Feb-2023)
Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.