Mobile Cognitive Indoor Assistive Navigation for Blind Persons

Abstract

Indoor assistive navigation systems play an essential role in enabling independent mobility in unfamiliar environments for the blind and visually impaired (BVI). The area has been researched extensively in recent years alongside the rapid evolution of mobile technologies, from applying robotics simultaneous localization and mapping (SLAM) approaches and deploying infrastructure sensors to integrating GIS indoor map databases. Although this research has produced useful prototypes that help blind people travel independently, cognitive assistance is still at an early stage in the era of deep learning for computer vision. In this paper, we propose a novel cognitive assistive indoor navigation system that provides blind people with cognitive perception of their surroundings during the navigation journey, based on deep learning. First, an indoor semantic map database is built to model the spatial context of the environment using the Google Tango visual positioning service (VPS); then a Tiny YOLO (You Only Look Once) convolutional neural network (CNN) model runs on the Tango Android phone for real-time recognition and tracking of moving people; finally, scene understanding with a CNN and a long short-term memory (LSTM) network is performed on a cloud server. Experiments with blind subjects and blindfolded sighted subjects show that the system effectively guides the user to a destination and provides ambient cognitive perception.
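
To illustrate the first step, a semantic map database can be thought of as a graph of named landmarks with metric positions, over which a route to the destination is planned. The sketch below assumes a flat 2D local frame and hypothetical landmark names and edges; the actual system anchors its map to Tango VPS area descriptions, which are not modeled here.

```python
# Minimal semantic-map sketch: named landmarks + shortest-path routing.
# Names, coordinates, and graph structure are illustrative assumptions.
import heapq
import math

# landmark -> (x, y) position in meters, in an assumed local map frame
landmarks = {"entrance": (0, 0), "elevator": (12, 3),
             "restroom": (20, 3), "room_5501": (25, 10)}
edges = {"entrance": ["elevator"], "elevator": ["entrance", "restroom"],
         "restroom": ["elevator", "room_5501"], "room_5501": ["restroom"]}

def dist(a, b):
    return math.dist(landmarks[a], landmarks[b])

def route(start, goal):
    """Dijkstra over the landmark graph; returns the landmark sequence."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in edges[node]:
            heapq.heappush(queue, (cost + dist(node, nxt), nxt, path + [nxt]))
    return None

print(route("entrance", "room_5501"))
# ['entrance', 'elevator', 'restroom', 'room_5501']
```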
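For the second step, the following is a minimal sketch of Tiny YOLO person detection, assuming an OpenCV DNN runtime and standard Darknet yolov2-tiny.cfg/.weights files with a 416x416 input; these are assumptions for illustration, not the paper's on-phone runtime.

```python
# Sketch: person detection with Tiny YOLO via OpenCV's DNN module.
import cv2
import numpy as np

# Config/weight file names are assumptions (standard Darknet releases).
net = cv2.dnn.readNetFromDarknet("yolov2-tiny.cfg", "yolov2-tiny.weights")

def detect_people(frame, conf_threshold=0.5):
    """Return [x, y, w, h] boxes for the COCO 'person' class (id 0)."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    boxes = []
    for out in outputs:
        for det in out:  # det = [cx, cy, bw, bh, objectness, class scores...]
            scores = det[5:]
            if np.argmax(scores) == 0 and det[4] * scores[0] > conf_threshold:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)])
    return boxes
```

Running such a detector per frame, the resulting boxes can be fed to a tracker so that a moving person is followed across frames rather than re-announced on every detection.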
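For the third step, a CNN+LSTM scene captioner in the "encode image, decode words" style can be sketched as below. The ResNet-18 backbone, embedding sizes, and vocabulary size are illustrative choices, not the paper's exact cloud-side architecture.

```python
# Sketch: CNN encoder + LSTM decoder for image captioning (show-and-tell style).
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionNet(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)              # image encoder backbone
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])  # drop final fc
        self.img_proj = nn.Linear(512, embed_dim)        # CNN feature -> embedding
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)      # hidden state -> word logits

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)          # (B, 512)
        feats = self.img_proj(feats).unsqueeze(1)        # (B, 1, E)
        words = self.embed(captions)                     # (B, T, E)
        seq = torch.cat([feats, words], dim=1)           # image as first token
        out, _ = self.lstm(seq)
        return self.fc(out)                              # (B, T+1, vocab) logits

model = CaptionNet(vocab_size=1000)
imgs = torch.randn(2, 3, 224, 224)                       # dummy frames
caps = torch.randint(0, 1000, (2, 12))                   # dummy word ids
logits = model(imgs, caps)                               # (2, 13, 1000)
```

Feeding the image feature as the first token of the LSTM sequence means every generated word is conditioned on the visual context, which is the standard design that lets a single decoder describe arbitrary scenes.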

Publication
The 33rd CSUN Assistive Technology Conference