A Monocular Vision-based Obstacle Avoidance Android/Linux Middleware for the Visually Impaired

Document Type

Conference Proceeding

Publication Date



Increasing maturity in Fog/Edge computing has enabled many latency-sensitive Internet of Things (IoT) applications to achieve better performance and shorter response times. Our work on the URMILA middleware [1] proposed a performance- and mobility-aware Fog/Edge resource management solution to support cognitive assistance for the visually impaired. However, URMILA was evaluated only in lab-based emulated scenarios. We overcome this limitation by presenting an affordable, unobtrusive, and simple-to-use solution for the visually impaired. Alongside the long cane or the guide dog, our application aims to provide the visually impaired with a more detailed description of their environment. Using a single camera on the Sony SmartEyeglass SED-E1 as the only sensor and an Android/Linux application, we perform both per-pixel depth prediction and object detection on each image frame. By combining the information from these two sources, we provide users with descriptive audio feedback that assists them in avoiding obstacles and thus improves their situational awareness. URMILA is used as before to manage the fog and edge resources in the system. We demonstrate the effectiveness of obstacle detection and recognition in both an outdoor and an indoor scenario.
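The fusion step described above (combining a per-pixel depth map with per-frame object detections to produce spoken guidance) could look roughly like the following sketch. This is a hypothetical illustration, not the published URMILA implementation: the box format, the 2-metre warning threshold, and the left/ahead/right partition are all assumptions.

```python
import numpy as np

def describe_obstacles(depth_map, detections, near_threshold=2.0):
    """Combine a per-pixel depth map (metres) with object detections
    given as (label, x0, y0, x1, y1) boxes into a warning string
    suitable for text-to-speech output.

    Hypothetical sketch: thresholds and box format are assumptions,
    not taken from the URMILA paper.
    """
    h, w = depth_map.shape
    warnings = []
    for label, x0, y0, x1, y1 in detections:
        # Median depth inside the detection box approximates the
        # object's distance while being robust to depth noise.
        dist = float(np.median(depth_map[y0:y1, x0:x1]))
        if dist > near_threshold:
            continue  # far enough away; no warning needed
        # Coarse horizontal position from the box centre.
        cx = (x0 + x1) / 2
        side = "left" if cx < w / 3 else "right" if cx > 2 * w / 3 else "ahead"
        warnings.append(f"{label} {dist:.1f} metres {side}")
    return "; ".join(warnings) if warnings else "path clear"

# Toy example: a 4x6 depth map with a chair close by on the left.
depth = np.full((4, 6), 5.0)
depth[1:3, 0:2] = 1.2
print(describe_obstacles(depth, [("chair", 0, 1, 2, 3)]))
```

In a deployed system, the returned string would be handed to a speech synthesiser (e.g. Android's `TextToSpeech` API) and the depth map and detections would come from the per-frame inference the abstract describes.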

Publication Title

Middleware Demos and Posters 2019 - Proceedings of the 2019 20th International Middleware Conference Demos and Posters, Part of Middleware 2019

First Page Number


Last Page Number



