VASCO: AI-based visual assistance system for computer-assisted orientation and action for people with visual impairments

Problem Formulation

A heterogeneous and dynamic environment often poses a great challenge for people with visual impairments. Although they use their remaining senses and learn basic techniques for orientation and mobility, many scenarios arise in everyday life in which they depend on external support. The concrete goal of the project is the development of a wearable assistance system that is not tailored to one particular application or situation, but instead covers different scenarios across various areas of everyday life. The assistance system comprises input devices (cameras and a microphone), a computing unit, and output devices (headphones and tactile devices) that relay information to the wearer. In this research project, the aim is to develop scene-understanding and decision-making algorithms, based on artificial intelligence (AI) and machine learning (ML), for wearable devices that can assist visually impaired and blind people. These algorithms support people with visual impairments in perceiving their surroundings and in performing targeted actions for orientation, movement, and interaction. The AI algorithms shall not only capture the dynamic environment but also observe the wearer's behavior and analyze it in relation to the current situation. Compared to the state of the art, the algorithms should be not only fast but also energy efficient, in order to keep the weight of the device down and thus not impair the wearer's mobility.
 

Solution Approach

The application software includes image processing, environment modeling, decision making, and feedback generation. These software modules are based on machine learning and artificial intelligence algorithms. They are essential for ensuring the functionality of the assistance system by supporting the user in performing targeted actions in various domains, such as indoor navigation, street mobility, and grasping and interacting with objects. To this end, the AI algorithms capture the environment, observe and analyze the user's behavior, and relate it to the current situation. The assistance system thus not only has a warning function but is also actively involved in the user's actions and behavior. Fast information supply and low-latency integration of the different components are therefore essential in order to react quickly to environmental changes and provide the wearer with feedback without delay, so that they can coordinate their actions with the dynamic environment in the best possible way. Developing low-latency, self-learning AI algorithms is a core aspect of the research project. Compared to the state of the art, these algorithms should be significantly improved in speed and energy efficiency in order to keep the weight of the wearable assistance system low and thus not impair the wearer's mobility.
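
To make the module structure concrete, the following is a minimal Python sketch of such a perception-decision-feedback loop. All names (Observation, EnvironmentModel, perceive, decide, generate_feedback) are hypothetical placeholders rather than the project's actual implementation; the stubs only illustrate the data flow between the modules and the per-cycle latency measurement that a low-latency design would have to keep within a fixed budget.

    import time
    from dataclasses import dataclass, field


    @dataclass
    class Observation:
        """One synchronized sensor frame (camera image, audio buffer, timestamp)."""
        timestamp: float
        image: object = None    # camera frame; stubbed here
        audio: object = None    # microphone buffer; stubbed here


    @dataclass
    class EnvironmentModel:
        """Aggregated scene state: tracked objects and detected free space."""
        objects: list = field(default_factory=list)
        free_space: list = field(default_factory=list)


    def perceive(obs: Observation) -> EnvironmentModel:
        """Image processing / environment modeling: detect and track
        objects and estimate free space (stub)."""
        return EnvironmentModel(objects=["door", "person"], free_space=["corridor"])


    def decide(model: EnvironmentModel) -> str:
        """Decision making: choose a high-level action from the model (stub)."""
        return "walk_forward" if model.free_space else "stop"


    def generate_feedback(action: str) -> dict:
        """Feedback generation: map the action to output-device commands (stub)."""
        return {"audio": f"action: {action}", "tactile_pulse": action != "stop"}


    def run_cycle() -> float:
        """One pass through the pipeline; returns end-to-end latency in ms."""
        t0 = time.perf_counter()
        obs = Observation(timestamp=t0)
        model = perceive(obs)
        action = decide(model)
        generate_feedback(action)
        return (time.perf_counter() - t0) * 1e3


    if __name__ == "__main__":
        # The low-latency requirement means every cycle must stay in budget.
        print(f"pipeline latency: {run_cycle():.3f} ms")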
 

Project Goals

  • Automated real-time capture of the dynamic environment, with the goal of detecting and tracking static and dynamic objects, 3D free-space detection, and the creation of an environment model
  • User monitoring, recognition of hand movements and gestures, and prediction of the user's behavior
  • Discrete decision making as a high-level layer of a hierarchical approach, which forms the basis for generating feedback information to the user
  • Path and motion planning as a lower level of the hierarchical approach for generating concrete motion and behavior patterns for the user based on the decisions made by the higher level
  • Conversion of the discrete and continuous decision variables resulting from the two levels into control signals for the actuators to generate feedback information for the user
  • In order to deal with the uncertainties and errors that are usually inherent in human-machine interaction, adaptive control methods will be used to generate the control signals for the actuators (a simplified sketch of the two-level hierarchy and this signal conversion follows this list)
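
As a complement to the goals above, the following is a minimal Python sketch of the two-level hierarchy and the conversion of its decision variables into actuator signals. All names (Decision, high_level_decision, plan_motion, to_actuator_signals, gain) are hypothetical placeholders, and the proportional waypoint-to-intensity mapping is an assumption for illustration; the project's actual planners and adaptive controllers are not specified here.

    from dataclasses import dataclass


    @dataclass
    class Decision:
        """Discrete high-level decision, e.g. 'walk_forward' or 'stop'."""
        action: str


    def high_level_decision(objects: list) -> Decision:
        """Upper layer: discrete decision making over perceived objects (stub)."""
        return Decision("stop" if "obstacle" in objects else "walk_forward")


    def plan_motion(decision: Decision) -> list:
        """Lower layer: expand the discrete decision into a concrete
        2D waypoint path (stub)."""
        if decision.action == "walk_forward":
            return [(0.0, 0.5), (0.0, 1.0)]  # two waypoints straight ahead, in metres
        return [(0.0, 0.0)]                  # hold position


    def to_actuator_signals(path: list, gain: float) -> list:
        """Convert continuous waypoints into tactile intensities in [0, 1].
        An adaptive controller would update `gain` online from the observed
        user reaction; here it is a fixed placeholder."""
        return [min(1.0, gain * (abs(x) + abs(y))) for x, y in path]


    if __name__ == "__main__":
        decision = high_level_decision(objects=["door", "person"])
        path = plan_motion(decision)
        print(to_actuator_signals(path, gain=0.8))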
     

Keywords

  • Assistance system
  • Machine learning
  • Energy-efficient and low-latency AI
  • Decision making
  • Environment perception
     

Funding

Time span

Jan 2023 - Dec 2025
 

Project Partners

  • DC Vision Systems GmbH
  • Walk Engineering GmbH
  • Technische Universität Kaiserslautern
  • Inventivio GmbH
  • Berufsförderungswerk Düren GmbH
     

Contact

Prof. Dr.-Ing. Naim Bajcinca
Gottlieb-Daimler-Str. 42
67663 Kaiserslautern
+49 (0)631/205-3230
naim.bajcinca(at)mv.uni-kl.de