(ECNS) — A Chinese research team has developed a wearable artificial intelligence (AI)-assisted system designed to help blind and visually impaired individuals navigate their surroundings.
The system integrates visual, auditory, and tactile components and uses AI algorithms to perceive the surrounding environment. It can signal the wearer when obstacles or objects are nearby, assisting with movement, object grasping, and other vision-related tasks.
The research, a significant application of AI in biomedical engineering, was led by Associate Professor Gu Leilei of Shanghai Jiao Tong University in collaboration with Fudan University, the Hong Kong University of Science and Technology, and East China Normal University. The findings were published on Monday in Nature Machine Intelligence (online), a journal under the Springer Nature group.
Gu, the corresponding author, noted that wearable electronic vision assistance systems offer a promising alternative to medical treatments and prosthetic implants for the blind and those with severe visual impairments. These systems convert visual information into signals interpretable by other senses, including hearing and touch. However, many existing systems are bulky and have not gained wide acceptance among users.
To address this issue, the team developed a user-centered system that combines innovative hardware and AI algorithms to reduce user burden and improve usability. The system analyzes video footage captured by a built-in camera and translates the results into stereo audio cues that guide the user step by step toward a goal. The team also created a stretchable artificial skin, worn on the wrist, that conveys vibration signals to the user, helping them detect and avoid both static and moving obstacles on either side.
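The article does not describe the system's internal logic, but the idea of steering a user with stereo audio and side-specific wrist vibrations can be sketched in a few lines. The function below is a purely hypothetical illustration: the pixel coordinates, frame width, gain formula, and 0.3 "safe corridor" threshold are assumptions for the sketch, not parameters of the published system.

```python
def guidance_signals(obstacle_x, frame_width=640):
    """Map an obstacle centred at pixel column obstacle_x to
    (left_gain, right_gain, vibrate_left, vibrate_right)."""
    # Normalise the obstacle position to [-1, 1]: -1 = far left, +1 = far right.
    offset = (2.0 * obstacle_x / frame_width) - 1.0
    # Pan the audio cue toward the obstacle so the user hears its direction.
    left_gain = max(0.0, min(1.0, (1.0 - offset) / 2.0))
    right_gain = max(0.0, min(1.0, (1.0 + offset) / 2.0))
    # Vibrate the wristband on the obstacle's side once it leaves a
    # central "safe" corridor (|offset| > 0.3 is an arbitrary threshold).
    vibrate_left = offset < -0.3
    vibrate_right = offset > 0.3
    return left_gain, right_gain, vibrate_left, vibrate_right

# An obstacle near the right edge of the frame: audio pans right,
# and only the right-hand vibration channel fires.
print(guidance_signals(608))
```

A real pipeline would of course derive the obstacle position from a detection model running on the camera feed; the sketch only shows how one directional estimate could fan out to the two feedback channels the article describes.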
The team tested the system using humanoid robots and conducted both virtual and real-world training with visually impaired individuals. Results showed significant improvements in navigation and post-navigation tasks. For instance, participants were able to navigate through mazes, move through cluttered meeting rooms, and grasp specific objects.
This research highlights that integrating vision, hearing, and touch enhances the usability and functionality of assistive vision systems. Gu and his collaborators believe future work should focus on further miniaturizing the system and improving its intelligence, paving the way for customized vision assistance solutions.
(By Zhao Li)