Back in 2017, Microsoft released Seeing AI, an app that leverages artificial intelligence to help blind and low-vision users better understand the world around them. Essentially, the technology uses a smartphone's camera to observe the user's surroundings and describe them through narration. It can even analyze people's facial expressions to infer their emotions or describe their physical appearance.
Today, the tech giant has announced new capabilities and improvements heading the app's way. For starters, users are now able to 'explore' photos by simply tapping on an image on their touch screen. The app will read out descriptions of the objects in the image, along with the spatial relationships between them. Photos taken in the Scene channel, stored in the photo browser, or shared on social media can now also be explored through a menu accessible from other apps.
As for improvements to existing features, the order of channels can now be customized, and face recognition can be accessed directly from the main screen. Furthermore, audio cues now indicate when an image is being processed by the app. Lastly, native iPad support has been added, particularly to cater to users who do not use cellular devices in professional environments.
To check out the improvements and additions, you can download Seeing AI for free from the App Store. You can also share feedback through the Disability Answer Desk and the Accessibility User Voice Forum, or by emailing firstname.lastname@example.org.