Apple has long gone out of its way to build features for users with disabilities, and VoiceOver on iOS is an invaluable tool for anyone with a vision impairment, provided every element of the interface has been manually labeled. But the company just unveiled a brand-new feature that uses machine learning to identify and label every button, slider and tab automatically. From a report: Screen Recognition, available now in iOS 14, is a computer vision system that has been trained on thousands of images of apps in use, learning what a button looks like, what icons mean and so on. Such systems are very flexible: depending on the data you give them, they can become expert at spotting cats, facial expressions or, as in this case, the different parts of a user interface. The result is that in any app, users can now invoke the feature and, a fraction of a second later, every item on screen will be labeled. And by “every,” they mean every: screen readers need to be aware of everything a sighted user would see and be able to interact with, from images (which iOS has been able to summarize in a sentence for some time) to common icons (home, back) and context-specific ones like the “…” menus that appear just about everywhere. The idea is not to make manual labeling obsolete; developers know best how to label their own apps, but updates, changing standards and challenging situations (in-game interfaces, for instance) can lead to things not being as accessible as they could be.
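For context, the manual labeling the report refers to is done through UIKit's accessibility properties. Below is a minimal sketch, not taken from the article: the control class and label strings are hypothetical, but the properties (isAccessibilityElement, accessibilityLabel, accessibilityTraits, accessibilityHint) are the standard UIKit API a developer would set so VoiceOver can announce the element without Screen Recognition having to infer anything.

```swift
import UIKit

// Hypothetical custom control: without explicit labeling, VoiceOver
// has nothing meaningful to announce for a view like this, which is
// the gap Screen Recognition's computer vision tries to fill.
final class PlaybackButton: UIControl {
    override init(frame: CGRect) {
        super.init(frame: frame)
        // Expose this custom view to assistive technologies.
        isAccessibilityElement = true
        // What VoiceOver speaks when the element is focused.
        accessibilityLabel = "Play"
        // Announce and treat the element as a button.
        accessibilityTraits = .button
        // Optional extra guidance, read after a short pause.
        accessibilityHint = "Starts playback of the selected track"
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```

When these attributes are set, VoiceOver announces "Play, button" and reads the hint; when they are missing (or stale after a redesign), that is where the new automatic recognition steps in.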
Source: Slashdot – iPhones Can Now Automatically Recognize and Label Buttons and UI Features for Blind Users