Facebook’s New Tool Describes Photos To The Blind; Available Now On iOS App

As more and more photos are uploaded to Facebook, making social networking an increasingly visual experience, the visually impaired face a growing challenge. Using screen reader software on a smartphone, all they hear is the name of the friend who posted the photo and an announcement that a photo is present. That is about to change with Facebook's updated iOS app, which uses a deep convolutional neural network to generate a description of the photo and convey it to the listener. Facebook has launched the world's first automatic alternative text, which generates photo descriptions using object recognition technology.

Facebook Alt Text

Facebook's initiative to build this tool began with a recent study, conducted with researchers from Cornell University, which showed that blind people were interested in interacting with visual content on social media but felt left out because no tools existed to describe photos. Facebook's technical team set out to generate the descriptions as alt text, the HTML attribute that describes an image so that any screen reader software can pick it up. Facebook spent 10 months building the tool so that it could recognise not just what was happening in the foreground but also what was going on in the background.
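To see why alt text is the natural delivery mechanism, here is a minimal sketch of attaching a generated description to an image tag. The function name and markup are illustrative, not Facebook's actual code; the point is simply that any screen reader already knows to read the `alt` attribute aloud.

```python
# Hypothetical sketch: a generated photo description becomes the alt
# attribute of an <img> tag, which screen readers read out by default.
def img_with_alt(src, description):
    """Build an <img> tag whose alt text carries the photo description."""
    return f'<img src="{src}" alt="{description}">'

tag = img_with_alt("photo.jpg", "Image may contain: two people, smiling, outdoor")
print(tag)
# <img src="photo.jpg" alt="Image may contain: two people, smiling, outdoor">
```

Because the description rides along in standard markup, no changes are needed on the screen reader's side.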


Facebook relied on its computer vision platform, which provides a visual recognition engine that identifies elements in a photo. It trained the learnable parameters of a deep convolutional neural network to decipher new elements in photos, and used a list of the 100 concepts most prominent across Facebook photos, covering people's appearance, nature, transportation, food, objects and so on, to generate the alt text. Since image recognition is not without its flaws, the generated alt text prefixes every description with "Image may contain".
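The pipeline described above, scoring a fixed concept list and emitting a hedged description, can be sketched as follows. The threshold value, concept names, and scores here are assumptions for illustration; Facebook has not published its actual cutoffs.

```python
# Hypothetical sketch: turning object-recognition confidence scores into
# hedged alt text. The 0.8 threshold is an assumed value, not Facebook's.
THRESHOLD = 0.8

def generate_alt_text(concept_scores):
    """Keep only concepts the model is confident about, ordered by score,
    and prefix the hedge "Image may contain"."""
    confident = [concept
                 for concept, score in sorted(concept_scores.items(),
                                              key=lambda kv: -kv[1])
                 if score >= THRESHOLD]
    if not confident:
        return "Image may contain: no description available"
    return "Image may contain: " + ", ".join(confident)

scores = {"people": 0.97, "tree": 0.91, "outdoor": 0.85, "car": 0.30}
print(generate_alt_text(scores))  # Image may contain: people, tree, outdoor
```

Filtering by confidence and keeping the "may contain" hedge reflects the design trade-off in the article: it is better to say less, cautiously, than to narrate an error with certainty.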

Currently the feature is available on the iOS app in English for people in the U.S., U.K., Canada, Australia, and New Zealand. The feature will be rolled out to other platforms and other languages soon.

Source: #-Link-Snipped-#, #-Link-Snipped-# & #-Link-Snipped-#
