Example alt text descriptions from Microsoft Word, Facebook and Wikipedia.
By Meg Miller and Ilaria Parogni Feb. 18, 2022
The text boxes above are examples of alt text, which people who are blind or have low vision often rely on when navigating the web. When it’s available, the text can be detected and read aloud or translated into Braille through screen readers, assistive technology that can be accessed in the form of software programs, apps or even browser extensions. For these users, alt text is essential to the online experience.
But it is not always available, or even helpful. Haben Girma, a lawyer and disability rights advocate, said she frequently comes across words like “image,” “jpg” or “graphics” when navigating the web with a screen reader. “That doesn’t tell me anything,” she added.
And in an image-saturated world — more than 63 million images were uploaded to Instagram alone in a single day in February, according to Internet Live Stats — it can be difficult for people who are blind or have low vision to fully experience the web.
Examples of A.I.-generated alt text from Facebook, Alt Text Chrome Extension and Microsoft Word.
Partly in response to this changing landscape, disability rights advocates, people with vision-related disabilities and technologists alike have been coming up with ways to increase the presence and the quality of alt text.
Alt text is usually tucked away in a web page’s HTML code, the language that defines how information will appear in a browser. Screen readers can access the information and translate it into a format that users can interact with, but for those without the assistive technology, alt text is not visible.
When a screen reader encounters this image, a robotic voice announces at high speed, “A kitten at the window. Image. You are currently on an image. To begin interacting with the contents of this image press Control-Option-Shift-Down Arrow.”
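As the article notes, alt text lives in a page's HTML and is surfaced only by assistive technology. A minimal sketch of that idea, using Python's standard-library HTML parser to pull the `alt` attribute from each `img` tag (real screen readers work through platform accessibility APIs, not by parsing markup directly; the sample page below is invented for illustration):

```python
from html.parser import HTMLParser

class AltTextExtractor(HTMLParser):
    """Collect the alt attribute of every <img> tag, noting when it is missing."""

    def __init__(self):
        super().__init__()
        self.alts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # A missing alt attribute is what leaves screen-reader users
            # hearing only "image" or a file name like "jpg".
            self.alts.append(dict(attrs).get("alt"))

# Hypothetical page: one image with alt text, one without.
page = '<img src="kitten.jpg" alt="A kitten at the window."><img src="photo.jpg">'
parser = AltTextExtractor()
parser.feed(page)
print(parser.alts)  # → ['A kitten at the window.', None]
```

The `None` in the result is the gap a screen reader has nothing to announce for — the situation the advocates quoted in this article are trying to eliminate.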
Many social media platforms have features that enable people to add alt text to their posts manually. On sites that permit a longer word count on posts, like Instagram, people may even include the description of the image they are sharing in the caption accompanying it.
Despite the presence of these options, the practice remains little known and mystifying to many.
One analysis of a million homepages, by WebAIM, a nonprofit organization affiliated with Utah State University that focuses on web accessibility, found that as of February 2021, 60.6 percent had instances of missing alt text. A Carnegie Mellon study in 2019 of 1.09 million tweets with images found that only 0.1 percent of those tweets included alt text. (The New York Times has been working on rolling out alt text for its images.)
Examples of A.I.-generated alt text from Facebook, Alt Text Chrome Extension and Microsoft Word.
Some companies have turned to artificial intelligence to increase the presence of alt text. Microsoft and Google have both developed features that use A.I. to automatically generate alt text. In 2016, Facebook debuted its own “automatic alt text,” which uses A.I. to identify objects in images; the feature was added to Instagram in 2018. (Instagram Stories does not have an option for alt text.) Cynthia Bennett, a researcher at Carnegie Mellon University’s Human-Computer Interaction Institute, who is blind and uses a screen reader, said that artificial intelligence and automation have enabled alt text to be generated much more widely. But the descriptions she comes across, she added, “tend to not be very high quality.”
A.I.-generated text can indeed be puzzling: “I have been in situations where the A.I. will say, ‘A person holding a gun,’ and that person is not holding a gun,” Girma said. “The A.I. got confused. Or it says, ‘A child in a chair.’ But it’s not a child; it’s an adult.”
An industry has also sprung up to address issues of quality and scale, with some companies taking a human-based approach and others working on auto-generated alt text.
One company, Scribely, offers alt text written by people. Its chief executive, Caroline Desrosiers, said that while A.I. can identify objects, humans are far better at deciding which parts of an image are important to describe. Also, she said, “Alt text needs to be short and succinct, so we have to make a call on which details we choose to highlight.”
The start-up CloudSight focuses instead on mustering the power of algorithms. Brad Folkens, the chief executive and co-founder of the company, said that its A.I. had to rely on human review in its early stages of development, to ensure the quality of its descriptions. But, he added, the technology has since evolved to be able to function independently. It “does a good enough job” for CloudSight’s clients, he said. (The company still offers human-reviewed services at a premium.)
Some disability rights advocates say that A.I. alt text is improving — yet still often misses the context and what Chancey Fleet, a disability rights advocate and tech educator, calls the “emotional valence” of a human description. “For example, a photo from a Black Lives Matter protest would be something like ‘people street demonstration,’” Fleet said, “and that just does not suffice in moments of importance.”
Alt text examples from Wirecutter, The New York Times’s product recommendation service; Alt Text Chrome Extension; and Microsoft Word.
The inclusion of alt text is a no-brainer to those championing its consistent, wider use.
Some observers, like Thomas Reid, a voice actor and podcast host who is blind, say that social media is helping them broadcast their message and make people more aware. “We’re having these conversations in public, and it’s easy to jump into them,” Reid said.
The designer and disability rights advocate Alex Haagaard has been part of the dialogue with a series, “Shitty Alt Text,” critiquing Instagram’s A.I.-generated alt text for the Mona Lisa (“Image May Contain 1 or More Persons”), Katsushika Hokusai’s “The Great Wave off Kanagawa” (“May Be Art of Water”) and others.
Girma, who is blind, regularly uses her Instagram and Twitter accounts to invite others to write alt text for the images they post online, as well as to share suggestions on how to craft alt text and to direct followers to useful resources. “You don’t need to describe every leaf and detail. Write one or two sentences describing the main point of the image,” the caption of one of her Instagram posts reads.
She is not the only one providing detailed advice on social media. In a Twitter post, a blind activist, Amy Kavanagh, advises her followers to think about context: “If it’s a fashion picture, tell me about the clothes. If it’s a group photo, I don’t need every outfit described.”
Imani Barbarin, a communications professional and disability rights advocate, uses her TikTok account to urge others in her industry to include alt text in their resources for brands and nonprofits: “That’s part of your job,” she says in one video, “you have to include it.”
Examples of alt text from Getty Images, Alt Text Chrome Extension and Microsoft Word.
The artists Bojana Coklyat and Shannon Finnegan have taken a robust approach with Alt Text as Poetry — a website, workbook and series of workshops — in their effort to encourage the use of alt text among artists and on social media.
“When you’re online or on Instagram to have fun or feel a sense of belonging, and you’re constantly getting these dry, minimal descriptions, it takes away from that delight or pleasure,” said Coklyat, who has low vision.
Finnegan and Coklyat said that when people use expressive or playful alt text on social media, whether in the HTML or written into a caption, they can expose others to the practice and inspire them to try it.
Fleet, the tech educator, who is blind, posts on Twitter about alt text and agrees that these efforts are making a difference. She, like Coklyat and Finnegan, has noticed an uptick in people using alt text mindfully on social media.
Digitally savvy social media users have also jumped in to help: Image Description Bot — developed by Cole Gleason, a research scientist working on accessibility — provides auto-generated alt text for tweets with images on Twitter. Alt Text Reminder, another Twitter bot, notifies followers when they have tweeted an image without alt text. (The bot’s creator, Hannah Kolbeck, has also built a tool to quickly generate alt text for Wordle results that can be shared on social media.)
The first description was written by Haben Girma for a photo of herself that was posted on her Instagram account. The others are alt text examples from Microsoft Word and Facebook.
What information and descriptors should and shouldn’t be included in alt text is also fodder for debate.
Coklyat and Finnegan, of Alt Text as Poetry, invite nuanced approaches.
In a workbook on their website, the pair encourages people to engage with complex questions of subjectivity, identity and representation that may come up when translating an image to text. “Lots of information about identity is communicated visually, but that information is often filtered through guesswork, interpretation and bias,” Coklyat and Finnegan write. “When and how do we describe race, gender, disability status, age, height, weight, etc.?”
The question has not been easy to answer. Reid, the podcast host, argues that omitting details like race and gender takes the choice and agency away from the alt-text consumer. “Let me have that experience,” he said. “That’s what access is about.” (He advises people writing alt text to verify how a person in a photo identifies — and if that’s not possible, to describe skin color instead of race.)
Examples of alt text from Gregory the A'ight @driven2drink on Twitter, Microsoft Word and Facebook.
Folkens of CloudSight said that his company is careful not to use racial identifiers in the alt text it generates. “We don’t want to be in the situation where we’re misidentifying those types of categories, because obviously you can get very problematic, very quickly,” he said.
Folkens also pointed to another risk with automated systems: that they replicate the biases — including racial ones — that exist in the data used to train the A.I. “A.I. is not this kind of all-intelligent thing,” he said. “It’s just a very, very good probability engine. So you feed it enough of something, then it’ll have the same kind of bias that’s in the data.”
Though opinions about what details should be included for those using screen readers diverge, most people agree on one point: that the presence of alt text is an important step toward making the internet more accessible.
At its core, they all emphasize, alt text has to encapsulate the purpose and meaning of an image. Or, as Dr. Bennett put it, “what someone would need to know if the image wasn’t there.”
The challenge, Girma said, is to get more people and organizations to use “any alt text at all.”
In that respect, making a change for the better takes only a few mouse clicks and some extra typing — “someone just taking that minute to be a human being and connect,” Coklyat said.
Photo credits: Leonardo da Vinci/Musée du Louvre, via Wikipedia (Mona Lisa); Craig Lee for The New York Times (pancakes); Getty Images (kitten and group of people); Doug Mills/The New York Times (Biden); Poler (Poler Napsack); via Haben Girma (Haben Girma); via Know Your Meme (“My Brain Recording Good Memories, My Brain Recording Bad Memories”).
The alt text descriptions for the images above were captured in January and February of 2022.
Designed by Sean Catangui. Produced by Alicia DeSantis, Deborah Leiderman, Tala Safie and Josephine Sedgwick.