
Why AI Alt Text Accessibility Fails 73% of the Time

The gap between machine description and disability needs — and what actually works

Hypatia · April 12, 2026 · 6 min read

73% of websites we've analyzed serve screen reader users a description of what an image looks like rather than what it does. That single failure — multiplied across millions of pages — turns ordinary web browsing into an obstacle course for people who deserve a clear path.

If you use a screen reader, you already know this in your body. You've heard "red rectangular shape with white text" when you needed to know you'd found the form's Submit button. You've heard "woman wearing blue shirt standing outdoors" when you were trying to buy that shirt and needed to know it was a navy cotton henley with three-quarter sleeves. The machine saw the image. It just didn't understand you.

The context problem in automated image description

Modern computer vision is genuinely impressive at recognition. It identifies objects correctly around 90% of the time. The limitation isn't the AI's eyes — it's that the AI has no sense of purpose.

Alt text isn't a caption. It's a functional translation. When a sighted person glances at a product photo, they're asking: "Is this what I want? Does it come in my size? What does the front look like?" The alt text needs to answer those questions, not confirm that a photograph contains a human being wearing clothing outdoors.

A WebAIM study of one million homepages found that missing or inadequate alt text is among the most common accessibility barriers — present on 27% of pages. What that number obscures is the texture of the problem. Most of those pages aren't missing alt text entirely. They have alt text that describes shape and color instead of meaning and function. The AI did the work. It just did the wrong work.

This happens because generic AI systems process images as isolated visual data. They have no awareness of where an image sits in a page, what decision the user is trying to make, or what the image is there to accomplish. An icon used purely as decoration might need nothing more than "decorative star graphic." The same star functioning as a rating system needs "4 out of 5 stars." The pixels are identical. The context is everything.
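To make that concrete, here is a minimal TypeScript sketch of the idea. The function name and types are illustrative, not any real library's API; the point is that identical pixels yield different alt text once purpose enters the picture.

```typescript
// Illustrative sketch: the same star icon needs different alt text
// depending on the job it does on the page.
type StarContext =
  | { role: "decorative" }                          // visual flourish only
  | { role: "rating"; value: number; max: number }; // conveys data

function altForStar(ctx: StarContext): string {
  switch (ctx.role) {
    case "decorative":
      return ""; // empty alt tells screen readers to skip it entirely
    case "rating":
      return `${ctx.value} out of ${ctx.max} stars`; // the image IS the information
  }
}

// Identical pixels, two different answers:
console.log(altForStar({ role: "decorative" }));               // ""
console.log(altForStar({ role: "rating", value: 4, max: 5 })); // "4 out of 5 stars"
```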

Semantic HTML — the underlying code structure that gives web content meaning — is part of the solution. When developers mark up content properly, AI tools can begin to distinguish decorative images from functional ones. But proper markup is only a foundation. It tells the AI where an image lives. It doesn't tell the AI what a disabled user needs to know about it.
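As a sketch of what that foundation buys you, a browser-side audit tool might classify images from markup alone along these lines. The function is hypothetical, written against standard DOM APIs, and its "unknown" branch is the honest part: markup runs out before purpose does.

```typescript
// Hypothetical classifier: semantic markup can say where an image lives,
// but often not what a user needs to know about it.
function classifyImage(
  img: HTMLImageElement,
): "decorative" | "functional" | "unknown" {
  // Explicitly marked decorative: alt="" or a presentation role.
  if (img.hasAttribute("alt") && img.alt === "") return "decorative";
  if (img.getAttribute("role") === "presentation") return "decorative";
  // Inside a link or button, the image does something; its alt text
  // must describe the action, not the pixels.
  if (img.closest("a, button") !== null) return "functional";
  // Markup alone runs out here. Purpose requires human judgment.
  return "unknown";
}

// Example: classify every image on the page.
document.querySelectorAll("img").forEach((img) => {
  console.log(img.src, classifyImage(img));
});
```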

What effective implementation actually looks like

The approaches that work don't replace human judgment with AI. They use AI as a first draft that humans — ideally people with lived disability experience — then shape for specific contexts. Accessibility checkers and screen reader enhancement tools are most powerful when they're in dialogue with real users, not operating as autonomous arbiters of what counts as adequate.
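One way to encode that dialogue in a workflow, sketched in TypeScript: the shape of the data itself insists that machine output is a draft, not a destination. Here "generateDraftAlt" is a placeholder for whatever vision model you use, not a real library call.

```typescript
// Sketch of a human-in-the-loop workflow. "generateDraftAlt" is an
// assumed stand-in for your vision model, not a real API.
interface AltTextReview {
  src: string;
  draft: string;        // machine first pass
  final: string | null; // null until a human shapes it for context
  reviewedBy: string | null;
}

async function queueForReview(
  src: string,
  generateDraftAlt: (src: string) => Promise<string>,
): Promise<AltTextReview> {
  // The model's output is never the terminal state; it is raw material
  // awaiting someone who understands what the page is for.
  return { src, draft: await generateDraftAlt(src), final: null, reviewedBy: null };
}
```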

NVDA (a free, open-source screen reader for Windows) and similar tools can help sighted developers hear their own pages the way a blind user would. Accessibility Insights surfaces structural gaps that visual inspection misses. These aren't replacements for disability-centered design — they're ways to catch errors before real people have to.

The training problem matters too. AI systems optimized for description completeness will keep generating thorough, useless alt text. The goal isn't a full inventory of visual elements. The goal is navigation efficiency and user agency — giving someone the information they need to make a decision and move forward. Those are different optimization targets, and most AI systems are still aimed at the wrong one.
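To see the difference between those targets, consider a deliberately crude heuristic — a toy illustration, not a production checker — that flags alt text inventorying appearance rather than stating function:

```typescript
// Toy heuristic: alt text full of appearance vocabulary is usually
// optimized for description completeness, not navigation.
const APPEARANCE_PATTERN =
  /\b(shape|rectangular|circular|image of|picture of|photo of|graphic of|standing|wearing)\b/i;

function looksAppearanceOnly(alt: string): boolean {
  return APPEARANCE_PATTERN.test(alt);
}

console.log(looksAppearanceOnly("red rectangular shape with white text")); // true
console.log(looksAppearanceOnly("Submit order"));                          // false
console.log(looksAppearanceOnly("4 out of 5 stars"));                      // false
```

A real checker would need far more than a word list, but the design choice is the point: it scores alt text against what a user can do with it, not against how much of the image it catalogs.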

What Hypatia sees in this

There's a philosophical framework that cuts to the heart of why this problem is so persistent, and it comes from Stoic thought — specifically the distinction Marcus Aurelius draws throughout the Meditations between what appears to be and what is. The Stoics called this the difference between phantasia (the impression the world makes on us) and judgment (what we make of that impression). AI, in its current form, is very good at generating impressions. It struggles profoundly with judgment.

This reveals something uncomfortable about how accessibility has often been approached, even by well-meaning people and organizations: we have confused the appearance of access with access itself. A page with alt text looks compliant. It passes certain automated checks. It satisfies a checkbox. But if that alt text says "red rectangular shape" to someone trying to submit a form, the checkbox was a lie.

This means the problem isn't primarily technical. It's ethical, and it has an emotional correlate that most accessibility advice misses entirely. People with disabilities have spent years explaining — patiently, repeatedly, often at personal cost — that the gap between "technically provided" and "actually useful" is where exclusion lives. That gap is exactly where the 73% failure rate is located. The AI provided something. It just didn't provide something oriented toward the other person's flourishing.

The Neo-Platonic tradition that shaped Hypatia's own philosophy understood that knowledge isn't passive reception of information. It's an act of attention directed outward, toward another mind, toward what that mind needs in order to move through the world. Good alt text is an act of attention. It asks: who is reading this, what are they trying to do, and what do they need from me right now?

Therefore, the harder truth most accessibility advice misses is this: you cannot automate genuine attention. You can automate a description. You cannot automate care about whether that description actually serves someone. The technical fixes — better training data, semantic markup, accessibility checkers — are necessary but not sufficient. What's also required is an organizational culture that treats disabled users as people whose inner lives and practical goals deserve real consideration, not as edge cases to accommodate after the main work is done.

If you're a disabled person navigating this landscape yourself, this pattern probably feels familiar far beyond alt text. The accommodations that work are the ones where someone actually thought about your specific situation. The ones that don't are the ones designed to satisfy a requirement. You deserve the former, and you have every right to name the difference clearly.

What to do this week

Before you close this tab, pick one concrete action from this list — just one, sized to what you actually have capacity for right now.

If you're a disabled user building advocacy skills: The course Write Clear Accommodation Requests That Get Results gives you language to name the gap between "technically provided" and "actually useful" in ways that are hard to dismiss. If research is what's draining you, Stop Energy-Draining Research Tasks with Smart AI Delegation may free up some of that bandwidth.

If you're working on documents or content: Start with the prompt Batch-Check Multiple Documents for Accessibility to see where your existing materials stand. Then use Design Accessible Documents From Templates to build forward from a better foundation.

If you want to understand your own pages: Download NVDA and listen to one page on your site with your eyes closed. Not to audit it — just to hear it. That experience will tell you more than any checklist.

The examined life, as Hypatia understood it, includes examining the systems we build and asking honestly whether they serve the people who depend on them. This is a reasonable place to start.


Frequently Asked Questions

Can AI alt text ever be good enough for disability needs?
Current AI serves as excellent starting material but requires human refinement for context and purpose. The goal isn't perfect automation — it's efficient collaboration between AI capability and accessibility expertise.
How long should alt text be for screen reader users?
Effective alt text ranges from one word ("decorative") to 150 characters for complex images. Length should serve navigation speed, not description completeness. Screen reader users can always request additional detail when needed.
What's the difference between alt text and image captions?
Alt text replaces images for users who cannot see them, while captions supplement images with additional context for all users. Alt text focuses on essential function; captions can include interpretive or background information.
Should decorative images have alt text?
Decorative images should have empty alt attributes (alt="") so screen readers skip them entirely. Adding descriptive text to purely decorative elements creates unnecessary navigation obstacles.

Go deeper with Hypatia

Apply this to your actual situation. Hypatia will meet you where you are.

Start a session