Dr. Michele A. Williams has spent her career helping organizations stop treating accessibility as a compliance checkbox and start building it as a design constraint from day one. Her framework is clear: if disabled people are not in the room when you are designing, testing, and shipping — you are not doing governance. You are doing theater.
The Mindset Problem Comes Before the Technical Problem
The medical model — treating disability as something to rehabilitate around — dominates institutional thinking. It positions disabled people as edge cases rather than full contributors. Until that mindset shifts, checklists will substitute for lived experience and audits will be snapshots instead of systems.
The social model flips this. Disability is not the disabling force. The lack of access is. If you build your product around the diversity of people who will actually use it — you stop asking disabled people to make all the adjustments and start designing systems that work.
Where AI Backfires on Accessibility
Leaders are being sold that AI solves accessibility. Dr. Williams is precise about where this breaks down:
- → Auto-generated captions are often inaccurate enough that they require human review before they are genuinely useful
- → AI-generated alt text produces descriptions, not context — "a woman in a meeting room" does not tell you why the image exists
- → Automated accessibility tests cover only a fraction of the Web Content Accessibility Guidelines (WCAG), and miss the user experience entirely
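To see why automated testing falls short, consider what such a tool can actually verify. The sketch below is a hypothetical, minimal checker (not any real audit tool) that flags images missing an alt attribute entirely, which is the kind of mechanical check automation handles well. What it cannot do is judge whether alt text that is present conveys why the image exists: "a woman in a meeting room" passes this check while still failing the user.

```python
# Minimal sketch: an automated check can detect a MISSING alt attribute,
# but it cannot evaluate whether present alt text is actually meaningful.
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = []   # <img> tags with no alt attribute at all
        self.present_alt = []   # alt text found; usefulness is NOT verified

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                self.missing_alt.append(attr_map.get("src", "(no src)"))
            else:
                self.present_alt.append(attr_map["alt"])


sample_html = """
<img src="chart.png">
<img src="team.jpg" alt="a woman in a meeting room">
"""

checker = AltTextChecker()
checker.feed(sample_html)
print(checker.missing_alt)   # → ['chart.png'] — the tool catches the omission
print(checker.present_alt)   # → ['a woman in a meeting room'] — context unchecked
```

The gap between these two lists is the point: automation can confirm an attribute exists, but only a human reviewer, ideally one with disability expertise, can confirm the description serves the person relying on it.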
The Signal
Nothing about us without all of us.
Three questions for this week:
- → Are disabled people in the room when your team defines the problem — not just testing the solution after it ships?
- → Is your AI-generated content being reviewed by a human with disability expertise — or just by the tool that produced it?
- → Do your procurement criteria currently require vendors to demonstrate accessibility compliance — with documentation?
Exclusion is the default setting — not because anyone chose it, but because no one designed against it.
About Human Signal
Dr. Tuboise Floyd | Founder, Human Signal
Human Signal is an independent AI governance research and media platform dedicated to institutional risk analysis. We reverse-engineer institutional AI failures and develop frameworks operators can use when it matters — not frameworks designed to satisfy an audit.
Govern the machine. Or be the resource it consumes.
— Dr. Tuboise Floyd · Founder, Human Signal
#Accessibility #AIGovernance #InclusiveDesign #HumanSignal