Interview · 51 min

Digital Accessibility in the AI Era: Making It Actually Work

#DigitalAccessibility #AIGovernance #DisabilityInclusion #AccessibleUX #HumanSignal #NothingAboutUsWithoutAllOfUs

Listen to This Episode


97% of the web still presents accessibility barriers to disabled people. That is not a rounding error. That is the infrastructure. Pages average more than 50 preventable errors each — missing alt text, broken keyboard paths, unreachable forms. And now AI is touching everything from UX research to captions to code, accelerating the pace at which those failures get encoded and shipped at scale.

In this episode of The AI Governance Briefing, Dr. Tuboise Floyd speaks with Dr. Michele A. Williams — Accessibility Consultant, UX Researcher, and author of Accessible UX Research (Smashing Magazine) — about what it actually takes to build accessibility into AI-era products. Not as a compliance checkbox. As a governance requirement.

Key Takeaways

  • 97% of the web has accessibility issues. AI trained on that web reproduces those failures at scale
  • Accessibility is not an edge case — 1 in 4 U.S. adults has some form of disability, and that figure is underreported
  • The risk is not a missed alt tag. The risk is automated discrimination encoded into everyday operations
  • Ableism — the assumption that non-disabled ways of operating are the default — is the root cause. Until the mindset shifts, the actions will not change
  • AI is a tool, not a savior. Auto-captions, auto-generated alt text, and automated scans are starting points — not substitutes for disabled expertise and human review
  • The real move is not building more technology for disabled people. It is removing barriers from the system
  • If your AI strategy produces systems disabled people cannot use, your governance is performative, not protective
  • Nothing about us without all of us — disabled people must be designers, decision-makers, and leaders, not just research participants

What Is Actually at Stake

Dr. Williams frames the risk in three buckets: equity, resilience, and trust. Equity: who is getting to use your systems, and who are you shutting out — quietly or loudly? Resilience: whether your products handle real-world diversity or fall apart at what you call edge cases. Trust: when people repeatedly experience access barriers, they leave — and they lose confidence in your brand, your institution, and your leadership.

"Not having accessibility in these areas is already doing that currently. AI is then magnifying that — encoding and accelerating those failures at scale." — Dr. Michele A. Williams

Dr. Floyd sharpened the stakes immediately: the risk is not a missed alt tag. The risk is that a team automated discrimination into its everyday operations.

"If your AI strategy is producing systems that disabled people can't use, that's a pretty strong sign your governance is performative and not protective."

The Disability Mindset Problem

Before tactics, Dr. Williams insists on the mindset. What she sees inside product teams mirrors wider society: disability is othered and made invisible. It is the largest minority group — and yet treated as a group that does not need to be accounted for, because non-disabled people often assume disabled people are not full contributors.

"This is ableism — the assumption that non-disabled ways of moving, thinking, and communicating are the default, and everything else is a deviation and even to some extent lesser." — Dr. Michele A. Williams

Two models define how teams act. The medical model treats disability as something to rehabilitate. The social model flips that: it is not the disability that is disabling — it is the lack of access and the barriers built by others. Until leaders in positions of power shift from the medical model to the social model, the actions will not truly change.

A UX research team, in a recorded call, asked: "If we have accessibility specialists, why do we need disabled people in our research?" Dr. Williams' response: we have design guidelines, but we do not ask "why do we need to talk to users?" Somehow, when disability enters the conversation, checklists become a comfortable substitute for lived experience.

Language: Person First, Identity First, and What Matters More

Three approaches currently exist. Person first ("person with a disability") centers the individual before the condition — it came from a good place but inadvertently made disability feel like a bad word. Identity first ("disabled person") treats disability as a descriptor, a core part of human diversity. That is the framing Dr. Williams prefers. Avoidance language (terms like "handicapable") is rarely preferred by disabled people themselves.

The Deaf community adds important nuance: capital-D Deaf is a cultural identity, not a disability or impairment label — this is the cultural model of disability. The principle is consistent: ask individuals what they prefer, honor that, and do not let language anxiety become a reason to avoid the relationship and the work.

"The number one preferred way a person wants to be identified is by their name." — Dr. Michele A. Williams

AI and Accessibility: Tool, Not Savior

AI has genuine potential for accessibility at scale — auto-captions are a step forward from no captions, alt text generation helps with large media banks, automated scans can flag code issues efficiently. But where it backfires: auto-captions are often not good enough and need human correction. Auto-generated alt text describes what is in an image without conveying why it is there. Automated tests cover only a portion of WCAG guidelines — and even when accurate, they miss the user experience that only manual testing with real users reveals.
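The limits of automated scanning can be made concrete. The sketch below is purely illustrative — it is not any real scanning product, just Python's standard-library `html.parser` — and it shows the kind of mechanical check automated tools do well (flagging missing or empty `alt` attributes) while exposing the gap they cannot close: judging whether the alt text that is present actually conveys why the image is there.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags whose alt attribute is missing or empty.

    This is the mechanical slice of accessibility that automated
    scanners handle well. Whether existing alt text is *meaningful*
    in context still requires human judgment.
    """

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attr_map = dict(attrs)
        src = attr_map.get("src", "(no src)")
        if "alt" not in attr_map:
            self.issues.append(f"{src}: missing alt attribute")
        elif not (attr_map["alt"] or "").strip():
            self.issues.append(f"{src}: empty alt text")

snippet = """
<img src="chart.png">
<img src="logo.png" alt="">
<img src="team.jpg" alt="Photo of the research team">
"""

checker = AltTextChecker()
checker.feed(snippet)
for issue in checker.issues:
    print(issue)
# -> chart.png: missing alt attribute
# -> logo.png: empty alt text
```

Note what the check passes silently: the third image has alt text, so the scanner is satisfied — but only a human reviewer can say whether "Photo of the research team" is the right description for that image in that context. That is precisely the portion of the guidelines that manual review with real users still has to cover.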

"97% of the web has some accessibility issues. If AI was trained on what's present, and what's present is inaccessible — what's produced is inaccessible as well." — Dr. Michele A. Williams

This is the GASP™ problem applied to accessibility: the structural failure was already in place before AI arrived. AI does not introduce it. It inherits it, encodes it, and ships it faster. Easter Seals ran a campaign with the tagline "disability isn't a dirty word" for exactly this reason — the stigma is structural, not incidental.

The Disability Dongle

Disability advocate Liz Jackson coined the term disability dongle for solutions that look clever to non-disabled people but do not address what disabled people actually need. Dr. Williams' example: the sensor-equipped smart cane. Engineers add vibration sensors and connectivity. White cane users report the devices are heavy, expensive, distracting, and do not reliably communicate where an obstacle is or what to do. They are not solving the real problem — what many blind pedestrians actually want is safer, better-designed environments.

"Before we ask what we can build — let's be clear about whether we need to build something. What is the actual problem, and what would be a meaningful solution?" — Dr. Michele A. Williams

Exclusion Is the Default Setting

Dr. Floyd stated it plainly: exclusion becomes the default not because anyone chose it, but because no one designed against it. Dr. Williams confirmed: by and large, exclusion is the default. His own experience illustrated it — asking Siri to open Zoom accidentally triggered the accessibility zoom feature on his iPhone, locking the screen at maximum magnification before an important meeting. He had to go home, sit at a computer, and Google how to undo it. For disabled users who depend on those settings daily, inaccessible interfaces are not inconvenient — they are exclusionary.

The deeper issue: burying assistive settings under a separate "Accessibility" menu rather than integrating them into standard settings signals that these features are for a special subset of users — not for everyone. As Dr. Williams notes, anyone over 40 benefits from larger text. All technology is, in some sense, assistive.

AI Can Draft. A Human Must Review.

Dr. Floyd drew the line clearly: AI can draft, but the human has to review — and that review must include disability expertise. You cannot use AI as a stand-in for that expertise.

"Real human in the loop — not the imitated human — is going to be important. AI should be a tool embedded in a thoughtful, accessible process, but not a replacement for disabled participants or human judgment." — Dr. Michele A. Williams

This connects directly to what Human Signal calls the Trust Gap: governance that exists on paper but cannot intervene at execution. An accessibility review that substitutes an AI scan for disabled human expertise is structurally insufficient — it looks like oversight, but it cannot catch what it was not designed to see. AccessiBe's million-dollar fine for false accessibility claims is a reference point worth studying here.

The 90-Day Accessibility Commitment

For the Gen X leader who is convinced but overwhelmed, Dr. Williams offers a concrete, realistic arc — not heroic transformation, but a structural build.

Days 1–30 · Establish Your Baseline

Scan where AI and accessibility already intersect in your organization. Where are you running auto-captions, auto-generated alt text, AI-written copy, or AI-generated code? For each, identify what non-AI tool or human review validates the output. Ask your research and product teams: are disabled people included in studies and outreach? If the answer is no or never, flag it. The goal of month one is to stop guessing and learn how your system is actually behaving.

Days 31–60 · Change the Defaults

Identify whether your current processes will produce accessible outcomes. Is your design system built accessible? Are coding tools overriding accessibility standards? Add accessibility and AI impact questions to procurement criteria — ask vendors whether their platforms meet standards and whether their AI tools allow human-in-the-loop review of outputs. Choose one upcoming research project and make inclusion of disabled participants non-negotiable. Build time and budget in. This is where you stop relying on individual champions and start putting this in infrastructure.

Days 61–90 · Run Inclusive Research and Document

Run the inclusive research you planned. Bring clips of disabled users navigating your products with assistive technology to leadership reviews. Let executives hear directly how what they ship actually performs. Document what you will change as a result. Define what gets fixed, what gets postponed, and what does not ship until the change is made. Accessibility is not a special project — it is part of the feedback loop, the muscle you are building.

"Accessibility isn't this special project we're going to get to. It's just part of a feedback loop — building that muscle of thinking about it, asking about it — and it's starting to feel normal." — Dr. Michele A. Williams

Dr. Williams is clear about what she actually does when organizations bring her in: she is not conducting audits. An audit is a snapshot. She is redesigning how teams work — building a practice that sustains after she is gone. Once accessibility is embedded in procurement criteria and workflow templates, it is much harder to quietly roll back.

Nothing About Us Without All of Us

The disability rights movement gave the world a mantra: nothing about us without us. It has since evolved.

"Nothing about us without all of us. There should not be spaces where someone is left out intentionally." — Dr. Michele A. Williams

For leaders: recognize disabled professionals — not just disabled research participants. They should be hired, promoted, and positioned as real decision-makers. If your tools and workflows are inaccessible, you are locking disabled talent out before they ever get a seat at the table. AI policies, safety frameworks, and research standards must be built with disabled people in the room — so that we are not rebuilding the same exclusion at higher speed.

"If we're serious about AI governance, we must include disabled people — and we must have them be among the people we design with and answer to, as a built-in accountability." — Dr. Tuboise Floyd


Full Transcript

Lightly edited for readability. Speaker labels and timestamps preserved from original recording.

[00:00] Opening

Dr. Tuboise Floyd: Welcome to Human Signal. 97% of the web still presents accessibility barriers to disabled people. That's not a typo. That's our digital infrastructure. Pages average over 50 preventable errors each — missing alt text, broken keyboard paths, unreachable forms. If you're a Gen X leader running a team, a product line, or a whole division, this isn't an edge case. This is your user base, your legal risk, and your culture baked into every screen you ship. I'm Dr. Tuboise Floyd and this is Human Signal. Today we're asking: how do you make digital accessibility actually work when AI is touching everything from UX research to captions to code? My guest: Dr. Michele A. Williams — Accessibility Consultant, UX Researcher, and author of Accessible UX Research, published by Smashing Magazine. Michele, welcome to Human Signal.

[01:53] What's at Stake

Dr. Tuboise Floyd: The Gen X product VP, the engineering director, the civic tech lead — they see AI everywhere, see lawsuits in the headlines, and know accessibility is important. But it feels like a compliance fire. From your vantage point, what's really at stake?

Dr. Michele A. Williams: Three buckets. Equity — who is getting to use your systems, and who are you shutting out? Resilience — whether your products handle real-world diversity or fall apart at edge cases. Trust — when people repeatedly experience access barriers, they leave. They lose confidence in your brand and your leadership. Not having accessibility is already doing that. AI is then magnifying it — encoding and accelerating those failures at scale.

Dr. Tuboise Floyd: So the risk isn't just we missed an alt tag. It's that we automated discrimination into our everyday operations.

Dr. Michele A. Williams: Exactly. Because AI is so fast and almost invisible, it just amplifies what's already been happening — that accessibility has primarily been missed.

[04:01] Disability Mindset

Dr. Tuboise Floyd: In your book, you start with disability mindset. What's the mindset problem inside teams?

Dr. Michele A. Williams: What I see in teams reflects wider society. Disability is othered and made invisible. It's the largest minority group — yet still treated as a group that doesn't need to be accounted for. This is ableism — the assumption that non-disabled ways of moving, thinking, and communicating are the default, and everything else is a deviation. The medical model says disability is something to rehabilitate. The social model flips that — it's not the disability that's disabling, it's the lack of access and the barriers built by others. Until there's a mindset shift, the actions won't truly change.

[06:15] Checklists vs. Lived Experience

Dr. Tuboise Floyd: You share a story in your book about UX researchers who asked: if we have accessibility specialists, why do we need disabled people in our research?

Dr. Michele A. Williams: We have design guidelines, but we don't say "why do we need to talk to users?" — meaning non-disabled users. We know guidelines are not enough. But when disability enters the mix, people become comfortable substituting checklists for lived experience. Disabled users are treated as optional, not yet considered.

[07:26] Language

Dr. Michele A. Williams: Three primary approaches. Person first — "person with a disability" — came from the cognitive impairment community and came from a good place, but ended up making disability feel like a bad word. Identity first — "disabled person" — treats disability as a descriptor, part of the diversity of human experience. That's how I tend to align. Avoidance language — terms like "handicapable" — is rarely preferred by actually disabled folks. The most important thing: ask individuals what they prefer. And remember, the number one preferred way a person wants to be identified is by their name. Don't let language anxiety stop you from doing the actual work.

[13:08] AI and Accessibility

Dr. Michele A. Williams: AI has potential to do great things for accessibility at scale — but only if current tactics are disrupted. AI has already done real harm in hiring algorithms, benefits decisions, surveillance technologies. It's a tool, not a savior.

Dr. Tuboise Floyd: Standard models are not disability-first by design. Products get shipped that look fine visually but don't have correct semantic structure — no labels, no logical focus order. That's chaos to assistive technology.

Dr. Michele A. Williams: Exactly. Inaccessibility in web and mobile apps comes from how things get coded. Assistive technologies need to understand the code to know what to do. When the code is not semantic — not meaningful — the assistive technology cannot engage it. And AI was built on a web where 97% of pages have accessibility issues. What it was trained on is inaccessible. What it produces is inaccessible.

[31:23] Exclusion Is the Default

Dr. Tuboise Floyd: Exclusion becomes the default setting — not because anyone chose it, but because no one designed against it.

Dr. Michele A. Williams: The hope is that we are more intentional. But by and large, yes — exclusion is the default setting.

[31:36] Disability Dongles

Dr. Tuboise Floyd: In your book, you write about disability dongles — solutions that look clever to non-disabled people but don't actually help. Give us an example.

Dr. Michele A. Williams: I'll first acknowledge disability advocate Liz Jackson for coining this term. A classic example: the smart cane. Engineers add sensors, vibrations, connectivity. But white cane users report these devices are heavy, expensive, distracting, and don't reliably tell you where an obstacle is or what to do. They're not solving the real problem. What many blind pedestrians actually want is safer environments. Before we ask what we can build — let's ask whether we need to build something at all.

[34:02] Where AI Auto-Tools Backfire

Dr. Michele A. Williams: Auto-captions are often not good enough and need correction. Alt text generation may describe what's in an image without conveying why it's there. Automated tests only cover a portion of WCAG guidelines — and even if accurate, they miss the overall user experience that comes from manual testing. AccessiBe is a company that often comes up here — look at the million-dollar fine they're facing for false claims. That will give you some understanding of hype versus reality.

[36:26] AI Can Draft. A Human Must Review.

Dr. Tuboise Floyd: The pattern is AI can draft, but the human has to review — and that review must include disability expertise. You cannot use AI as a stand-in for that expertise.

Dr. Michele A. Williams: Real human in the loop — not the imitated human. AI should be a tool embedded in a thoughtful, accessible process, but not a replacement for disabled participants or human judgment. Every practitioner, regardless of field, should carry some baseline understanding of disability inclusion. The expert is for the complex edge. The baseline is not optional.

[39:57] Redesigning How Teams Work

Dr. Tuboise Floyd: You are not doing audits. You are redesigning how teams work.

Dr. Michele A. Williams: An audit is just a snapshot. I'm more interested in helping teams build a practice that will sustain after I'm gone. I want you to feel confident this is how you operate.

[40:40] The 90-Day Arc

Dr. Michele A. Williams: First 30 days: establish your baseline. Scan where AI and accessibility intersect. Ask whether disabled people are included in research. Stop guessing and learn how your system is actually behaving. Days 31 through 60: change the defaults. Update design systems, procurement, coding tools. Add accessibility and AI impact questions to vendor selection. Make disabled participant inclusion non-negotiable on one upcoming research project. Days 61 through 90: run inclusive research. Bring clips to leadership reviews. Document what changes. Define what ships and what doesn't. Accessibility is just part of a feedback loop — the muscle you're building.

[46:15] Where Human Signal and This Work Intersect

Dr. Tuboise Floyd: Our mission is to help real operators run slow, safe rollouts, rewrite procurement criteria, and build governance around reality instead of vendor hype. Accessibility is one of the sharpest reality checks we have.

Dr. Michele A. Williams: If your AI strategy is producing systems that disabled people can't use, that's a pretty strong sign your governance is performative and not protective.

[47:59] Nothing About Us Without All of Us

Dr. Michele A. Williams: Recognize disabled professionals — not just disabled participants. They should be hired, promoted, positioned as real decision-makers. If your tools and workflows are inaccessible, you're locking disabled talent out of the field and out of leadership. Policies, safety frameworks, research standards — all of these need to be built with disabled folks in the room, so we're not rebuilding the same exclusion at higher speed. The phrase: nothing about us without all of us. There should not be spaces where someone is left out intentionally.

[49:28] One Action This Week

Dr. Michele A. Williams: Start with probes. Ask around. Ask if people in your organization are even familiar with accessibility. There may be people doing it right now who just aren't formally empowered to say so. There may be in-house champions you aren't leveraging. Good UX practices, good code practices, and inclusive policies are already the foundations of accessibility. The main thing is to start asking. Start probing. See where you are.

[50:34] Closing

Dr. Tuboise Floyd: If you're thinking you need help, this is literally what Dr. Michele A. Williams does. She works with organizations to redesign their research practices, product decisions, and workflows so that accessibility is built in — not bolted on. Pick up her book, Accessible UX Research, and visit her online to learn how to bring her in as a strategic partner. This is Human Signal. Thanks for listening. This is Dr. Floyd signing off.


About the Guest: Dr. Michele A. Williams

Dr. Michele A. Williams is an Accessibility Consultant and UX Researcher whose practice helps product, research, and policy teams build disability inclusion into their design and research processes from day one — not as a retrofit. She is the author of Accessible UX Research, published by Smashing Magazine. Her mission: make accessibility accessible. Visit her online and pick up her book to bring her in as a strategic partner for your organization.

About the Host: Dr. Tuboise Floyd

Dr. Tuboise Floyd is the founder of Human Signal, an independent AI governance research and media platform based in Washington, DC. A PhD social scientist and former federal contracting strategist, he reverse-engineers institutional AI failures and designs governance frameworks that survive real humans, real incentives, and real pressure. Connect on LinkedIn.

Build Your AI Governance Competency

TAIMScore™ Assessor Workshop — Learn to assess AI governance maturity using the TAIMScore™ framework. The professional credential for institutional operators who own the outcomes.

→ TAIMScore™ Assessor Workshop → Register Now

Subscribe to The AI Governance Briefing — New episodes every month. No vendor decks. No compliance theater. Just signal.

→ Subscribe to the Podcast → ✦ Underwrite Human Signal