In July 2025, Dr. Jeanetta Floyd, Associate Professor at Georgetown University, published a short LinkedIn essay titled "Words Matter in AI Conversations." It did not announce a framework. It did not propose a taxonomy. It did something more useful than either. It named, in the disciplined language of a Georgetown professor who teaches through problem-based learning, a discomfort that the AI governance field has been circling for two years and refusing to land on.
The discomfort is this: when we call a model's statistical error a "hallucination," we are not being colorful. We are making a pedagogical choice. And that choice has consequences far beyond the seminar room.
Nine months later, in April 2026, I posted "The Pedagogy Problem in AI Governance" to SSRN. Dr. Floyd's piece was one of the signals I was tracking as I worked through the position paper — one of several practitioner reflections that told me the field was already feeling the failure even if it had not yet named it. That is what research in a novel field looks like. You read what educators, technologists, and operators are saying at the edges. You connect the dots. You formalize what the practice is already pointing at. And you accept the burden of writing the standard down, because in a field this young, the standard does not yet exist and someone has to bear the weight of building it.
This issue of The AI Governance Record is an attempt to do something the field rarely does well: name another researcher's contribution, build on it explicitly, and show why the pedagogy problem in AI governance is not a niche academic concern. It is the load-bearing failure underneath most of what we currently call "AI literacy," "responsible AI training," and "user education."
I
What Dr. Floyd actually said.
Strip the LinkedIn formatting away and her argument runs in three moves.
Move one. When a large language model produces an erroneous output, that output is not a glitch and not a cognitive event. It is prediction under uncertainty — the expected behavior of a system sampling from probability distributions shaped by training data. The model did not perceive anything. It did not forget anything. It sampled a token sequence from the conditional distribution induced by an under-specified prompt and an incomplete training distribution. (A toy sketch of that mechanism follows the three moves.)
Move two. Calling that behavior "hallucination" imports a clinical and cognitive vocabulary that does not belong to the system. The metaphor is not neutral. It softens. It anthropomorphizes. And in doing so, it dismisses three things the field cannot afford to dismiss: our responsibility to teach users how to interrogate outputs, the structural gaps in training data and representational equity, and our obligation to engage with the actual choices in model architecture, data curation, and system objectives that produced the error.
Move three. The defense of the term — that "hallucination" is more relatable, more palatable, more accessible to non-experts — collapses under scrutiny. Simplification is not the same as accessibility. Accessibility is offering accurate, transparent explanations that respect a user's capacity to understand complexity when it is clearly communicated. Catchy metaphor is not respect. It is condescension wearing a friendly face.
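To make move one concrete: a minimal sketch, in Python, of the mechanism being described: logits, a softmax over them, a weighted draw. The vocabulary and scores below are illustrative assumptions, not any production model; the point is only that the operation is arithmetic, not perception.

```python
import math
import random

# Toy next-token step: logits -> softmax -> weighted draw.
# Vocabulary and scores are illustrative assumptions, not any real model.
vocab = ["Paris", "Lyon", "Toulouse", "1789"]
logits = [2.0, 0.7, 0.4, 0.1]  # raw scores the network computed per token

def softmax(scores, temperature=1.0):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
# The system does not "know" an answer; it draws from this distribution.
token = random.choices(vocab, weights=probs, k=1)[0]
print({v: round(p, 3) for v, p in zip(vocab, probs)}, "->", token)
```

Run it twice and you may get different tokens. Nothing was perceived, forgotten, or hallucinated. A distribution was sampled.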
II
Why this is a pedagogy problem, not a vocabulary problem.
The temptation, when you read Dr. Floyd's piece, is to treat it as a debate about word choice. Drop "hallucination," adopt "confabulation" or "stochastic error" or "ungrounded generation," and the problem is solved. It is not.
The pedagogy problem in AI governance is not that we picked the wrong word. It is that the field built its entire user-facing explanatory apparatus on the assumption that learners need their cognition flattened before they can engage with the technology. That assumption is the failure. The vocabulary is downstream of it.
In my dissertation work on adult learning theory at Auburn — examining how Georgia workforce educators distributed themselves across teacher-centered and learner-centered orientations — one finding sat uncomfortably with the field's self-image. Instructors who described themselves as andragogical, as learner-centered, as committed to self-directed inquiry, scored consistently teacher-centered when measured against validated instruments. They believed they were respecting the learner. The behavior said otherwise.
AI governance education has the same gap. We say we are democratizing AI understanding. We then choose explanatory metaphors that pre-decide what the learner is allowed to know. "Hallucination" is the most visible example. It is not the only one. Every time we describe a model as "trying to" do something, "wanting to" be helpful, "knowing" or "believing" or "remembering," we are making the same move Dr. Floyd flagged: importing cognitive architecture the system does not have and inviting the user to reason about the system using the wrong mental model.
This is what I formalized as the Pedagogy Problem. Dr. Floyd was already pointing at it, in the register of a practitioner reflecting in real time, before the field had a name for what she was describing. That is how standards get built. Practitioners feel the failure. Researchers connect the dots. Someone bears the weight of writing it down.
III
Where the frameworks fit.
Three pieces of the Human Signal canon apply directly here, and I want to be precise about how.
In its current formulation, the Trust Gap names the distance between what a system is permitted to do and what it is admissible to do — between Structural Absence (no governance exists) and Structural Insufficiency (governance exists but does not engage the actual decision the system is making). Linguistic framing is a Structural Insufficiency mechanism. When we tell a board "the model hallucinated," we have provided a label. We have not provided governance. The label permits the conversation to continue without engaging the model architecture, the data curation choices, or the system objectives that produced the error. It is permitted speech. It is not admissible explanation. Dr. Floyd's piece is, in this sense, a Trust Gap intervention written in the vocabulary of an educator rather than the vocabulary of a governance theorist.
The Workflow Thesis holds that governance failure is rarely a failure of policy and almost always a failure of the workflow into which the policy was inserted. The same is true of AI literacy. Nobody learns "what hallucination means" in the abstract. They learn it inside a workflow — a customer service script, a clinical decision support tool, a legal research assistant, a hiring screen. The metaphor enters their cognition pre-loaded with the workflow's stakes. Teaching them, after the fact, that the metaphor was imprecise does not unwind the cognitive architecture the workflow installed. This is why "AI literacy training" delivered as a one-hour module after deployment almost never changes operator behavior. The pedagogy was set the moment the metaphor was chosen.
The Governance Architecture Stress-Point diagnostic was built to identify where institutional governance breaks under operational AI load. In every GASP engagement scoped so far, one stress point recurs: the gap between what the technical team understands about model behavior and what the operating team has been taught to expect. That gap is almost always a language gap. The technical team knows the system samples from a distribution. The operating team has been told the system "sometimes hallucinates." Those are not the same mental model. They produce different escalation behavior, different documentation behavior, and different incident response. Dr. Floyd's argument, applied at the institutional scale, is a GASP finding before the diagnostic is even run.
IV
What an honest pedagogy looks like.
If we accept that the pedagogy problem is real, the question becomes operational. What does AI governance education look like when it stops flattening? Three commitments, drawn directly from where Dr. Floyd's argument and the Pedagogy Problem position paper converge.
Commitment one. The defense of "hallucination" rests on the assumption that non-experts need a simplified myth to engage with a complex system. That assumption is wrong on the evidence. Adult learners, given accurate information and a workflow in which to apply it, consistently outperform learners given simplified myths. Andragogy assumes the learner has capacity. Pedagogy in AI governance has, with rare exceptions, assumed the learner does not.
Commitment two. Once you accept that linguistic framing is an instrument of governance, you accept that metaphor selection is a system design decision. It belongs in the same review process as model architecture and data curation. It is not a communications afterthought. It is a structural choice that determines what the user can perceive about the system's behavior.
Commitment three. A municipal CISO does not need the loss function. They do need to know that the system produces probability-weighted outputs, that those outputs degrade predictably under distributional shift, and that the failure mode is not a glitch but a feature of the architecture. That is a teachable sentence. It does not require the word "hallucination" to land.
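To see why that sentence teaches, consider a minimal sketch, assuming a toy bigram model rather than any real system. Trained on a tiny corpus, it emits plausible continuations for contexts it has seen and fluent noise for contexts it has not. The corpus and fallback rule are illustrative assumptions; the behavior under distributional shift is the point.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "model": counts of which token follows which.
# The corpus is an illustrative assumption, not real training data.
corpus = "the model samples tokens the model weights probabilities".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(context):
    dist = counts.get(context)
    if dist:
        # In-distribution context: sample from learned frequencies.
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights, k=1)[0]
    # Distributional shift: the context was never seen, so fall back to a
    # uniform draw over seen tokens. Fluent output, zero grounding.
    return random.choice(list(counts))

print(next_token("model"))  # seen context: a plausible continuation
print(next_token("audit"))  # unseen context: confident-sounding noise
```

The unseen context does not produce an error message. It produces a token, delivered with the same fluency as the grounded one. That is the architecture, not a glitch in it.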
Closing
Dr. Jeanetta Floyd wrote a short essay nine months before I formalized the position paper. She named the problem in the language of the classroom. I named it in the language of the field. That is how a body of literature gets built in a discipline this young — practitioner signal, researcher synthesis, standard. Both moves are necessary. Neither, alone, is enough.
The pedagogy problem in AI governance will not be solved by replacing one word with another. It will be solved when the field accepts that every explanatory choice is a governance choice, that every metaphor we hand a learner shapes the institutional response when the system fails, and that the learner — operator, executive, board member, citizen — has earned the right to the accurate version of the story.
— Dr. Tuboise Floyd
Read Dr. Jeanetta Floyd's original piece: "Words Matter in AI Conversations," LinkedIn, July 30, 2025.
"The Pedagogy Problem in AI Governance" is available at humansignal.io/position-paper or via SSRN (DOI: 10.2139/ssrn.6549178).
Human Signal Town Hall · May 14, 2026
The governance conversation your institution cannot miss.
Live. Recorded. Practitioner-led. No vendor filter. Operators examining institutional AI failures in real time — with no sponsored talking points.
Date: May 14, 2026
Host: Dr. Tuboise Floyd
Format: Live · Recorded
Price: $97 · Rises to $147 May 1
Seats are limited · May 14, 2026
About Human Signal
Dr. Tuboise Floyd | Founder, Human Signal
Human Signal is an independent AI governance research and media platform dedicated to institutional risk analysis. We reverse-engineer institutional AI failures and develop frameworks operators can use when it matters — not frameworks designed to satisfy an audit.
Govern the machine. Or be the resource it consumes.
— Dr. Tuboise Floyd · Founder, Human Signal
#AIGovernance #PedagogyProblem #TrustGap #HumanSignal #InstitutionalRisk #AIPolicy #Andragogy #LanguageIsGovernance