The AI Governance Record
A Human Signal Publication
AI governance intelligence for institutional operators. No vendor capture. No fluff. Just the questions your organization isn't asking.
In July 2025, Dr. Jeanetta Floyd, Associate Professor at Georgetown University, published a short LinkedIn essay titled "Words Matter in AI Conversations." It did not announce a framework. It did not propose a taxonomy. It did something more useful than either. It named, in the disciplined language of a Georgetown professor who teaches through problem-based learning, a discomfort that the AI governance field has been circling for two years and refusing to land on.
The discomfort is this: when we call a model's statistical error a "hallucination," we are not being colorful. We are making a pedagogical choice. And that choice has consequences far beyond the seminar room.
The regulatory landscape produces mandates. It does not produce mechanisms. That gap is where institutions get audited, sued, and defunded — not because they were ignorant of the mandate, but because they had no mechanism to prove compliance with one.
Nine months later, in April 2026, I posted "The Pedagogy Problem in AI Governance" to SSRN. Dr. Floyd's piece was one of the signals I was tracking — one of several practitioner reflections that told me the field was already feeling the failure even if it had not yet named it. That is what research in a novel field looks like. This issue names her contribution, builds on it explicitly, and shows why the pedagogy problem in AI governance is not a niche academic concern. It is the load-bearing failure underneath most of what we currently call "AI literacy," "responsible AI training," and "user education."
I
What Dr. Floyd actually said.
Strip the LinkedIn formatting away and her argument runs in three moves.
Move one. When a large language model produces an erroneous output, that output is not a glitch and not a cognitive event. It is prediction under uncertainty — the expected behavior of a system sampling from probability distributions shaped by training data. The model did not perceive anything. It generated a token sequence with the highest available conditional probability given an under-specified prompt and an incomplete training distribution.
Move two. Calling that behavior "hallucination" imports a clinical and cognitive vocabulary that does not belong to the system. The metaphor is not neutral. It softens. It anthropomorphizes. And in doing so, it dismisses three things the field cannot afford to dismiss: our responsibility to teach users how to interrogate outputs, the structural gaps in training data, and our obligation to engage with the actual choices in model architecture that produced the error.
Move three. The defense of the term collapses under scrutiny. Simplification is not the same as accessibility. Accessibility is offering accurate, transparent explanations that respect a user's capacity to understand complexity when it is clearly communicated. A catchy metaphor is not respect. It is condescension wearing a friendly face.
That is the argument. It is short. It is correct. And it treats language as a pedagogical instrument, holding the speaker accountable for what the instrument produces in the listener.
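To make move one concrete, here is a minimal sketch of "prediction under uncertainty" at the level of a single token. It is illustrative only: the four-token vocabulary and the scores are made up, and real models condition on the full prompt over vocabularies of tens of thousands of tokens. The point it demonstrates is the one Dr. Floyd makes: the erroneous output is a sample from a distribution, not a perception.

```python
import math
import random

# Hypothetical next-token scores (logits) after some prompt. Real models
# produce scores over tens of thousands of tokens and condition on the
# entire prompt; four tokens keep the sketch readable.
logits = {"Paris": 4.1, "Lyon": 2.3, "Berlin": 1.9, "Atlantis": 0.7}

def softmax(scores):
    """Turn raw scores into a probability distribution that sums to 1."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def sample_next_token(scores, temperature=1.0):
    """Draw one token in proportion to its conditional probability."""
    scaled = {tok: s / temperature for tok, s in scores.items()}
    probs = softmax(scaled)
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Low-probability tokens still get drawn sometimes. That draw is the
# "erroneous output": no perception, no confusion, just sampling.
for _ in range(5):
    print(sample_next_token(logits))
```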
II
Why this is a pedagogy problem, not a vocabulary problem.
The temptation is to treat Dr. Floyd's piece as a debate about word choice. Drop "hallucination," adopt "confabulation" or "stochastic error," and the problem is solved. It is not.
The pedagogy problem in AI governance is not that we picked the wrong word. It is that the field built its entire user-facing explanatory apparatus on the assumption that learners need their cognition flattened before they can engage with the technology. That assumption is the failure. The vocabulary is downstream of it.
The result is a population of operators, executives, board members, and frontline employees who have been taught to trust or distrust AI based on metaphors that were never accurate. When the system fails — and it will fail, statistically, by design — they have no framework for diagnosing what failed, why, or what to do about it. They have only the metaphor. And the metaphor told them the machine got confused.
III
Where the frameworks fit.
The Trust Gap
When we tell a board "the model hallucinated," we have provided a label. We have not provided governance. The label permits the conversation to continue without engaging the model architecture or data curation choices that produced the error. It is permitted speech. It is not admissible explanation. Dr. Floyd's piece is a Trust Gap intervention written in the vocabulary of an educator.
The Workflow Thesis
Nobody learns "what hallucination means" in the abstract. They learn it inside a workflow. The metaphor enters their cognition pre-loaded with the workflow's stakes. This is why "AI literacy training" delivered as a one-hour module after deployment almost never changes operator behavior. The pedagogy was set the moment the metaphor was chosen.
GASP™
In every GASP engagement scoped so far, one stress point recurs: the gap between what the technical team understands about model behavior and what the operating team has been taught to expect. The technical team knows the system samples from a distribution. The operating team has been told the system "sometimes hallucinates." Those are not the same mental model. Dr. Floyd's argument, applied at the institutional scale, is a GASP finding before the diagnostic is even run.
Closing
Dr. Jeanetta Floyd wrote a short essay nine months before I formalized the position paper. She named the problem in the language of the classroom. I named it in the language of the field. That is how a body of literature gets built in a discipline this young — practitioner signal, researcher synthesis, standard. Both moves are necessary. Neither, alone, is enough.
Most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it. Language is part of that structure. Dr. Floyd saw it first. I am writing it down. The field should catch up.
Read Dr. Jeanetta Floyd's original piece: "Words Matter in AI Conversations," LinkedIn, July 30, 2025.
"The Pedagogy Problem in AI Governance" — humansignal.io/position-paper · DOI: 10.2139/ssrn.6549178
Human Signal Town Hall · May 14, 2026
The governance conversation your institution cannot miss.
Live. Recorded. Practitioner-led. No vendor filter. Operators examining institutional AI failures in real time — with no sponsored talking points.
Price
$97 · Rises to $147 May 1
Confirmed speakers: Kathy Swacina · Cotishea Anderson · Taiye Lambo · Paul Wilson Jr. · Michelle Houston
Reserve Your Seat →
Seats are limited · May 14, 2026
About Human Signal
Dr. Tuboise Floyd | Founder, Human Signal
Human Signal is an independent AI governance research and media platform dedicated to institutional risk analysis. We reverse-engineer institutional AI failures and develop frameworks operators can use when it matters — not frameworks designed to satisfy an audit.
Govern the machine. Or be the resource it consumes.
— Dr. Tuboise Floyd · Founder, Human Signal
#AIGovernance #PedagogyProblem #TrustGap #HumanSignal #InstitutionalRisk #AIPolicy #Andragogy #LanguageIsGovernance
I
Permitted is not the same as admissible.
The Failure Files™ series has been accumulating case-by-case evidence for the same structural failure. Pull any three from the library and the pattern is identical.
Air Canada Chatbot
The tribunal did not rule that Air Canada lacked an AI policy. It ruled that Air Canada could not produce operational evidence that its chatbot was bounded, monitored, or escalated at the point of execution. The mandate existed. The mechanism did not. Liability followed.
UnitedHealthcare nH Predict
Federal suits did not allege UnitedHealthcare lacked governance documentation. They alleged the company could not demonstrate that the algorithm's reliability was measured, its fairness was tested, or physician override pathways were operational. The mandate existed. The mechanism did not.
Zillow Zestimate
The company had published model governance documentation. The failure was that the documented context of the Zestimate was consumer estimation. The deployed context was institutional capital allocation. $881 million in write-downs later, the mandate still existed. The mechanism still did not.
Three cases. Three sectors. One structural failure. Institutions operated under regulatory obligations they understood — and could not produce the mechanism that would have converted those obligations into operational evidence when the system was tested.
The Trust Gap framework I published earlier this year calls the distinction permitted versus admissible. The institution permits the AI system to operate. The tribunal, the regulator, or the plaintiff later asks whether that operation was admissible as governed behavior. Permitted is a policy. Admissible is a mechanism.
II
Why mandates cannot self-mechanize.
A reasonable objection: if regulators require governance and institutions face real consequences for absent governance, will the market not produce its own mechanisms over time? Give it a few years.
The objection fails on three structural grounds.
First — mandates are written for scope, not evidence. NIST AI RMF publishes functions and categories. The EU AI Act publishes obligations and penalties. Neither publishes scoring rubrics, and neither ever will — regulators operate at the level of principle, and writing the operational mechanism would foreclose the implementation flexibility the regulated entities require.
Second — institutional incentives reward activity, not evidence. The GASP™ framework calls this the central pathology of AI governance: institutions produce activity because activity is legible to internal stakeholders. It generates status reports, slide decks, compliance check-ins. Structure is invisible internally because structure is only visible when tested.
Third — the supply side is occupied by vendors. The vacuum created by mandate-without-mechanism is commercially valuable. Every major cloud provider publishes a responsible AI framework. Every enterprise AI vendor publishes a governance toolkit. None of them are comparable across institutions because each is designed to make the vendor's product look compliant.
A mandate without a mechanism creates a vacuum. A vacuum in AI governance is filled by vendors who profit from its continuation.
III
The mechanism the field has been missing.
The Trusted AI Model Score — TAIMScore™ — was developed by Taiye Lambo, Founder and Chief Artificial Intelligence Officer of HISPI, through Project Cerebellum, HISPI's AI Governance Think Tank. Its founding premise is that AI should cause no harm. Its operational premise is that accountability must be measurable, or it is not accountability.
The TAIMScore™ Architecture
| Layer | What It Measures |
| --- | --- |
| Four Domains | GOVERN · MAP · MEASURE · MANAGE |
| Twenty Controls | The minimum viable set to prove AI accountability posture |
| Three Dimensions | People · Process · Data & Technology |
| Seven Properties | Transparency · Accountability · Impartiality · Inclusion · Security & Privacy · Reliability & Safety · Robustness |
| Six Crosswalks | HIPAA · PCI DSS · SOC 2 · EU AI Act · EU GDPR · White House AI Executive Order |
The four TAIM domains share names and functional structure with the four NIST AI RMF functions. This alignment is intentional. An institution operationalizing NIST AI RMF is already partway through a TAIMScore™ assessment. The difference is that NIST publishes functions. TAIMScore™ publishes a score.
NIST AI RMF is the mandate. TAIMScore™ is the mechanism.
IV
Mechanism as governance, not adjacent to governance.
The deeper argument is not that TAIMScore™ is a useful instrument. It is that the mechanism is governance — not a byproduct, not a report about governance, not a compliance artifact adjacent to it. A scored assessment is admissible in ways a policy document is not. A score carries a date, an assessor, a methodology, and a result. It can be audited against a reference framework, compared across institutions, and introduced as evidence in a regulatory proceeding.
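To make that concrete, here is a minimal sketch of what a scored assessment record looks like as a data structure. It is illustrative only: the institution, assessor, control values, and equal-weight averaging below are assumptions for the example, not TAIMScore's published controls, weights, or rubric. The structural point is what matters: the artifact carries a date, an assessor, a methodology, and a result that can be recomputed, audited, and compared.

```python
from dataclasses import dataclass, field
from datetime import date
from statistics import mean

@dataclass
class AssessmentRecord:
    """A dated, attributable, comparable score: the shape of an admissible artifact."""
    institution: str
    assessor: str
    methodology: str          # reference framework and rubric version used for scoring
    assessed_on: date
    control_scores: dict = field(default_factory=dict)  # domain -> control results, 0.0 to 1.0

    def domain_scores(self) -> dict:
        """Average the control results within each domain."""
        return {domain: mean(results) for domain, results in self.control_scores.items()}

    def overall_score(self) -> float:
        """One comparable number, traceable back to every underlying control result."""
        return mean(self.domain_scores().values())

# Illustrative values only: five hypothetical controls per domain, twenty in all.
record = AssessmentRecord(
    institution="Example Health System",
    assessor="Independent third-party assessor",
    methodology="NIST AI RMF-aligned scoring, illustrative rubric",
    assessed_on=date(2026, 4, 30),
    control_scores={
        "GOVERN":  [0.8, 0.6, 0.9, 0.7, 0.5],
        "MAP":     [0.7, 0.4, 0.6, 0.8, 0.6],
        "MEASURE": [0.5, 0.3, 0.6, 0.4, 0.7],
        "MANAGE":  [0.6, 0.7, 0.5, 0.8, 0.6],
    },
)
print(record.domain_scores())
print(round(record.overall_score(), 2))
```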
TAIMScore™ is not a competing framework to NIST AI RMF or the EU AI Act. It is the mechanism those frameworks require but do not themselves produce. The score is not a substitute for the mandate. The score is the mandate operationalized.
Pedagogy is how adults learn governance.
Measurement is how institutions prove governance.
The position paper went to SSRN this week as a companion to The Pedagogy Problem in AI Governance. The two are designed to be read together.
Previous Issues
Issue No. 015 · April 2026 · Guest Feature · AI Governance · Systems Design
The Veteran's Diagnosis
High-performing people. Low-performing ecosystems. Dr. Rhonda Farrell — Marine Corps veteran, DoD strategist — on why AI doesn't break your organization. It reveals it.
Read Issue 015 →
Issue No. 014 · April 2026 · Analysis · AI Governance · Measurement
The Mechanism After the Mandate
Every regulator wrote the mandate. None of them wrote the mechanism. The mandate-mechanism gap and the scoring instrument the field has been missing.
Read Issue 014 →
Issue No. 013 · April 2026 · Analysis · AI Governance · Pedagogy
The Gap After Page 400
Karen Hao named the empire. Someone still had to build the architecture. Dr. Tuboise Floyd responds to Empire of AI with the governance architecture that begins where the book ends.
Read Issue 013 →
Issue No. 012 · April 2026 · Governance · Distributed AI
When AI Is Everywhere, Who Is Accountable for Anything?
Distributed AI doesn't just spread compute. It spreads risk, diffuses accountability, and creates governance gaps that no single framework was built to handle.
Read Issue 012 →
Issue No. 011 · Analysis & Position
The Trust Gap: Your AI is Deployed. Your Governance is Not.
Most institutions are not failing because their AI model is broken. They are failing because no one built the structure around it — and the failure has already begun.
Read Issue 011 →
Issue No. 010 · Strategy
The Architect Economy: Why Most Companies Are Solving the Wrong Problem
Your teams aren't afraid of AI. They're exhausted by inefficiency. The real crisis is not AI versus jobs — it's architecture versus drift.
Read Issue 010 →
Issue No. 009 · Leadership · Executive Intelligence
The ROI Wildcard: Why Senior Leaders Bet on Brutal Candor
The cost of hiring the truth is far less than the price of ignoring it. Why senior leaders bet on brutal candor — and what the ROI wildcard actually delivers at the decision-making level.
Read Issue 009 →
Issue No. 008 · Strategy · Career Architecture
The Architect's Mindset: How to Re-Engineer Professional Risk into Strategic Opportunity
Don't manage risk. Re-architect it. How the architect's mindset converts credential gaps, role pivots, and non-traditional experience into strategic leverage.
Read Issue 008 →
Issue No. 007 · Leadership
Operationalizing Brutal Candor: A Field Guide for Builders
You don't build outlier ROI with comfort. A field guide for builders on installing brutal candor as a structural advantage — not a communication training.
Read Issue 007 →
Issue No. 006 · Strategy
The Override Protocol: A Counter-Celebrity Playbook for Architecting Signal
We aren't building a following. We're building an architecture. A counter-celebrity playbook for rejecting algorithmic noise and architecting an uncopyable signal.
Read Issue 006 →
Issue No. 005 · National Security
Why the Policy-First Approach to AI Governance Is a National Security Risk
The machine is not waiting for your policy framework to catch up. Why mission-critical leaders must audit for resilience — not just compliance.
Read Issue 005 →
Issue No. 004 · March 2026 · Applied Signal
Your Network Is a Governance Decision
Operating inside a 320,000+ member Cybersecurity and AI community means protecting its integrity. The moment a professional relationship becomes purely extractive, it stops being a network and starts being a liability.
Read on LinkedIn →
Issue No. 003 · March 2026 · Essay
Is History Repeating Itself with AI?
Lessons on resistance, status anxiety, and ethical adoption. The script rarely changes — society reacts, resists, and then reluctantly adapts. But it's not really the technology that people are judging.
Read Issue 003 →
Issue No. 002 · March 2026 · Guest Feature
Making Digital Accessibility Work in the AI Era
97% of the web still presents accessibility barriers to disabled people. That is not an edge case. That is your user base, your legal risk, and your culture baked into every screen you ship.
Read Issue 002 →
Issue No. 001 · March 2026
Why AI Governance Keeps Failing
Organizations are not failing at AI governance because it is hard. They are failing because they were never serious about it in the first place.
Read Issue 001 →