In 2010, this author completed a doctoral dissertation at Auburn University examining the adult educational philosophies and teaching styles of workforce educators and entrepreneurship instructors within the State of Georgia. The study employed two validated instrumentation frameworks: the Principles of Adult Learning Scale (PALS), developed by Gary Conti, which measures the frequency with which an instructor practices one teaching style over another along a learner-centered to teacher-centered continuum; and the Philosophy of Adult Education Inventory (PAEI), developed by Lorraine Zinn, which identifies the underlying philosophical orientation — progressive, behaviorist, humanist, liberal, or radical — governing an educator's approach to the teaching-learning transaction.
Sixty-two surveys were returned from each population. Both instruments registered a Cronbach's alpha reliability coefficient of .99. Mean scores on the PAEI trended higher on the progressive and behaviorist orientations, with participants reporting no strong disagreement with any of the five educational philosophies — a pattern consistent with existing literature suggesting that instructors may not be aware of inconsistencies within their own beliefs absent deliberate philosophical self-examination.
The central finding was unambiguous: total mean scores on the PALS fell below the mean established by Conti (2004), indicating that participants tended toward teacher-centered rather than learner-centered practice. Entrepreneurship instructors scored higher than workforce educators across all teaching style factors, but neither population was practicing at the learner-centered register their stated philosophies implied. They professed learner-centered beliefs, but their instructional practice did not reflect them.
The gap between philosophical orientation and classroom execution was not incidental. It was structural. The institution, the delivery context, and the default assumptions embedded in professional practice were overriding the very philosophies these educators held.
That finding revealed a pattern: a two-level structural gap, one level of absence and one of insufficiency.
The pattern generalized.
What the dissertation documented was structural insufficiency in pedagogy: a system in which the governance framework existed but could not intervene at the moment of execution. The educator had the right beliefs. The structural conditions overrode those beliefs in practice.
This is precisely the pattern that AI governance failures reveal. UnitedHealthcare maintained insurance contracts that explicitly stipulated coverage decisions would be made by clinical staff. The nH Predict algorithm — deployed through its NaviHealth subsidiary and documented to carry a 90% error rate on appeals — operated systematically outside that contractual commitment. The governance framework named the standard. The algorithm never encountered it.
Air Canada's terms of service prohibited retroactive bereavement fare applications. Its customer-facing chatbot promised the opposite — advising a grieving passenger that he could purchase a full-fare ticket and apply for the bereavement discount within ninety days of travel. The Tribunal called Air Canada's defense “remarkable” and rejected it. The policy existed. The system it governed did not know the policy existed.
Zillow's collapse is the most instructive case because the governance failure was not passive — it was enforced. Under Project Ketchup, Zillow's leadership explicitly prevented its pricing experts from modifying the algorithm's home valuations and directed them to stop questioning its outputs. Human override was not merely unavailable. It was prohibited. The algorithm was not ungoverned. It was protected from governance.
In 2010, a doctoral dissertation examining Georgia workforce educators found the same structural condition operating in a different domain: practitioners who held the right beliefs, inside institutions whose structural conditions prevented those beliefs from reaching the point of execution. The field was not named yet. The pattern was already present.
The Trust Gap — naming what the dissertation found.
The Trust Gap framework formalizes the dissertation's central finding into two diagnostic levels — not a typology of bad actors, but a structural map of how governance fails in organizations that believe they are governing.
Level One
Structural absence.
No governance framework exists. The institution has no documented protocol, no escalation path, no accountability structure for AI decision-making. The structure did not fail; it was never built. The Amazon warehouse fulfillment algorithm that systematically scheduled workers at injury-producing pace operated inside an institution with no AI governance architecture capable of asking whether the optimization target was the right variable. There was no framework to fail because there was no framework to begin with.
Level Two — Dominant failure mode
Structural insufficiency.
Governance exists. Policy has been written. Ethics boards have convened. And still, the algorithm runs without encountering any of it. UnitedHealthcare had coverage standards — the nH Predict algorithm processed denials at a scale and speed those standards could not reach. Air Canada had a bereavement policy — the chatbot never consulted it. Zillow had pricing experts — Project Ketchup made their judgment structurally irrelevant.
Permitted is not the same as admissible. That distinction, borrowed from the language of evidence and proof, is the precise diagnostic the field has been missing. A governance framework that permits a decision without requiring that decision to pass through an accountability structure has not governed anything. It has documented an intention. Documentation is not governance. It is the precondition for governance that was never completed.
The field is not missing frameworks. It is missing the structural conditions that make frameworks executable. And those structural conditions are built through learning — specifically, through the kind of learning that adults actually do: experience-centered, problem-grounded, and failure-forward.