Karen Hao's Empire of AI is the most important book written about the AI industry in a generation. That is not hyperbole. Drawing on interviews with roughly 260 people, extensive internal OpenAI sourcing, and nearly six years of investigative work, Hao documents what most in the industry have been unwilling to say plainly: that the major AI companies are not innovators operating in good faith. They are empires, extracting labor, claiming intellectual property, and consolidating ungoverned power at a scale that has no modern precedent.
The book is a New York Times bestseller. It deserves to be.
And at nearly 500 pages, it ends without proposing a governance architecture.
I
The gap is not rhetorical. It is structural.
The institutions that will actually live with these systems — hospitals, financial firms, insurers, universities — are not waiting for the empires to be broken up. They are deploying AI now. In workflows that affect real people, with real consequences, in environments where their existing governance structures were not built to intervene at the point of algorithmic execution.
This is the failure mode that does not appear in Empire of AI — not because Hao missed it, but because it is a different problem requiring a different discipline.
Consider Air Canada, whose customer-service chatbot invented a bereavement-fare refund policy that a tribunal later ordered the airline to honor. The system did not fail because OpenAI is an ungoverned empire. It failed because Air Canada's own policy structure did not reach its own deployed system. The output was permitted. It was not governed at the point of execution. Those are not the same thing.
Or consider UnitedHealth's nH Predict, the algorithm used to cut off post-acute care for Medicare Advantage patients. It operated with a documented 90% reversal rate on appeals. The governance standard stipulating clinical oversight existed. The algorithm processed denials at a speed the governance standard could not reach. Scale outpaced structure.
In a third case, leadership did not produce a bad model. It produced an institutional culture that refused to override the model. Managers with contrary evidence were directed to stop questioning the algorithm's valuations. Human judgment was present. It was systematically suppressed. The failure was not ungoverned. It was enforced structural insufficiency.
In each case, the empire is not the unit of analysis. The institution is.
II
The Pedagogy Problem
The AI governance field has responded to these failures with frameworks, compliance checklists, ethics boards, and policy documentation. All of it is necessary. None of it is sufficient.
The reason is not political. It is pedagogical.
We are teaching adult practitioners — executives, general counsel, risk officers, operations leads — how to govern AI using the same methods we use to teach children: passive documentation, abstract rules, and compliance deadlines. The field is applying a pedagogical model to an andragogical problem.
That is not a theory. It is the documented finding of adult-learning research going back to Malcolm Knowles, confirmed in a completely different domain by my 2010 Auburn dissertation: practitioners held the right philosophical beliefs. Their institutional structures overrode those beliefs at the point of delivery.
Enterprise AI is failing for the exact same reason.
III
The Handoff
Comparative Analysis
| | Empire of AI | The Pedagogy Problem |
|---|---|---|
| Diagnosis | AI companies are ungoverned empires | Institutions fail from broken governance structures |
| Method | Investigative journalism | Andragogical theory |
| Audience | Public & policymakers | Practitioners & executives |
| Solution | Break up the empires | Teach governance as structural discipline |
| Missing | The architecture | Practitioner adoption at scale |
The Pedagogy Problem in AI Governance — published this month as an SSRN preprint — does not compete with Hao's diagnosis. It begins where her book ends.
The argument is straightforward: institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it. And they will not fix that structure by reading another framework. They will fix it by learning the way adults actually learn — through structured engagement with real failure, applied to their own architectural gaps, before the crisis arrives.
I built the framework to solve that problem.
The empire is real. Hao named it.
The institution is the unit of risk. That is the next problem.
Related Research
The Pedagogy Problem in AI Governance
The position paper that picks up where Empire of AI leaves off. Published as an SSRN open-access preprint. The founding argument for AI governance as an andragogical discipline.
Read the Position Paper →

About Human Signal
Human Signal is an independent AI governance research and media platform dedicated to institutional risk analysis. We reverse-engineer institutional AI failures and develop frameworks operators can use when it matters — not frameworks designed to satisfy an audit.
Govern the machine. Or be the resource it consumes.
— Dr. Tuboise Floyd · Founder, Human Signal
#AIGovernance #PedagogyProblem #TrustGap #EmpireOfAI #HumanSignal #InstitutionalRisk #AIPolicy #Andragogy