The AI Governance Record

A Human Signal Publication

AI governance intelligence for institutional operators. No vendor capture. No fluff. Just the questions your organization isn't asking.



Issue No. 013 · Analysis · AI Governance · Pedagogy

The Gap After Page 400

Karen Hao named the empire. Someone still had to build the architecture.

By Dr. Tuboise Floyd — Founder, Human Signal

Human Signal · April 2026


Karen Hao's Empire of AI is the most important book written about the AI industry in a generation. That is not hyperbole. Drawing on roughly 260 interviews, extensive internal OpenAI sourcing, and nearly six years of investigative work, Hao documents what most in the industry have been unwilling to say plainly: that the major AI companies are not innovators operating in good faith. They are empires — extracting labor, claiming intellectual property, and consolidating ungoverned power at a scale that has no modern precedent.

The book is a New York Times bestseller. It deserves to be.

And at nearly 500 pages, it ends without a governance architecture.

That is not a criticism of Hao. Investigative journalism names the problem. That is its function, and she executed it at the highest level. But diagnosis is not treatment. And the field of AI governance has been confusing the two for years.

I

The gap Hao leaves is not rhetorical. It is structural.

The institutions that will actually live with these systems — hospitals, financial firms, insurers, universities — are not waiting for the empires to be broken up. They are deploying AI now. In workflows that affect real people, with real consequences, in environments where their existing governance structures were not built to intervene at the point of algorithmic execution.

This is the failure mode that does not appear in Empire of AI — not because Hao missed it, but because it is a different problem requiring a different discipline.

Case 01 — Air Canada Chatbot

In 2024, a British Columbia tribunal held Air Canada liable after its customer-service chatbot invented a bereavement-fare refund policy the airline never offered. The system did not fail because OpenAI is an ungoverned empire. It failed because Air Canada's own policy structure did not reach its own deployed system. The output was permitted. It was not governed at the point of execution. Those are not the same thing.

Case 02 — UnitedHealthcare nH Predict

The algorithm predicted post-acute care durations and triggered coverage denials accordingly; a 2023 class action reported that roughly 90% of appealed denials were reversed. The governance standard stipulating clinical oversight existed. The algorithm processed denials at a speed that standard could not reach. Scale outpaced structure.

Case 03 — Zillow Project Ketchup

Zillow's home-buying algorithm overpaid for houses until the company wrote down its inventory and shut the unit down in late 2021. Leadership did not produce a bad model. They produced an institutional culture that refused to override it. Managers with contrary evidence were directed to stop questioning the algorithm's valuations. Human judgment was present. It was systematically suppressed. The failure was not an absence of governance. It was a structure that enforced its own insufficiency.

In each case, the empire is not the unit of analysis. The institution is.


II

The pedagogy problem.

The AI governance field has responded to these failures with frameworks, compliance checklists, ethics boards, and policy documentation. All of it is necessary. None of it is sufficient.

The reason is not political. It is pedagogical.

We are teaching adult practitioners — executives, general counsel, risk officers, operations leads — how to govern AI using the same methods we use to teach children: passive documentation, abstract rules, and compliance deadlines. The field is applying a pedagogical model to an andragogical problem.

Adults do not internalize governance through documentation. They internalize it through experience — specifically, through engaging with real failures as proxy experiences, diagnosing the structural gap, and mapping the lesson onto their own institutional context before the operational pressure arrives.

That is not a theory. It is the documented finding of adult learning research going back to Malcolm Knowles, confirmed in a completely different domain by my 2010 Auburn dissertation: practitioners held the right philosophical beliefs. Their institutional structures overrode those beliefs at the point of delivery.

Enterprise AI is failing for the exact same reason.


III

The handoff.

Comparative Analysis

            | Empire of AI                        | The Pedagogy Problem
Diagnosis   | AI companies are ungoverned empires | Institutions fail from broken governance structures
Method      | Investigative journalism            | Andragogical theory
Audience    | Public & policymakers               | Practitioners & executives
Solution    | Break up the empires                | Teach governance as structural discipline
Missing     | The architecture                    | Practitioner adoption at scale

The Pedagogy Problem in AI Governance — published this month as an SSRN preprint — does not compete with Hao's diagnosis. It begins where her book ends.

The argument is straightforward: institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it. And they will not fix that structure by reading another framework. They will fix it by learning the way adults actually learn — through structured engagement with real failure, applied to their own architectural gaps, before the crisis arrives.

She named the problem.
I built the framework to solve it.

The empire is real. Hao documented it.

The institution is the unit of risk. That is the next problem.



Related Research

The Pedagogy Problem in AI Governance

The position paper that begins where Empire of AI leaves off. Published as an open-access SSRN preprint. The founding argument for AI governance as an andragogical discipline.

Read the Position Paper →

Human Signal Town Hall · May 14, 2026

The governance conversation your institution cannot miss.

Live. Recorded. Practitioner-led. No vendor filter. Operators examining institutional AI failures in real time — with no sponsored talking points.

Date: May 14, 2026
Host: Dr. Tuboise Floyd
Format: Live · Recorded
Early Access: $50 · Goes to $75 on May 1

Confirmed speakers: Kathy Swacina · Cotishea Anderson · Taiye Lambo · Paul Wilson Jr. · Michelle Houston

Reserve Your Seat →

Seats are limited · May 14, 2026


About Human Signal

Dr. Tuboise Floyd | Founder, Human Signal

Human Signal is an independent AI governance research and media platform dedicated to institutional risk analysis. We reverse-engineer institutional AI failures and develop frameworks operators can use when it matters — not frameworks designed to satisfy an audit.

Govern the machine. Or be the resource it consumes.

— Dr. Tuboise Floyd · Founder, Human Signal

#AIGovernance #PedagogyProblem #TrustGap #EmpireOfAI #HumanSignal #InstitutionalRisk #AIPolicy #Andragogy


Stay in the Signal

Get the Next Issue

AI governance intelligence for institutional operators — delivered quarterly. Independent. No vendor capture. No fluff.

Quarterly cadence · No spam · Unsubscribe anytime

Analysis

Original governance frameworks and failure autopsies you won't find from vendor-funded sources.

Signal

Three practitioner questions per issue — designed to surface what your institution isn't asking.

No Noise

Quarterly. Not daily. Written for operators with limited bandwidth who need high-signal briefings.


Previous Issues

Issue No. 012 · April 2026 · Governance · Distributed AI

When AI Is Everywhere, Who Is Accountable for Anything?

Distributed AI doesn't just spread compute. It spreads risk, diffuses accountability, and creates governance gaps that no single framework was built to handle.

Read Issue 012 →

Issue No. 011 · Analysis & Position

The Trust Gap: Your AI is Deployed. Your Governance is Not.

Most institutions are not failing because their AI model is broken. They are failing because no one built the structure around it — and the failure has already begun.

Read Issue 011 →

Issue No. 010 · Strategy

The Architect Economy: Why Most Companies Are Solving the Wrong Problem

Your teams aren't afraid of AI. They're exhausted by inefficiency. The real crisis is not AI versus jobs — it's architecture versus drift.

Read Issue 010 →

Issue No. 009 · Leadership · Executive Intelligence

The ROI Wildcard: Why Senior Leaders Bet on Brutal Candor

The cost of hiring the truth is far less than the price of ignoring it. Why senior leaders bet on brutal candor — and what the ROI wildcard actually delivers at the decision-making level.

Read Issue 009 →

Issue No. 008 · Strategy · Career Architecture

The Architect's Mindset: How to Re-Engineer Professional Risk into Strategic Opportunity

Don't manage risk. Re-architect it. How the architect's mindset converts credential gaps, role pivots, and non-traditional experience into strategic leverage.

Read Issue 008 →

Issue No. 007 · Leadership

Operationalizing Brutal Candor: A Field Guide for Builders

You don't build outlier ROI with comfort. A field guide for builders on installing brutal candor as a structural advantage — not communication training.

Read Issue 007 →

Issue No. 006 · Strategy

The Override Protocol: A Counter-Celebrity Playbook for Architecting Signal

We aren't building a following. We're building an architecture. A counter-celebrity playbook for rejecting algorithmic noise and architecting an uncopyable signal.

Read Issue 006 →

Issue No. 005 · National Security

Why the Policy-First Approach to AI Governance Is a National Security Risk

The machine is not waiting for your policy framework to catch up. Why mission-critical leaders must audit for resilience — not just compliance.

Read Issue 005 →

Issue No. 004 · March 2026 · Applied Signal

Your Network Is a Governance Decision

Operating inside a 320,000+ member Cybersecurity and AI community means protecting its integrity. The moment a professional relationship becomes purely extractive, it stops being a network and starts being a liability.

Read on LinkedIn →

Issue No. 003 · March 2026 · Essay

Is History Repeating Itself with AI?

Lessons on resistance, status anxiety, and ethical adoption. The script rarely changes — society reacts, resists, and then reluctantly adapts. But it's not really the technology that people are judging.

Read Issue 003 →

Issue No. 002 · March 2026 · Guest Feature

Making Digital Accessibility Work in the AI Era

97% of the web still presents accessibility barriers to disabled people. That is not an edge case. That is your user base, your legal risk, and your culture baked into every screen you ship.

Read Issue 002 →

Issue No. 001 · March 2026

Why AI Governance Keeps Failing

Organizations are not failing at AI governance because it is hard. They are failing because they were never serious about it in the first place.

Read Issue 001 →