Founder · Principal Analyst

Dr. Tuboise Floyd

Human Signal™ · Independent AI Media

I am the founder of Human Signal — an independent AI governance research and media platform for leaders inside AI-disrupted institutions: federal agencies, universities, and enterprises racing to deploy autonomous AI systems without the governance infrastructure to keep those systems from breaking the institution.

I reverse-engineer institutional failures, build frameworks that operators can actually use, and document what happens when organizations treat AI as a procurement problem instead of a systems design problem.

My work bridges the gap between deep technical systems design and operational reality — ensuring operators have the clear signal they need to navigate AI safety and governance.

Dr. Tuboise Floyd, Founder of Human Signal™

Independent Research

No vendor owns this voice.

Human Signal operates without vendor funding, advertising, or institutional capture. The research remains independent because readers, listeners, and institutional partners choose to sustain it. There are three ways to support this work.

Individual

Direct Contribution

Make a one-time or recurring contribution directly to the research. Every amount sustains the independence of this analysis.

Contribute →

Institutional

Corporate Underwriting

Organizations that share Human Signal's commitment to responsible AI can partner as named underwriters — with full editorial independence preserved.

See Tiers →

Grants

Research Grants

Foundations and public interest organizations seeking to support independent AI governance research are encouraged to reach out directly.

Get in Touch →

Frameworks

What I Build

Through the Human Signal podcast, visual briefs, and frameworks, I translate 15+ years of institutional operations experience into clear language and pressure-tested tools.

Framework

The LEAC Protocol

A physics-based model for evaluating AI infrastructure viability — Lithography, Energy Arbitrage, and Cooling.

Practice

Noise Discipline

Cognitive defense for operators drowning in vendor hype. A structured practice for cutting through artificial noise.

Thesis

The Workflow Thesis

The institutional AI risk is not the model — the risk is the governance structure around the model.

Experience

Background

My career has been split between fixing systems under pressure and studying why they break.

Federal Operations

Technical strategy and program management supporting federal IT modernization — where outages and bad data have real-world consequences.

Enterprise Resilience

Led disaster recovery, COOP design, and large-scale systems migrations serving 7,000+ users, including recovery from cross-functional governance failures.

Systems Research

PhD-level work on how institutions adapt to — or reject — structural controls, so governance becomes something people actually follow instead of route around.

Now

Current Focus

I am building Human Signal as the premier independent media and educational platform for AI governance — providing documented institutional failures, original frameworks, and honest analysis for the people who have to make decisions inside systems they did not design.

Through corporate underwriting, I partner with responsible AI startup founders and compliance officers. This public broadcasting model allows builders to fund independent research while securing visibility with the 320,000+ tech professionals I engage across my network — without bending the analysis.

I am building Season 2 of Human Signal and developing visual strategy playbooks for institutional operators. I am open to corporate underwriting, advisory roles, and speaking engagements on AI governance, institutional resilience, and systems design.

Capabilities

Key Initiatives & Core Capabilities

Direct the production of the Human Signal podcast and The Failure Files video series — converting complex AI governance topics into accessible, independent research.

Design and execute corporate underwriting and sponsorship packages for responsible AI startup founders and enterprise risk leaders — securing visibility with the 320,000+ tech professionals I engage across my network.

Translate emerging AI regulations and federal guidance into operational strategies for leaders navigating AI-disrupted institutions.

Provide strategic consulting on AI infrastructure viability leveraging proprietary frameworks — The LEAC Protocol and the Role Signal Analyzer.

Built a reusable AI governance playbook mapping NIST 800-53 and FedRAMP readiness controls to checkpoints in AI-augmented workflows — guiding institutional operators and sponsors on compliance positioning.

Designed and enforced Hyperprompt — a context control protocol for LLM-enabled professional workflows that reduces hallucination risk for knowledge workers.