Tuboise Floyd, PhD

Founder and Principal Analyst, Human Signal | Independent AI Media

I am the founder of Human Signal, an independent AI governance research and media platform for leaders inside AI-disrupted institutions: federal agencies, universities, and enterprises racing to deploy autonomous AI systems without the governance infrastructure to keep those systems from breaking the institution.

I reverse-engineer institutional failures, build frameworks that operators can actually use, and document what happens when organizations treat AI as a procurement problem instead of a systems-design problem.

My work bridges the gap between deep technical systems design and operational reality, ensuring operators have the clear signal they need to navigate AI safety and governance.


What I Build

Through the Human Signal podcast, visual briefs, and frameworks, I translate 15+ years of institutional operations into clear language and pressure-tested tools. I provide a transparent stage where responsible-AI founders, risk leaders, and researchers can visibly underwrite independent research to reach decision-makers in a regulated market.

The LEAC Protocol

A physics-based model for evaluating AI infrastructure viability.

Noise Discipline

Cognitive defense for operators drowning in vendor hype.

The Workflow Thesis

The institutional AI risk is not the model; the risk is the governance structure around the model.

Background

My career has been split between fixing systems under pressure and studying why they break.

Federal Operations

Technical strategy and program management supporting federal IT modernization, where outages and bad data have real-world consequences.

Enterprise Resilience

Led disaster recovery and continuity-of-operations (COOP) design, large-scale systems migrations (7,000+ users), and cross-functional governance-failure recovery.

Systems Research

PhD-level work on how institutions adapt to or reject structural controls, so governance becomes something people actually follow instead of route around.

Currently

I am building Human Signal as the premier independent media and educational platform for AI governance. I provide documented institutional failures, original frameworks, and honest analysis for the people who have to make decisions inside systems they did not design.

Through corporate underwriting, I partner with responsible-AI startup founders and compliance officers. This public broadcasting model allows builders to fund independent research while securing visibility across my podcast and a managed community of 320,000+ tech professionals, without bending the analysis.

Building Season 2 of Human Signal and developing visual strategy playbooks for institutional operators. Open to corporate underwriting, advisory roles, and speaking engagements on AI governance, institutional resilience, and systems design.

Key Initiatives and Core Capabilities

Direct the production of the Human Signal podcast and The Failure Files video series, converting complex AI governance topics into accessible independent research.

Design and execute corporate underwriting and sponsorship packages for responsible-AI startup founders and enterprise risk leaders, securing visibility across major social platforms and a managed community of 320,000+ tech professionals.

Translate emerging AI regulations and federal guidance into operational strategies for leaders navigating AI-disrupted institutions.

Provide strategic consulting on AI infrastructure viability, leveraging proprietary frameworks like The LEAC Protocol and the Role Signal Analyzer.

Built a reusable AI governance playbook mapping NIST 800-53 and FedRAMP readiness controls to checkpoints in AI-augmented workflows, guiding institutional operators and sponsors on compliance positioning.

Designed and enforced a context-control protocol, Hyperprompt, for LLM-enabled professional workflows, reducing hallucination risk for knowledge workers.