Trillion-dollar companies are racing to release AI models that hallucinate. Boardrooms are treating AI like a productivity toy. And the window to install governance before something catastrophic happens is getting narrower by the month.
In this episode of The AI Governance Briefing, Dr. Tuboise Floyd is joined by Col. Kathy Swacina (USA, Ret.) — Chief Information Officer at SherpaWerx and Chairperson for Advocacy at HISPI Project Cerebellum — and Taiye Lambo, Founder and Chief AI Officer of HISPI — Holistic Information Security Practitioner Institute. Together they examine the existential stakes, the solution architecture, and the binary choice every institution now faces.
Key Takeaways
- We are not yet doomed — but the window to install guardrails is narrowing as the AI race accelerates
- Boardrooms are treating AI like a productivity toy. They have far less control over these models than they believe
- AI applies physics and math — not morality and ethics. That is where the human comes in
- The human must be in the loop from the Strategy phase — not the Operation phase. By then it is too late
- The TAIMScore™ framework harmonizes seven international responsible AI principles, ISO 42001, the EU AI Act, NIST, and more into a single assessment tool — free and open source
- The TAIMScore™ is not a credit score. It is closer to a Dun & Bradstreet score — it assesses your use case, not the AI model itself
- Intelligence is abundant, but trust is scarce. Once an organization loses trust through an ungoverned AI failure, it is nearly impossible to recover
- PACE planning — Primary, Alternate, Contingency, Emergency — must be applied to AI systems the same way it is applied to military communications
- The CIO is the new CEO. AI has made technical command and control the most consequential executive function in the organization
- The choice is binary: build fast and pray, or build with a nervous system. One path leads to liability. The other leads to authority
Are We Doomed? The Existential Risk of Ungoverned AI
Dr. Floyd opened with the direct question: with trillion-dollar companies racing to release models that hallucinate and manipulate users, are we doomed if we continue without a holistic control layer?
Taiye Lambo answered as an optimist: not yet — but the window is narrowing.
"It's getting narrower and narrower as the race intensifies. You're going 200 miles an hour and you're still trying to fix the brakes. The closer you are to that crash point, the higher your chances of being doomed." — Taiye Lambo
The opportunity is not to slow down the AI race. It is to install the guardrails while there is still time. That distinction matters: the goal is not deceleration, it is governance.
The Safety Belt Analogy: Three Points vs. Five Points
Consumer automobiles require a three-point safety harness. Formula 1 cars use a five-point harness. The three-point was not chosen because it was safest — it was chosen because it was the minimum viable product that did not cut into auto industry profits.
Dr. Floyd's argument: that same logic is now governing AI deployment. We are giving institutions just enough governance to click — not enough to protect them at speed.
"We don't have to stomp on the brakes — but we need to press on them a little. Speeding forward to be first to market isn't necessarily the way to go. Doing your due diligence in development is simply good business." — Col. Kathy Swacina
Col. Swacina's point: risk tolerance is not a fixed number. Every agency, organization, and industry partner has to actively decide how much risk it is willing to accept — and that decision has to be made consciously, not by default.
Human in the Loop — at Every Stage
Col. Swacina was direct: we should not give command and control to the computers. AI applies physics and math to problems. It does not apply morality or ethics. That is the human's job — and it cannot be delegated.
"AI agents are fine for mundane and repeatable tasks or mathematical calculations. But humans must be involved in the final decision-making — especially when the outcome could be life or death. Humans bring emotion and ethics to a decision that a mathematical calculation simply cannot." — Col. Kathy Swacina
Taiye Lambo pushed the concept further: the human in the loop is not just a checkpoint at the end. It must be embedded from the beginning of the AI life cycle — from Strategy through Design, Transition, Operation, and Continuous Service Improvement.
"If you wait until the Operation or Transition stage, it's probably too late. Things will go wrong. You have to add the E — early human in the loop." — Taiye Lambo
This maps directly to what Human Signal calls the Trust Gap: governance that exists on paper but cannot intervene at execution is structurally insufficient. The control must be present at the point where decisions are being made — not installed after the damage is done.
Project Cerebellum and the TAIMScore™ Framework
Project Cerebellum is HISPI's AI governance think tank. The name is deliberate — the cerebellum is the part of the brain that maintains balance and prevents you from falling. The frontal lobe drives dopamine-fueled decision-making: "I'm going to create this system because I can." The cerebellum asks: should we? And what happens if we do?
The framework built by Project Cerebellum is the Trusted AI Model — assessed through the TAIMScore™. It harmonizes seven international responsible AI principles with U.S. and global standards including the NIST Cybersecurity Framework, HIPAA, PCI, SOC 2, the EU AI Act, the EU GDPR, and seven ISO standards — including ISO 42001, the first AI management system standard organizations can be certified against.
Everything is free and open source. The work is volunteer-driven. Taiye Lambo leads the Harmonization working group himself.
"AI is both a threat and an opportunity. It's a powerful tool — but it can also be weaponized. AI systems don't have a moral compass. That's why the human must be in the loop." — Taiye Lambo
What the TAIMScore™ Actually Measures
Dr. Floyd asked the right question: is this like a credit score for AI governance? Taiye Lambo corrected the frame — it is closer to a Dun & Bradstreet score. A D&B number proved you were a real, vetted company. The TAIMScore™ proves your AI use case has been assessed against a credible governance baseline.
Critically: the TAIMScore™ does not rate AI systems. It rates how you use them.
"You can still use it to assess the model or the LLM (large language model). But it's a shared responsibility — and we're focusing more on the responsibility of the user. Before you have a use case, you need a business case. Start with: why are we doing this, and what's the benefit?" — Taiye Lambo
The framework includes 72 controls in total but identifies a Top 20 that every AI use case must be assessed against to establish baseline trust. Think of the Top 20 as the three-point harness: the minimum required. The full set of 72 controls is the five-point harness. Start with the 20.
The TAIMScore™ also predicts the probability of an AI incident based on your assessment score — low, moderate, or high. This does not mean you stop the project. It means you proceed with caution and close the gaps. And it ensures that the risk you are accepting is commensurate with your organization's documented risk appetite.
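The score-to-band-to-decision flow described above can be sketched in a few lines of code. To be clear, the thresholds, control weights, and function names below are hypothetical, invented purely for illustration; the actual TAIMScore™ methodology is defined by the open-source framework at ProjectCerebellum.com, not by this sketch.

```python
# Illustrative sketch ONLY: hypothetical thresholds and equal control
# weighting, NOT the actual TAIMScore(TM) methodology. It shows the shape
# of the flow the episode describes: assessment score -> predicted
# incident-probability band -> proceed / proceed-with-caution decision,
# checked against the organization's documented risk appetite.

def assessment_score(controls: dict[str, bool]) -> float:
    """Percentage of assessed controls that passed (equal weighting assumed)."""
    if not controls:
        return 0.0
    return 100.0 * sum(controls.values()) / len(controls)

def risk_band(score: float) -> str:
    """Map a 0-100 score to a predicted incident-probability band.
    Cutoffs here are made up for illustration."""
    if score >= 80:
        return "low"
    if score >= 50:
        return "moderate"
    return "high"

def gate_decision(score: float, risk_appetite: str) -> str:
    """A weak score does not stop the project; it flags that the risk being
    accepted exceeds the documented appetite and gaps must be closed."""
    order = {"low": 0, "moderate": 1, "high": 2}
    band = risk_band(score)
    if order[band] <= order[risk_appetite]:
        return f"proceed ({band} predicted incident probability)"
    return f"proceed with caution: close gaps ({band} exceeds appetite '{risk_appetite}')"

# Example: a use case passing 14 of a hypothetical 20-control baseline.
controls = {f"control_{i}": i < 14 for i in range(20)}
score = assessment_score(controls)        # 70.0
print(gate_decision(score, "moderate"))
```

The point of the sketch is the final comparison: the band alone means nothing until it is checked against a risk appetite the organization has consciously documented, which is exactly Col. Swacina's earlier point that risk tolerance must be decided, not defaulted.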
The workshop goes further: organizations can bring their in-house AI systems to live-fire exercises built on real-world scenarios and leave with both individual and group TAIMScores™. Participants can also earn a TAIMScore™ Assessor certificate through HISPI. The TAIMScore™ Assessor Workshop is available now.
Governance Is Not a Weapon. It Is Your Conscience.
Dr. Floyd asked Col. Swacina: do you agree that governance is actually a weapon? Her answer reframed the question entirely.
"Governance is more like our conscience. In Project Cerebellum, it helps with the guardrails — using your cerebellum instead of your frontal lobe." — Col. Kathy Swacina
The frontal lobe is dopamine and instant gratification. The cerebellum is balance and consequence. Project Cerebellum forces the governance questions: are we measuring what we're creating? Are we managing what we're deploying? Can we prove it is safe?
This is precisely the GASP™ diagnostic in operational form: governance as a structural problem. The institution is not missing the right software. It is missing the structure that asks the right questions before shipping.
The Death Spiral: What Ungoverned AI Looks Like in Practice
Dr. Floyd pushed Col. Swacina through a direct scenario: a company in 2026 ignores governance. They run ungoverned models — no guardrails, nothing. What happens when the lawsuits hit?
"When a company or agency creates something that doesn't do what it's supposed to — or goes rogue — you lose trust. And trust has to be equated with the organization. When you lose it, you lose credibility, you lose your customer base, and you lose the revenue." — Col. Kathy Swacina
The death spiral is not abstract. It is sequential: ungoverned deployment → AI failure → public exposure → trust collapse → customer attrition → revenue loss → executive accountability. The CFO and CEO typically absorb the first rounds of consequence. Sometimes the CIO too.
Taiye Lambo's illustration grounded it: the ChatGPT hallucination case, in which a lawyer submitted fabricated case-law citations to a federal court. Once the judge discovered the citations did not exist, the filing collapsed and the lawyer faced sanctions and professional discipline. That failure was not a technology problem. It was a governance problem: no policy required that output from a large language model be independently verified before submission.
"Trust but verify. That lawyer blindly trusted ChatGPT as an authoritative source — which no LLM (large language model) should ever be." — Taiye Lambo
The New CEOs Are the CIOs
Dr. Floyd made a direct assertion: in an era where AI governs operations, the CIO holds the most consequential executive authority in the organization. They should be compensated accordingly.
Col. Swacina confirmed it from experience: effective CIO training requires knowing not just the network and the technology — but the business. What is it trying to achieve? What are its systems? How does technology serve the mission? The CIO who can answer those questions is operating at CEO-level strategic depth.
Dr. Floyd's corollary: it is easier for a CIO to scale up to CEO than vice versa. The gaps going the other direction — technical command, operational awareness, infrastructure accountability — are too wide in an AI-driven environment.
PACE Planning for AI Systems
Col. Swacina introduced PACE — the military framework for communication resilience — and applied it directly to AI governance.
Primary
Your primary AI tool or system. Fully operational, fully governed. The system you rely on under normal conditions.
Alternate
A backup approach if the primary system fails or produces unacceptable outputs. Identified and ready before you need it.
Contingency
A third path for partial system breakdown. May be lower-tech or manual. Must be documented and understood by the team before the incident.
Emergency
What happens if everything fails? Who gets in the car and delivers the message? Every AI system needs a defined emergency human fallback.
The military does not wait for the primary network to fail before identifying the alternate. Every AI deployment should have the same rigor applied from day one.
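Applied to software, PACE is essentially an ordered fallback chain with a mandatory human terminus. The sketch below is a minimal illustration under assumed names and behavior (the episode does not prescribe an implementation): each automated tier is tried in strict PACE order, and the Emergency tier is always a defined human contact, never another model.

```python
# Minimal PACE fallback sketch (illustrative only; class and parameter
# names are assumptions, not from the episode). Primary -> Alternate ->
# Contingency are automated tiers; Emergency is always a defined human
# fallback, mirroring the military pattern described above.

class PaceChain:
    def __init__(self, primary, alternate, contingency, emergency_human):
        # Tiers are attempted strictly in PACE order.
        self.tiers = [("primary", primary),
                      ("alternate", alternate),
                      ("contingency", contingency)]
        self.emergency_human = emergency_human  # a contact, not a function

    def run(self, task):
        for name, handler in self.tiers:
            try:
                result = handler(task)
                if result is not None:   # None models an unacceptable output
                    return name, result
            except Exception:
                continue                 # tier failed; fall through to next
        # All automated tiers exhausted: escalate to the defined human.
        return "emergency", f"escalate to human: {self.emergency_human}"

def _broken(task):
    """Stands in for a primary AI system that is down."""
    raise RuntimeError("model unavailable")

# Example: primary raises, alternate produces an unacceptable (None)
# output, and the contingency (a documented manual path) succeeds.
chain = PaceChain(
    primary=_broken,
    alternate=lambda t: None,
    contingency=lambda t: f"manual procedure applied to {t}",
    emergency_human="on-call duty officer",
)
tier, result = chain.run("route the message")
print(tier, "->", result)   # contingency -> manual procedure applied to ...
```

The design point is that the emergency tier is data (who to call), not code: when every automated path fails, the system's last output is an escalation to a named human, identified before the incident rather than during it.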
Call to Action
For practitioners and architects — Taiye Lambo's CTA: Do something. Take the first step. Go to ProjectCerebellum.com, download the open-source framework, and sign up for the TAIMScore™ tool. There is also a gamified tabletop exercise that simulates a real AI governance scenario across four levels — scoring you on customer satisfaction, financial performance, and reputation. It will show you exactly what doing the right thing actually costs — and why it is still worth it.
For policy leaders and C-suite executives — Col. Swacina's CTA: Meet the regulatory standards. That is how you stay alive. Apply guardrails for data resilience, model governance, and infrastructure redundancy. Build an AI adoption plan — not just a tool rollout. Communicate across the C-suite. Get everyone on the same sheet of music before the first system ships.
The Binary Choice
"It's binary. You can continue to build fast and pray that nothing breaks — or you can build with a nervous system. One path leads to liability. The other path leads to authority." — Dr. Tuboise Floyd
Study the mission. Study the vision. Implement the controls. Don't be a casualty of your own inventory.
Full Transcript
Lightly edited for readability. Speaker labels and timestamps preserved from original recording.
[00:00] Opening & Guest Introductions
Dr. Tuboise Floyd: Welcome to Human Signal. I'm your host, Dr. Tuboise Floyd. Today we have a dynamic show — two fantastic guests. Ladies first. Col. Kathy Swacina — retired U.S. Army Colonel, 30-year career leading multi-million dollar operations and IT programs, Colonel promotable to General. Currently Chief Information Officer at SherpaWerx and Chairperson for Advocacy at HISPI Project Cerebellum. An expert in defense networks and change management. Welcome, Kathy.
Col. Kathy Swacina: Thank you, Dr. Floyd.
Dr. Tuboise Floyd: Our second guest — Taiye Lambo. Visionary author, serial entrepreneur, pioneer of the virtual CISO and Chief AI Officer roles. Founder and Chief AI Officer at HISPI — Holistic Information Security Practitioner Institute. Also founder of Cloud Assurance, recognized as a Gartner Cool Vendor. Welcome, Taiye.
Taiye Lambo: Thank you, Dr. Floyd. It's an honor and a privilege.
[02:42] The Why: Are We Doomed?
Dr. Tuboise Floyd: Trillion-dollar companies are racing to release models that hallucinate and manipulate users. If we continue without a holistic control layer — I think we're doomed. What are your thoughts, Taiye?
Taiye Lambo: I have to answer as an optimist. Not yet — we're not doomed yet, but we can be doomed. We have the opportunity to put guardrails in place without slowing the race. The window is getting narrower. You're going 200 miles an hour and you're still trying to fix the brakes. The closer you are to that crash point, the higher your chances of being doomed.
[04:35] The Safety Belt Analogy
Dr. Tuboise Floyd: The law requires a three-point harness for consumer automobiles — but Formula 1 uses a five-point. The three-point was the minimum viable product that didn't cut into profits. Kathy — is it all about the type of safety belt?
Col. Kathy Swacina: It comes down to risk tolerance. The five-point harness is optimal. When you go to a three-point, you're taking on more risk. The question is: how much risk is your organization willing to accept? We don't have to stomp on the brakes — but we need to press on them. Speeding forward to be first to market isn't necessarily the way to go. Due diligence in development is just good business.
[07:13] Human in the Loop
Col. Kathy Swacina: We shouldn't give command and control to the computers. We have to keep a human in the loop. AI is a tool: it applies physics and math, not morality and ethics. AI agents are fine for mundane and repeatable tasks. But humans must be in the final decision-making, especially when the outcome could be life or death. Humans bring emotion and ethics to a decision; a mathematical calculation simply cannot.
Taiye Lambo: The human in the loop is critical at every stage of the AI life cycle — Strategy, Design, Transition, Operation, Continuous Service Improvement. If you wait until the Operation stage, it's probably too late. Early human in the loop. Add the E.
[12:26] Project Cerebellum and the TAIMScore™ Framework
Taiye Lambo: With Project Cerebellum, we believe AI should cause no harm — meaning no unintended consequences — and should enhance the quality of human life. The how: the TAIMScore™ framework. It harmonizes seven international responsible AI principles with U.S. and global standards: NIST Cybersecurity Framework, HIPAA, PCI, SOC 2, the EU AI Act, the EU GDPR — up to 4% of global revenue in penalties for violations — and seven ISO standards, including ISO 42001, the first AI management system standard organizations can be certified against. At the heart of the model is trust. We also address the full AI life cycle — from strategy through continuous improvement. AI is both a threat and an opportunity. AI systems don't have a moral compass. The human has to be in the loop.
[17:02] Getting Involved: Certifications and Training
Taiye Lambo: First step: go to ProjectCerebellum.com. Everything we produce is free and open source, volunteer-driven. Download the framework, watch the overview videos. If our mission resonates — safe, secure, responsible, trustworthy AI through harmonization — join us. One to two hours a month. For those who want hands-on training, we run monthly half-day workshops simulating a real-world AI risk management assessment against a system hosted in AWS GovCloud. The results are fairly shocking. You can also earn a TAIMScore™ Assessor certificate issued by HISPI. Six CEUs awarded, pending independent approval. We're also developing a Train the Trainer program.
[21:14] The Kobayashi Maru
Dr. Tuboise Floyd: So you're throwing C-suite leaders into the Kobayashi Maru — the no-win scenario. See how people make decisions under high stress.
Taiye Lambo: Couldn't have said it better.
[21:45] What the TAIMScore™ Actually Measures
Dr. Tuboise Floyd: Is the TAIMScore™ like a credit score for AI governance?
Taiye Lambo: It's more like a Dun & Bradstreet score — it proves your use case has been properly assessed. We're not rating AI systems per se. The system is a tool. It's how you use the tool. We're judging use cases — you self-assess your use case against the framework. The whole idea is to shift responsibility to the user of the AI system. Before you have a use case, you need a business case. Start with: why are we doing this?
Dr. Tuboise Floyd: And you're not Gartner — you don't rate vendors, companies don't pay you for a rating.
Taiye Lambo: Correct. We're not a rating company.
[26:13] Governance Is the Conscience, Not the Weapon
Col. Kathy Swacina: Governance is more like our conscience. In Project Cerebellum, it helps with the guardrails — using your cerebellum instead of your frontal lobe. The frontal lobe is dopamine and instant gratification: "I'm going to create this system because I can." Project Cerebellum and the TAIMScore™ bring in the governance questions — measuring and managing what you're creating.
[30:56] The Death Spiral
Col. Kathy Swacina: When a company creates something that doesn't do what it's supposed to — or goes rogue — you lose trust. And once an organization loses trust, it's nearly impossible to get back. Intelligence is abundant, but trust is scarce. The death spiral: move too fast, skip risk management, ship something with backdoors or fundamental problems — and you lose the people, the profit, and the accountability. CFOs and CEOs are typically the ones who get fired.
Taiye Lambo: Every CIO I've worked for as a CISO — I made sure trust was in our vision and mission statement. One of the ways we demonstrate trust is by adopting best practices and standards. Our TAIMScore™ framework's Top 20 controls are based on high-profile real-world AI incidents — including the ChatGPT case law hallucination that got a lawyer disbarred. Trust but verify. No LLM (large language model) should ever be treated as an authoritative source without fact-checking.
[38:57] Rules of Engagement
Dr. Tuboise Floyd: What I keep hearing is: rules of engagement. Here are the scenarios — here are the rules. I'm watching the fallout of harmful AI and bad agentic decisions in boardrooms and on the front lines. And I see a lack of overarching professional standards in the tech industry. When attorneys pass the bar, they take an oath. When doctors get licensed, they take an oath. AI is the new frontier — and just like the Industrial Revolution, controls follow experience. But AI is compressing that timeline dramatically. Kathy — you said the speed-to-scale window has closed from 10 years to under 5.
Col. Kathy Swacina: AI is running faster than we expected. It's learning faster than we anticipated. You need to look at risk and responsibility — who is in charge of that in your organization. Tap the brakes. Make sure the guardrails are in place. Have everyone in the C-suite aligned from the start — on the same sheet of music. Build an AI adoption plan: what steps before introducing the tool, while introducing it, and what you expect to get out of it.
[42:22] PACE Planning for AI
Col. Kathy Swacina: PACE: Primary, Alternate, Contingency, Emergency. With AI, you have to think through all four levels for your tools and systems. Duplicate critical components. Evaluate your data, your models, your infrastructure. In the military — primary network is JWICS, alternate is email, contingency is satellite phones, emergency is someone gets in a car and delivers the message physically. You need to think through all those steps when designing AI systems.
[44:29] Call to Action
Taiye Lambo: Do something. Go to ProjectCerebellum.com. Sign up for the TAIMScore™ tool. Play the gamified tabletop exercise — it will run you through AI governance scenarios from Level 1 to Level 4, scoring you on customer satisfaction, financial performance, and reputation. Doing the right thing does cost money. But while others are crashing at 200 miles an hour, you're still moving — and when it's time to stop, your brakes work.
Col. Kathy Swacina: Meet the regulatory standards. That's how you stay alive. Apply guardrails for data resilience, model governance, and infrastructure redundancy. Build an AI adoption plan. Communicate across the C-suite. We are the gatekeepers.
[49:37] The Binary Choice
Dr. Tuboise Floyd: It's binary. You can continue to build fast and pray that nothing breaks — or you can build with a nervous system. One path leads to liability. The other path leads to authority. If you want to survive the next 24 months in this AI landscape, you need to understand what Project Cerebellum is doing. Thank you, Taiye. Thank you, Colonel. Study the mission. Study the vision. Implement the controls. Don't be a casualty of your own inventory. This is Dr. Floyd signing off.
About the Guests
Col. Kathy Swacina (USA, Ret.) is a retired U.S. Army Colonel with a 30-year career leading multi-million dollar operations and IT programs. She currently serves as Chief Information Officer at SherpaWerx and Chairperson for Advocacy at HISPI Project Cerebellum. She is an international strategic technology board member and expert in defense networks and change management.
Taiye Lambo is the Founder and Chief AI Officer of HISPI — Holistic Information Security Practitioner Institute and the lead of the Project Cerebellum AI think tank. He is also Founder and CTO of Cloud Assurance, recognized as a Gartner Cool Vendor. Connect with him on LinkedIn.
About the Host: Dr. Tuboise Floyd
Dr. Tuboise Floyd is the founder of Human Signal, an independent AI governance research and media platform based in Washington, DC. A PhD social scientist and former federal contracting strategist, he reverse-engineers institutional AI failures and designs governance frameworks that survive real humans, real incentives, and real pressure. Connect on LinkedIn.
Build Your AI Governance Competency
TAIMScore™ Assessor Workshop — Learn to assess AI governance maturity using the TAIMScore™ framework. Live-fire scenarios. Real-world case studies. Professional certificate issued by HISPI.
→ TAIMScore™ Assessor Workshop → Register Now
Subscribe to The AI Governance Briefing — New episodes every month. No vendor decks. No compliance theater. Just signal.
→ Subscribe to the Podcast
→ ✦ Underwrite Human Signal