
The AI Governance Record · Issue No. 015

Issue No. 015 · Guest Feature · AI Governance · Systems Design

The Veteran's Diagnosis

High-performing people. Low-performing ecosystems. A Marine's field report from inside the rooms where AI decisions get approved.

A Guest Feature with Dr. Rhonda Farrell — conducted by Dr. Tuboise Floyd, Founder, Human Signal

Human Signal™ · April 2026


Dr. Rhonda Farrell has spent more than twenty years inside rooms most practitioners will never enter. The Marine Corps. The Department of Defense. The NSA. The broader intelligence community. As a transformation strategist she has shaped programs at the Department of State, the Department of Homeland Security, and across enterprise advisory engagements spanning the full concept-to-field arc. She is an ASQ Fellow, an IEEE senior member, an ISSA Distinguished Fellow, a United States Marine Corps veteran, and the founder of Cyber and STEAM Global Innovation Alliance — a transformation effort building toward 10,000 partners and serving 1 million people globally.

She has sat in rooms where AI decisions get approved. And she arrived at this conversation with a thesis most leaders are not ready to hear.

Organizations are not failing because they lack effort. They are failing because policy, governance, process, and workforce were never designed to operate as a system. Technology scales that system. It does not fix it. In an AI-first environment, the cracks are no longer invisible. AI is the revealer.

What follows is the architecture of that diagnosis — and the Monday-morning move that comes after it.


I

High-performing people. Low-performing ecosystems.

The pattern Farrell keeps seeing across three worlds — the Marine Corps, federal government, and enterprise advisory — is not about effort. The people are trying. In the Corps, she says, execution discipline is non-negotiable. In federal mission environments, mission clarity is strongest. In enterprise, innovation and ideation are constant under competitive pressure. And yet across all three domains, outcomes break down in the same place.

"What I saw consistently was high-performing people operating inside low-performing ecosystems. It wasn't because the people, the workforce, the leaders were not trying their hardest. It was because the ecosystem itself was misaligned. Policy says one thing, process may enable another, platforms — whether legacy systems or modern tools — are configured around partial assumptions, and the workforce and the leaders are left to reconcile all of it in real time."

— Dr. Rhonda Farrell

That sentence is the diagnosis. Misalignment doesn't just create friction, she notes — it compounds exponentially. Delays, rework, inconsistent data, decision bottlenecks that no single function owns. Each function is optimized in isolation. The integration between them is what was never designed.

This is not abstract. It is the byproduct of acquisition and funding models that scope each component separately, wire them into laser-focused contracts, and never engineer the cross-functional execution pathways between them. Organizations invest heavily. They still underperform. Not because the strategy is wrong — because the interconnectivity was never purchased.

Effort cannot compensate for a structural lack of interconnectivity. Eventually, dysfunction across the four — policy, process, people, platforms — compounded by competing priorities, wins.

The result is an organization that runs on heroic leadership and heroic workforce effort. That pattern does not scale. It burns people out. And when the operational pressure from AI adoption hits, the ecosystem has no reserve capacity to absorb it.

Listen to Dr. Farrell · A Clip from the Forthcoming Episode

"Policies and trainings won't fix culture. Mindset will."

— Dr. Rhonda Farrell


II

Where the system cracks.

If misalignment is the condition and effort is not the fix, the practical question becomes: where, precisely, does the breakdown happen? Farrell's answer is structural and unsparing.

"It cracks at the point of translation. Policy is set at the executive level and aligned to strategic intent, but process lives in operations where the work actually gets done. There is no deliberate bridge between the two. The workforce ultimately becomes the translation. And translation without structure introduces variability."

— Dr. Rhonda Farrell

The variability is not a single failure. It is distributed and compounding. Different teams interpret the same policy differently. Different roles make decisions based on incomplete or inconsistent guidance. Platforms are configured without full alignment to policy intent or process reality. The outputs become fragmented. The data becomes conflicting. The decisions become delayed. And over time, the inconsistency erodes trust — without leaders ever being able to name the root cause.

Her operational response is a concept she calls traceability. In her framing, a policy is only real if it can be traced to a specific process step, owned by a named individual or role, and reinforced through the platforms that individual operates in. No traceability, no governance.

This is the same architectural failure Issue 014 of this newsletter named at a different altitude. There it was called the mandate-mechanism gap — regulators write mandates, institutions produce documentation, and nothing in between connects the two. Farrell articulates the same gap from the operator's seat. Her language: "policy without execution is just theory." The mechanism is the trace from policy to platform.


III

The four-P architecture.

Farrell's integrated-design model sits on four elements. She calls them the four P's.

Policy

Set at the executive level. Aligned to strategic intent. The declared governance stance of the organization.

Process

Where the work actually gets done. Operations. The workflow in which policy either executes — or reveals itself as aspirational.

People

The accountable individuals and roles that own decisions. Where decision rights are clarified — or where translation happens by default.

Platforms

Legacy systems and modern tools. The enforcement layer. The technical substrate that either reinforces policy intent or quietly contradicts it.

The unit of analysis is not any one of the four. It is the interaction point between them. Policy needs to trace to process. Process needs to bind to named people. People need to operate through platforms that enforce the policy they were told to follow. When any of these bindings fail, the ecosystem runs on translation. And translation scales poorly under AI-level velocity.

Most leaders are trained to manage vertically — policy in one lane, process in another, platforms in another, workforce somewhere in between. But execution doesn't happen vertically. It happens horizontally across the integration points. That's where the biggest pain points and the biggest unrealized opportunities exist.

The mindset shift Farrell pushes is from managing components to building bridges. From optimizing functions to designing interaction. The organizational chart is not the operating system. The horizontal integration is.


IV

AI as the revealer.

Here is where the veteran's diagnosis meets the AI moment directly. Farrell is unambiguous: AI did not create the misalignment. It made it visible.

"AI is exposing huge decision gaps. Who owns the decision. Who makes the call. What triggers AI automated actions. This ambiguity might have been manageable in legacy or prior digital transformation efforts, but it is now immediately visible under speed."

— Dr. Rhonda Farrell

That second sentence is the entire governance case for moving now. The ambiguity was always there. Slow systems absorbed it. AI-speed systems do not. The moment an automated action executes without a clearly owned decision behind it, the governance failure is both instantaneous and legible — to a regulator, to a plaintiff, to a journalist, to a board.

Farrell's corrective is not to slow AI down. It is to stop deploying it on top of unmeasured architecture. She describes a phased approach: pilot, validate, govern, then scale. Secure and compliant technology baseline first. Then the NIST Cybersecurity Framework, the Cybersecurity Maturity Model, the NIST AI Risk Management Framework, and the TAIMScore™ Trusted AI Model — used in sequence, not in parallel, as a phased maturation program that grows capability alongside control.

The alignment between her position and Human Signal's is direct. Issue 014 of this newsletter argued that NIST AI RMF is the mandate and TAIMScore™ is the mechanism. Farrell's work, from the inside, validates why the mechanism is necessary: the mandate alone cannot reach the point of execution fast enough for AI-speed organizations. Her traceability model is the operator's version of the same claim.


V

The Monday morning move.

A diagnosis that does not translate into action is another form of performative work. Farrell's final practical contribution to this issue is an exercise leaders can run tomorrow morning. She has walked leadership teams through it live. The breakthrough, she says, typically takes hours, not quarters.

The exercise is straightforward. Take a single, currently active policy or business rule. Then trace it.

Step One — Decision Surface

What decision is this policy supposed to enable or constrain? Who has the decision rights? Is that decision happening in real time, or is the policy acting as a reference document that nobody consults at the point of execution?

Step Two — Process Binding

Where does this policy live in process? Which workflow step? Who owns that step by name? If the answer is "the team" or "operations" — the policy is not bound to a person. The translation is happening by default.

Step Three — Platform Enforcement

What platform or system is actually enabling or constraining this policy in action? Is the technical enforcement layer aligned to policy intent — or is the platform configured around a partial assumption that contradicts the policy?

Run this once. The gaps surface immediately. Policies with no execution path. Processes with no clear ownership. Decision points with no defined authority. Platforms that do not reinforce the intended strategy. The exercise reveals where the ecosystem is relying on human translation instead of integrated design — and in Farrell's experience, the conversation immediately shifts from "what should we be doing?" to "where do we need to redesign the ecosystem?"
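In practice, the three steps amount to a traceability audit: each active policy is checked for a defined decision authority, a named process owner, and a platform control that enforces it. Here is a minimal sketch of that check in Python; the field names (`decision_owner`, `process_step`, `platform_control`) are illustrative assumptions, not Farrell's terminology.

```python
# A minimal sketch of the three-step trace exercise as a data check.
# Field names are illustrative assumptions, not Farrell's terminology.
from dataclasses import dataclass


@dataclass
class PolicyTrace:
    policy: str
    decision_owner: str = ""    # Step One: who holds the decision rights
    process_step: str = ""      # Step Two: workflow step the policy binds to
    step_owner: str = ""        # Step Two: named individual or role
    platform_control: str = ""  # Step Three: system that enforces the policy

    def gaps(self) -> list[str]:
        """Return the structural gaps the exercise is designed to surface."""
        found = []
        if not self.decision_owner:
            found.append("no defined decision authority")
        if not self.process_step or not self.step_owner:
            found.append("policy not bound to a process step and owner")
        # Vague ownership like "the team" signals translation by default.
        if self.step_owner.strip().lower() in {"the team", "operations"}:
            found.append("ownership is a group, not a named role")
        if not self.platform_control:
            found.append("no platform enforcement layer")
        return found


# A hypothetical policy run through the trace:
trace = PolicyTrace(
    policy="High-risk model outputs require human review",
    decision_owner="Head of Model Risk",
    process_step="",          # gap: no workflow step identified
    step_owner="the team",    # gap: ownership by default
    platform_control="",      # gap: nothing enforces the policy
)
print(trace.gaps())
```

Running the audit once across a handful of active policies produces exactly the inventory the exercise describes: policies with no execution path, ownership that resolves to a group rather than a person, and enforcement layers that do not exist.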

That's the Monday morning move. Not more planning. Not more training. Targeted redesign at the point where execution is actually breaking.

When the inevitable pressure arrives — a regulatory inquiry, a model failure, an audit, an incident — the organizations that have done this work do not have to invent their governance in real time. Their decision rights are aligned. Their processes are bound. Their platforms enforce. And as Farrell puts it: "pressure doesn't break solutions. It reveals them."


Coming Soon to The AI Governance Briefing

The Veteran's Diagnosis — A Conversation with Dr. Rhonda Farrell

The complete interview — with every quote in her own voice — is dropping soon on The AI Governance Briefing. Subscribe so you don't miss it.


About the Guest

Dr. Rhonda Farrell

CEO of Global Innovation Strategies and founder of Cyber and STEAM Global Innovation Alliance. United States Marine Corps veteran. ASQ Fellow. IEEE senior member. ISSA Distinguished Fellow. Twenty-plus years across the Marine Corps, DoD, NSA, and the broader intelligence community, with advisory engagements spanning the Department of State, the Department of Homeland Security, and enterprise transformation. Her work spans the full concept-to-field arc across policy, people, process, and platforms — from RMF Lifecycle maturation at DoD and NSA to enterprise transformation in commercial and critical-infrastructure environments.

Her current work includes the What's Next 2026 executive series and Cyber Advantage 2026: Risk Readiness and Strategic Control — both published weekly on YouTube — alongside her contributions to HISPI, American Society for Quality leadership forums, and white papers with AEA International including Securing Smart Cities in the Age of AI and Cybersecurity for the Frontline.

Connect with Dr. Farrell on LinkedIn →

Related Research

The Mandate-Mechanism Gap

Issue 014 of this newsletter named the structural failure Dr. Farrell describes from inside the rooms: regulators publish mandates, institutions produce documentation, and nothing in between connects the two. The position paper is the long-form argument. Farrell's traceability model is the operator's version of the same claim.

Read the Position Paper →

About Human Signal

Dr. Tuboise Floyd | Founder, Human Signal

Human Signal is an independent AI governance research and media platform dedicated to institutional risk analysis. We reverse-engineer institutional AI failures and develop frameworks operators can use when it matters — not frameworks designed to satisfy an audit.

Govern the machine. Or be the resource it consumes.

— Dr. Tuboise Floyd · Founder, Human Signal

#AIGovernance #DrRhondaFarrell #SystemsDesign #MarineCorps #DoD #HISPI #ProjectCerebellum #TAIMScore #HumanSignal #AIGovernanceRecord