AI literacy is no longer a technical elective — it is a civic and professional survival skill. In this live open forum recorded at Georgetown University, Dr. Tuboise Floyd (founder of Human Signal and host of The AI Governance Briefing) and Taiye Lambo (founder of HISPI — the Holistic Information Security Practitioner Institute) answered student questions about AI governance, deepfakes, the limits of AI tools, and how to navigate the job market in an era of accelerating AI adoption.
The central message: never blindly trust an AI system — always verify. Key takeaways and highlights from the discussion appear below, followed by the full transcript, lightly edited for readability.
Key Takeaways
- Employers now expect candidates to critically evaluate AI outputs — not merely use them
- Frame your AI literacy as risk awareness, not tool proficiency — it signals professional maturity
- AI literacy is not a STEM skill. It is a civic and professional survival skill for every major
- The standard in AI governance is "never blindly trust — always verify" — stronger than "trust but verify"
- Human in the loop is a structural requirement, not a slogan — and it must be an honest human in the loop
- Training data bias has life-or-death consequences in healthcare, environmental science, and defense
- Invest in AI governance and risk certifications over tool-specific training — governance knowledge transfers; tool knowledge expires
AI Literacy as a Career Survival Skill
Opening the forum, Dr. Tuboise Floyd gave students a framework to carry into every interview, internship, and co-op search:
- Employers across every sector now expect employees to critically evaluate AI outputs — not just use them. Being a digital native who can pick up tools quickly is table stakes. The market now demands judgment about when and whether to trust those outputs.
- Frame your AI literacy as risk awareness, not tool proficiency. This signals maturity. It shows you're thinking about downstream consequences — not just immediate outputs.
- Humanities and social science students: this applies to you. AI shapes narratives, content moderation, and policy recommendations. If you're in foreign service, policy, or the social sciences, understand AI as a policy recommendation engine — and understand the bias baked into those recommendations.
- The goal isn't to become a data scientist. The goal is to ask the right questions: Is this the right tool? Will it cause harm? What assumptions were embedded in the model?
"AI literacy is not a STEM skill. It is a civic and professional survival skill." — Dr. Tuboise Floyd
Spotting Deepfakes in Short-Form Media
Students raised the challenge of AI-generated video spreading through short-form platforms — particularly affecting older generations who are not primed to question what they see. Dr. Floyd shared a personal example: a TikTok video of an alligator crashing through a Walmart in Delray Beach, Florida — complete with a child and shopping cart — that was so realistic he nearly believed it until he noticed that the man standing near the child ran in the opposite direction, and a bystander by the door never moved.
One student made a critical observation about short-form content specifically:
"With short-form content, you're not in analysis mode. You're consuming for a quick hit of dopamine or shock value. You move on and you don't analyze it. The tells are there — like a doctor holding a syringe with the palm of his hand in a physically impossible way — but you only catch them if you deliberately slow down and look."
The skill of spotting AI-generated content is not a computer science skill. It is an observational and critical thinking skill — one that institutions like Georgetown are already developing across every major through existing liberal arts frameworks.
The Lawyer Case: What Happens When You Overtrust AI
Taiye Lambo described one of the most publicly documented cases of AI over-reliance in professional practice — a case that appears in Human Signal's Failure Files™ as a canonical example of structural insufficiency:
"A lawyer used ChatGPT to generate case law citations in a lawsuit involving Meta. The judge accepted those citations. The lawyer's client won. Then the judge discovered the case laws were entirely fabricated by ChatGPT. The decision was reversed, and the lawyer was disbarred."
The failure was not that the AI hallucinated — LLMs are known to do that. The failure was that the lawyer never verified the output, and when pressed, the AI confidently affirmed that the cases were real. The lawyer took that at face value.
Lambo's principle, drawn from cybersecurity access control, applies directly:
"In security we say 'trust but verify.' With AI, the standard must be higher: never blindly trust — always verify. Because even if the model is capable, if the data is corrupted or fabricated, it's garbage in, garbage out."
This case is precisely the kind of structural failure the L.E.A.C. Protocol™ is designed to surface — a failure not of the model's capability, but of the governance structure around it. The institution had no intervention layer between AI output and consequential action.
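To make "intervention layer" concrete: in practice it can be as simple as a hard gate that refuses to let a model's output inform a consequential action until a named human has verified it against primary sources and signed off. The sketch below is a minimal, hypothetical Python illustration; the `Review` record and `intervention_gate` function are illustrative stand-ins, not part of the L.E.A.C. Protocol™ or any published framework.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    reviewer: str           # a named, accountable human
    verified_sources: bool  # did they check the output against primary sources?
    approved: bool

def intervention_gate(ai_output: str, review: Optional[Review]) -> str:
    """Block an AI output from informing a consequential action until a human has verified it."""
    if review is None:
        raise PermissionError("No human review attached; the output cannot be acted on.")
    if not review.verified_sources:
        raise PermissionError(f"{review.reviewer} signed off without checking primary sources.")
    if not review.approved:
        raise PermissionError(f"{review.reviewer} rejected the output.")
    return ai_output  # only now may the output feed the decision

# Example: a filing built on AI-generated citations is blocked until someone
# has actually looked the cases up.
draft = "Brief citing several AI-generated case citations (hypothetical)"
try:
    intervention_gate(draft, review=None)
except PermissionError as err:
    print(err)  # -> No human review attached; the output cannot be acted on.
```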
The Honest Human in the Loop
"Human in the loop" has become a widely used phrase in AI governance — but Taiye Lambo argues it is incomplete without one critical modifier:
"You need an honest human in the loop. You can have a human in the loop who rubber-stamps a bad output and blames the system when something goes wrong. That is not governance — that is liability deflection."
For high-risk AI systems — in healthcare, defense, environmental monitoring, and financial systems — honest human oversight means taking accountability for decisions made using AI outputs. It means having the courage to say "I made a mistake" so it doesn't happen again. Without that standard, human oversight becomes a checkbox rather than a safeguard.
This connects directly to the Noise Discipline framework: the problem isn't always that people are fooled by bad information — it's that they stop engaging their judgment entirely, processing AI outputs on autopilot rather than with deliberate evaluation.
Tokens, Hallucinations, and Knowing When to Stop
Dr. Floyd offered a practical governance tip that most users don't know:
- Roughly 3,000 tokens ≈ two pages of text
- In long AI sessions, as the context window fills, error rates increase and the model may begin to hallucinate
- You can ask the model directly: "Are you hallucinating?" — a well-calibrated model will often acknowledge the issue
- At that point, end the session and start a new chat with a fresh context window
Understanding the mechanical limits of AI tools — context windows, token budgets, hallucination triggers — is part of what separates informed AI users from naive ones. Human Signal's Hyperprompt™ protocol was developed specifically to address context control and reduce hallucination risk in extended LLM workflows.
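To make the context-window point concrete, here is a minimal sketch that estimates how many tokens a running conversation has consumed and flags when it is time to open a fresh chat. It assumes the open-source tiktoken tokenizer and a hypothetical 8,000-token window; the limits and threshold are illustrative, so substitute the actual figures for the model you use.

```python
import tiktoken  # OpenAI's tokenizer library; install with `pip install tiktoken`

# Hypothetical limits: substitute the real context window of the model you use.
CONTEXT_WINDOW = 8_000        # tokens the model can hold at once
FRESH_CHAT_THRESHOLD = 0.75   # start a new session once 75% of the window is used

enc = tiktoken.get_encoding("cl100k_base")

def tokens_used(messages: list[str]) -> int:
    """Rough token count for a running conversation (prompts plus responses)."""
    return sum(len(enc.encode(m)) for m in messages)

def should_start_new_chat(messages: list[str]) -> bool:
    used = tokens_used(messages)
    print(f"{used} of {CONTEXT_WINDOW} tokens used "
          f"(~{used / 1500:.1f} 'pages' at the 3,000-tokens-per-two-pages rule of thumb)")
    return used / CONTEXT_WINDOW >= FRESH_CHAT_THRESHOLD

conversation = ["Summarize this 40-page report...", "Here is the summary..."]
if should_start_new_chat(conversation):
    print("Context window is nearly full; errors become more likely. Open a fresh chat.")
```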
Bias in Training Data: Who Was Included?
Dr. Floyd raised a question that should be front of mind for every science, policy, and healthcare student:
"When AI models were trained — who was in the training data? And who wasn't?"
He described a case where a medical device company was preparing to bring to market a product that failed to scan accurately on Black and brown patients — because the training data had been drawn exclusively from Caucasian subjects. The product had funding. It had a go-to-market strategy. And someone eventually had to ask that question before it caused harm.
For students in science, environmental studies, and healthcare: the same principle applies to species classification models, climate models, and diagnostic AI. Underrepresented data in training means underserved populations in outcomes. This is the GASP™ diagnostic in practice — the governance absence is structural, not technical.
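The question "who was in the training data?" can be asked concretely, and early. A minimal sketch, assuming a pandas DataFrame with a hypothetical skin_tone column, tabulates each group's share of the training set and flags anything below a chosen floor; the column name, categories, and threshold are illustrative, not details from the device case described above.

```python
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str, floor: float = 0.05) -> pd.DataFrame:
    """Share of training examples per group, flagging anything under `floor` (5% by default)."""
    shares = df[group_col].value_counts(normalize=True, dropna=False).rename("share")
    report = shares.to_frame()
    report["underrepresented"] = report["share"] < floor
    return report

# Hypothetical training set for a skin-scanning device.
train = pd.DataFrame({
    "skin_tone": ["I", "II", "II", "I", "III", "II", "I", "I", "II", "I"],
    "label":     [0,    1,    0,    1,   0,     1,    0,   1,   0,    1],
})

print(representation_report(train, "skin_tone"))
# Groups missing entirely (e.g., types V-VI) will not even appear in the report,
# which is itself the finding: absent in training means untested, and likely
# underserved, in deployment.
```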
Rapid-Fire Career Advice for the AI Era
Dr. Tuboise Floyd's Advice
- Get books. Read deeply in your subject area. Your brain is the most powerful computer you will ever own — train it on good data.
- Respect institutional knowledge. Talk to senior people in your field. The knowledge that predates AI is often the knowledge AI can't replicate.
- Think in outcomes, not outputs. What is the decision this AI system is informing? What are the second-order consequences? That is the governance question.
- Invest in AI governance and risk certifications, not tool certifications. Tools change. Governance principles transfer across every platform and every industry.
"The most dangerous person in the room is the one who doesn't know they are using AI to make a high-stakes decision." — Dr. Tuboise Floyd
Taiye Lambo's Advice
- Don't go into interviews as anti-AI. Show you understand adoption — and that you know how to do it safely.
- Do as many internships as possible. Real-world exposure tells you what the industry actually values, not just what job postings say.
- Show the balance view: "I can help you leverage AI as a tool, and I know we have to do it safely." That framing signals both technical awareness and mature judgment.
Full Transcript
Lightly edited for readability. Speaker labels and timestamps preserved from original recording.
[00:00] Welcome and Setup
Dr. Jeanetta Floyd: Thank you for coming to this event. It's going to be a bit laid back — this is your opportunity to ask questions of our experts. For everyone online, we have a ton of food you're missing out on, but I wanted to create space for students — many of whom are seniors heading into a new frontier — to ask real questions about AI governance.
Taiye Lambo is with us. He is the founder of HISPI — the Holistic Information Security Practitioner Institute — a think tank focused on information security and governance. We also have Dr. Tuboise Floyd, founder of Human Signal and an AI governance researcher and podcast host.
[00:52] Meet the Experts
Taiye Lambo: Thank you, Dr. Floyd — both Dr. Floyds — for having me. I want to be upfront: I am not an AI expert, and I think most chief information security officers and chief information officers, if they're being honest, would say the same. We're all still figuring this out. What I have focused on for the past three years inside HISPI is AI governance — because governance is a critical, often overlooked component of any AI strategy or program.
Dr. Tuboise Floyd: I'm the other half of the Dr. Floyd team — Dr. Tuboise Floyd. My career spans 15+ years in systems engineering. I've supported the federal government, worked with Tom Frieden at the CDC, and I'm a trained systems theorist and social scientist. The podcast is now called The AI Governance Briefing, and we are trending nationally and internationally — recently hitting the top 100 of all time in leadership and management after just one year.
[05:02] AI Literacy for Careers
Dr. Tuboise Floyd: Employers across every sector now expect employees to critically evaluate AI outputs — not just use them. Frame your AI literacy as risk awareness, not tool proficiency. The goal isn't to become a data scientist. The goal is to ask the right questions.
Taiye Lambo: [Audience poll: 3 students raised hands for threat; 6 for opportunity.] AI is here to stay. The question isn't whether to adopt it — it's how to leverage it while mitigating the risk.
[10:32] AI Literacy Is a Civic and Professional Survival Skill
Dr. Tuboise Floyd: AI literacy is not a STEM skill. It is a civic and professional survival skill.
Student: I've been watching AI-generated videos show up on my grandmother's iPad. Being able to differentiate between what's real and what's not is critically important.
Dr. Tuboise Floyd: I saw a TikTok video of an alligator crashing through a Walmart in Delray Beach, Florida. The video looked completely real. It was only when I noticed the man standing near the child ran in the wrong direction — and someone by the door never moved — that I caught it. That's the quality level we're dealing with.
Student: With short-form content, you're not in analysis mode. You're consuming for a quick hit. The tells are there — like a doctor holding a syringe with his palm in a physically impossible way — but you only catch them if you deliberately slow down and look.
[16:14] The Lawyer Case: When Overtrusting AI Has Consequences
Taiye Lambo: A lawyer used ChatGPT to generate case law citations in a lawsuit involving Meta. The judge accepted those citations. The lawyer's client won the case. Then the judge discovered the case laws were entirely fabricated by ChatGPT. The decision was reversed. The lawyer was disbarred. The lesson: never blindly trust — always verify.
[19:06] The Wikipedia Analogy and Real-World Risk
Dr. Tuboise Floyd: AI tools are like Wikipedia — useful starting points, not authoritative sources. In science and finance, the stakes are not an essay grade. When you're funding a project based on AI-generated analysis, you may be making decisions that affect people's health and lives.
[21:06] Continuous Audits in Clinical Settings
Dr. Tuboise Floyd: AI in clinical settings needs continuous auditing — not just a one-time validation. Human in the loop is not a slogan. It is a structural requirement for when the model fails.
[21:28] The Honest Human in the Loop
Taiye Lambo: I add one word: honest. You need an honest human in the loop. You can have a human in the loop who rubber-stamps a bad output and blames the system when something goes wrong. That's not governance — that's liability deflection. We need people who take accountability for decisions made using AI outputs.
[22:04] Environmental AI and Data Gaps
Dr. Tuboise Floyd: Species classification and climate models carry the same risks as clinical AI when trained on historically undersampled ecosystems. When systems that were tracking environmental data get turned off, AI models continue operating on incomplete baselines. When you enter your career, ask: What data was this model trained on? What's missing?
[23:13] Public Trust and Accountability
Taiye Lambo: Public trust erodes when AI-generated health outcomes are wrong and there is no accountability mechanism. When an AI system produces a harmful result and the answer is simply "the machine made an error" — that is not accountability. That is evasion.
[25:28] Tokens and Hallucinations
Dr. Tuboise Floyd: Approximately 3,000 tokens equals two pages of text. When you're in a long AI session and you start noticing increasing errors, the model may be running low on tokens and beginning to hallucinate. You can ask it directly: "Are you hallucinating?" At that point, stop the session and start a new chat.
[26:51] Bias in Training Data
Dr. Tuboise Floyd: When AI models were trained — who was in the training data? And who wasn't? There was a case where a medical device failed to scan accurately on Black and brown patients because it had been trained exclusively on data from Caucasian subjects. They had funding, momentum, and a go-to-market plan. Someone eventually had to ask that question.
[27:56] Interviewing in the AI Era
Taiye Lambo: Don't go into interviews as anti-AI. Show you're on board with adoption — and that you understand how to do it safely. The balance view: "I can help you leverage AI as a tool, and I know we have to do it safely." We've seen Coca-Cola and Walmart CEOs step down, saying they're making room for a generation that can move faster with AI. Entry-level candidates who are AI-fluent and governance-aware have a genuine competitive advantage.
[30:28] AI Disruption and the Generational Shift
Dr. Tuboise Floyd: You are at the beginning of the AI era, just as my generation was at the beginning of the internet. Respect the institutional knowledge that predates AI. The most dangerous person in the room is the one who doesn't know they are using AI to make a high-stakes decision.
[33:21] High-Stakes AI Blind Spots
Taiye Lambo: We've seen cases where military decisions were made using outdated mapping data. I know Google Maps isn't always current — even I can see that when I look up my own address. The question is whether the humans in the decision chain are applying the same critical scrutiny to AI-assisted decisions they would apply to any other high-stakes judgment.
[36:02] Rapid-Fire Career Advice
Dr. Tuboise Floyd: Get books. Read deeply in your subject area. Your brain is the most powerful computer you will ever own — train it on good data. Respect institutional knowledge. Invest in governance certifications, not tool certifications.
Taiye Lambo: Do as many internships as possible. Stay plugged into the industry. And know that the landscape will change dramatically — go into it with the glass half full, and master the tools before they master you.
[41:03] Closing
Dr. Tuboise Floyd: It has been an honor to come back to the Hilltop. Keep your eye on AI governance training and certification programs. Invest your professional development dollars in governance, risk assessment, and analysis — not tool-specific certifications. That investment will compound over your entire career.
About the Guest: Taiye Lambo
Taiye Lambo is the Founder and Chief AI Officer of the Holistic Information Security Practitioner Institute (HISPI), a think tank focused on AI governance and information security practitioner development. Connect with him on LinkedIn.
About the Host: Dr. Tuboise Floyd
Dr. Tuboise Floyd is the founder of Human Signal, an independent AI governance research and media platform based in Washington, DC. A PhD social scientist and former federal contracting strategist, he reverse-engineers institutional AI failures and designs governance frameworks that survive real humans, real incentives, and real pressure. Connect on LinkedIn.
Build Your AI Governance Competency
TAIMScore™ Assessor Workshop — Learn to assess AI governance maturity using the TAIMScore™ framework. The professional credential for institutional operators who own the outcomes.
→ TAIMScore™ Assessor Workshop → Register Now
Subscribe to The AI Governance Briefing — New episodes every month. No vendor decks. No compliance theater. Just signal.
→ Subscribe to the Podcast → ✦ Underwrite Human Signal