HISPI Affiliate Partner · Human Signal™
Are you ready to defend it? The TAIMScore™ Assessor Workshop is a one-day, instructor-led virtual training that gives practitioners a repeatable, audit-ready framework for AI risk management.
Virtual · Instructor-Led · One Day · 6 CPEs
About
TAIMScore™ is the Trusted AI Model (TAIM) framework: a maturity assessment and risk management training platform that provides guardrails for Safe, Secure, Responsible, and Trustworthy AI use cases. The community platform aligns with global standards including the NIST AI RMF, ISO/IEC 42001, SOC 2, and the EU AI Act.
The tool delivers transparent risk scores, audit-ready reports, and actionable insights to help organizations identify, quantify, monitor, and proactively reduce their AI Incident Probability.
With 10% of cybersecurity job listings now specifically referencing AI skills as a requirement, mastering AI governance is no longer optional.
Workshop
A repeatable AI risk assessment methodology you can use immediately
Fluency in NIST AI RMF, ISO/IEC 42001, SOC 2, and the EU AI Act
Hands-on scoring experience using the TAIMScore™ platform
6 CPEs + TAIMScore™ Assessor Certificate
Framework
01
The workshop distills the complex ideas and processes behind the TAIMScore™ tool into simple steps you can start using right away.
02
The training focuses on measurable controls and accountability rather than theoretical AI discussions.
03
Hands-on exercises designed to encourage meaningful engagement across disciplines. Participants leave with a shared vocabulary and common expectations.
04
Advance strategic understanding, enhance cross-functional collaboration, and discover a concrete path for implementing responsible AI practices across the enterprise.
Reviews
I loved how interactive the workshop was and how enthusiastic the speaker was about the training, especially when each of us worked in breakout rooms.
Joanne L.
The workshop provided a structured and practical framework for AI governance. The TAIMScore™ methodology was clearly articulated and supported by real-world examples.
Anton K.
The TAIM Workshop was an exceptional and highly valuable experience. The session was thoughtfully structured and grounded in practical application.
Ralph J.
Upcoming Sessions
3rd Friday of every month · Virtual · 10am ET · Instructor-Led · 6 CPEs
No upcoming sessions scheduled.
Check back soon.
Workshop Investment
Secure checkout via PayPal · No hidden fees · 6 CPEs + Certificate included
TAIMScore™ In Action
Human Signal applies the TAIMScore™ framework to real AI failures on the podcast. These Failure Files show exactly what you'll practice in the workshop.
Tap any card to read the full breakdown
Accountability & Training
Microsoft Tay · AIID #6 · Underwritten by HISPI Project Cerebellum
Microsoft spent an estimated $0 on adversarial input controls before releasing Tay in March 2016. Within 16 hours, Tay published racist propaganda and called for genocide. The root cause was a GOVERN 2.2 failure: no accountability structure for what happens when your AI learns from the internet without guardrails.
Tay fails four TAIM domains simultaneously: no adversarial testing protocol, no real-time kill-switch SLA, no viable non-AI alternative planned, and no defined deactivation authority.
The price tag wasn't a line item. It was five years of Microsoft rebuilding trust in its AI products. If your org is shipping a public-facing AI system and nobody has asked, "What does our AI do when someone tries to break it?", you are six months from your own Tay moment.
Join the Next Session →
Third-Party AI Risk & Supply Chain
Mata v. Avianca · AIID #541 · Underwritten by HISPI Project Cerebellum
Steven Schwartz had 30 years of legal experience. ChatGPT told him six fabricated court cases were real and verifiable. The judge fined him $5,000. His firm was sanctioned. The case was dismissed. His name is now a global cautionary tale.
This is a GOVERN 6.1 failure: zero supply chain controls on AI tooling. ChatGPT was functioning as a third-party vendor with no SLA, no verification protocol, and no accountability structure. MAP 4.1 would have caught it earlier — mapping the legal risks of every AI component before it touches a court filing.
The pattern is identical in federal procurement, healthcare documentation, and intelligence workflows. If your teams use generative AI in any document-intensive workflow without a human verification checkpoint, you are one hallucinated citation away from your own Mata v. Avianca.
Join the Next Session →
Measurement & Incident Tracking
Australian Research Council · AIID #559 · Underwritten by HISPI Project Cerebellum
Researchers applying for Australian government grants worth up to $500,000 found two words at the bottom of their peer review: "Regenerate Response." A reviewer had used ChatGPT to evaluate confidential research — and forgotten to remove the interface artifact.
This is a MEASURE 4.3 failure. No feedback mechanism existed for identifying AI-contaminated assessments. No field data was being collected on how reviewers actually completed evaluations. MANAGE 4.3 compounds it — when discovered, the ARC had no incident response process. The response was reactive, public, and incomplete.
If your institution uses human evaluators in any high-stakes process — procurement review, grant assessment, performance evaluation — and has no mechanism to detect AI-generated output, this incident is your gap analysis.
Join the Next Session →
Post-Deployment Monitoring
DeepNude / Telegram · AIID #530 · Underwritten by HISPI Project Cerebellum
By July 2020, a Telegram deepfake bot had been used to non-consensually generate explicit images of at least 100,000 women and girls — the majority unaware. The original DeepNude app was taken down within 24 hours of 2019 press coverage. The technology migrated to Telegram and operated for over a year before researchers published findings.
This is a MANAGE 4.1 failure at scale. No post-deployment monitoring. No abuse reporting integration. No decommissioning procedure for an AI capability that had escaped its original deployment context. The platform didn't know what it was hosting.
If your organization deploys any generative capability — image, voice, text, or video — without a post-deployment monitoring plan, this incident is your threat model.
Join the Next Session →
Privacy Risk & Socio-Technical Design
OpenAI Class Action · AIID #561 · Underwritten by HISPI Project Cerebellum
In June 2023, a 157-page class action complaint alleged that ChatGPT was trained on private data without consent, including children's data, copyrighted work, and PII. The FTC opened an investigation. The regulatory fallout is still accumulating.
Every organization that deployed ChatGPT in a regulated environment without asking "What data was this model trained on?" inherited this risk on sign-up. That question is MAP 1.6. MEASURE 2.10 is where most organizations fail: privacy risk exists but is never formally scored.
In healthcare, federal procurement, and financial services, downstream exposure from deploying a model with contested training data provenance is active regulatory territory — under HIPAA, TRAIGA, the Colorado AI Act, and the EU AI Act simultaneously.
Join the Next Session →
Workforce Security & External AI Threats
FBI Sextortion PSA · AIID #551 · Underwritten by HISPI Project Cerebellum
In June 2023, the FBI warned that deepfakes were being weaponized for sextortion, targeting victims using photos scraped from public social media. Sextortion cases involving AI-generated imagery increased 322% in a single year.
The institutional exposure: a federal contractor whose employee becomes a target faces potential coercion, credential exposure, and operational security compromise. None of this appears on a standard AI risk register. This is a MANAGE 2.4 failure: no mechanisms existed to respond to AI systems weaponized against the workforce from outside the perimeter.
If your AI governance program only covers AI you deploy — and not AI deployed against you — your threat model is incomplete.
Join the Next Session →
Trustworthy AI Evaluation & Human Subjects
Myanmar Safe City · Underwritten by HISPI Project Cerebellum
Myanmar activated 335 Huawei AI surveillance cameras in December 2020 — a $1.2M Safe City initiative. Six weeks later, the military executed a coup. Those cameras were now operated by a junta that had suspended citizens' right to be free from warrantless surveillance.
This is a MEASURE 2.2 failure. The system was never evaluated for trustworthy characteristics in the context of its actual deployment population. Human subjects protection requirements were not applied. The question TAIM forces — "Have we evaluated impact on those with no power to refuse?" — was never asked.
For US federal agencies and defense contractors procuring surveillance-capable AI: the EU AI Act classifies real-time biometric surveillance as high-risk. TRAIGA and the Colorado AI Act have active requirements. The technology your agency procures today may already be non-compliant.
Join the Next Session →
AI Validity, Reliability & Generalization Limits
Opaque AI / Child Welfare · Underwritten by HISPI Project Cerebellum
The Hackneys took their lethargic infant to the ER — the correct decision. Their screening data was fed into an opaque AI risk-scoring tool. The tool flagged them for parental negligence during a national formula shortage. Their child was taken.
This is a MEASURE 2.5 failure. The AI had never been validated outside the narrow conditions under which it was developed. Its generalizability limits were undocumented. When a real-world edge case appeared (a supply chain disruption affecting feeding patterns), the system had no mechanism to flag uncertainty or defer to human judgment.
This pattern is active in federal welfare systems, veteran services, and disability determination right now. The AI scores. The human follows. The institution absorbs the harm. If your org uses AI scoring in decisions affecting someone's family or benefits, and cannot explain model performance at its edges, this incident is your risk exposure.
Join the Next Session →
Bias, Fairness & Contextual Deployment
Wrongful Arrests · AIID #74 & #896 · Underwritten by HISPI Project Cerebellum
Robert Williams. Michael Oliver. Nijeer Parks. Three Black men. Three wrongful arrests. Facial recognition technology never validated for the population it identified. Detroit Police acknowledged their system would yield misidentifications 96% of the time when used in isolation.
Detroit settled with Robert Williams for $300,000. MAP 1.2 failure: no demographic performance analysis was documented for the deployment context. MEASURE 2.11 failure: fairness and bias were never evaluated before deployment — they were evaluated after arrests made national news.
For federal law enforcement and DHS components: TRAIGA, the Colorado AI Act, and the EU AI Act all have active requirements in this area. The biometric identification technology your agency uses today may already be non-compliant.
Join the Next Session →
Novel Risk Response & Unknown Unknowns
AI Pentagon Image · AIID #543 · Underwritten by HISPI Project Cerebellum
On May 22, 2023, a verified Twitter account posted an AI-generated image of black smoke near the Pentagon. The S&P 500 dipped. The Dow dropped. Analysts estimate ~$600M in market cap was temporarily erased before the DoD confirmed no incident had occurred. Total exposure time: under 90 seconds.
This is a MANAGE 2.3 failure: no procedures existed to respond to a previously unknown risk — AI-generated disinformation capable of moving financial markets. The response was reactive and slower than the algorithm that spread the image.
For financial regulators, Treasury components, and any federal operator with market exposure: synthetic media targeting high-volatility information categories — military incidents, regulatory announcements, leadership changes — is an active operational risk. If your AI risk register only covers AI you deploy, your incident response framework has a gap.
Join the Next Session →
Model Change Management
AI Seinfeld / Twitch · AIID #462 · Underwritten by HISPI Project Cerebellum
Nothing, Forever was a 24/7 AI Seinfeld parody with tens of thousands of viewers. The team switched from GPT-3 Davinci to Curie during a technical outage, not realizing Curie lacked the same content moderation. Within minutes, the AI delivered transphobic content live. Twitch banned the channel for 14 days.
The failure mode was not the content. It was a model substitution made under operational pressure, without impact assessment, in a live environment. That pattern — "we just swapped the model, it's basically the same" — is active in government AI deployments, healthcare clinical decision support, and financial risk engines right now.
MAP 5.1: impact of model change never assessed. MEASURE 2.9: model never documented in deployment context, so nobody knew what the new output profile looked like until it was live. If your org has substituted an AI model in production without a formal impact assessment, this incident is your audit finding waiting to happen.
Join the Next Session →
Feedback Systems & Context-Appropriate AI Use
Vanderbilt / ChatGPT · Underwritten by HISPI Project Cerebellum
Vanderbilt's Peabody College sent students a condolence email following the Michigan State mass shooting. At the bottom: "Paraphrase from OpenAI's ChatGPT."
The direct cost: $0. The institutional cost: the erosion of student trust, a public apology, and the permanent association of Vanderbilt's name with AI misuse in one of the most human moments an institution can face.
This is a MEASURE 3.3 failure. No feedback mechanism existed to flag high-stakes communication contexts where AI output should be reviewed, escalated, or prohibited. The governance layer that asks "Is this a context where a human being must own the words?" did not exist. TRAIGA's disclosure requirements and EU AI Act transparency obligations both apply here.
Join the Next Session →
Free Study Tool
All 72 controls from the Trusted AI Model — GOVERN, MAP, MEASURE, and MANAGE. Study before the workshop. Review after. Tap any card to flip.
Ready to go deeper?
Put the framework into practice.
The Assessor Workshop is where you score real incidents, work in breakout rooms, and earn 6 CPEs.
Secure Your Spot →