What It Is
Joint guidance from CISA and eight other international cybersecurity agencies that provides a framework for safely integrating AI into the operational technology (OT) environments controlling critical infrastructure.
Why It Matters for Healthcare
Healthcare OT systems (biomedical devices, HVAC, physical security, power systems, lab equipment, medication dispensing) could benefit from AI, but integration creates significant patient-safety and security risks that must be carefully managed.
Four Core Principles
- Understand AI - Know AI's unique risks, follow a secure development lifecycle, and educate staff
- Consider AI Use in OT - Evaluate business case, manage data risks, understand vendor roles
- Establish Governance - Create frameworks, integrate with existing security, test thoroughly
- Embed Oversight & Failsafes - Monitor continuously, maintain human-in-the-loop, plan for failures
Critical Warnings
🚨 LLMs should "almost certainly" NOT be used to make safety decisions for OT environments - the reliability and hallucination risks are too high when physical safety is at stake.
🚨 Human-in-the-loop is mandatory - AI should assist, not autonomously control, critical systems. Operators must maintain manual skills and be able to intervene (see the approval-gate sketch after these warnings).
🚨 Vendor transparency is non-negotiable - Demand SBOMs, explicit data usage policies, ability to disable AI features, and on-premises operation options.
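To make the human-in-the-loop requirement concrete, here is a minimal sketch of an approval gate in Python. Everything in it (the `Recommendation` fields, `execute_ot_action`, the prompt wording) is a hypothetical illustration, not an interface from the guidance: the model output is treated as a recommendation only, and any error or non-approval fails safe to "no action".

```python
# Minimal human-in-the-loop gate: an AI recommendation for an OT action is
# never executed directly; an operator must explicitly approve it, and any
# error or non-approval fails safe (no action). All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # e.g. "reduce_hvac_setpoint"
    rationale: str     # model's explanation, logged for auditability
    confidence: float  # model-reported confidence, 0.0-1.0

def execute_ot_action(action: str) -> None:
    # Placeholder for the real control-system call.
    print(f"Executing approved action: {action}")

def human_in_the_loop_gate(rec: Recommendation) -> bool:
    """Return True only if a human operator explicitly approves."""
    print(f"AI recommends: {rec.action}")
    print(f"Rationale: {rec.rationale} (confidence {rec.confidence:.0%})")
    try:
        answer = input("Operator approval required [yes/NO]: ").strip().lower()
    except (EOFError, KeyboardInterrupt):
        answer = ""  # fail safe: treat any interruption as a rejection
    if answer == "yes":
        execute_ot_action(rec.action)
        return True
    print("Recommendation rejected; no OT action taken.")
    return False
```

The deliberate design choice is the default: anything other than an explicit "yes" leaves the system in its manual, known-safe state, which is the failsafe behavior the guidance calls for.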
Top 10 Risks Identified
- Cybersecurity vulnerabilities (prompt injection, model poisoning, token compromise)
- Data quality issues (poor training data degrades safety)
- AI model drift (accuracy degrades over time; see the drift-check sketch after this list)
- Lack of explainability (can't audit decisions)
- Operator cognitive overload (alarm fatigue)
- Regulatory compliance challenges (standards still emerging)
- AI dependency & skill erosion (operators lose manual capabilities)
- Interoperability issues (integration complexity)
- System complexity (new attack surfaces)
- Reliability concerns (hallucinations, incorrect outputs)
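Several of these risks are operationally checkable. As referenced in the model-drift item above, here is a minimal drift check, assuming you log a numeric model output (e.g. an anomaly score) over time. The Population Stability Index (PSI) formula is standard, but the 0.2 threshold, window sizes, and data below are illustrative assumptions.

```python
# Minimal drift check: compare a recent window of model outputs against a
# validation-time baseline using the Population Stability Index (PSI).
# PSI above ~0.2 is a common rule of thumb for significant drift.
import math
import random

def psi(baseline: list[float], recent: list[float], bins: int = 10) -> float:
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range values

    def frac(data: list[float], i: int) -> float:
        count = sum(1 for x in data if edges[i] <= x < edges[i + 1])
        return max(count / len(data), 1e-6)  # avoid log(0)

    return sum(
        (frac(recent, i) - frac(baseline, i))
        * math.log(frac(recent, i) / frac(baseline, i))
        for i in range(bins)
    )

# Illustrative check: baseline scores vs. a shifted recent window.
random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(1000)]
recent = [random.gauss(0.5, 1.2) for _ in range(200)]
score = psi(baseline, recent)
print(f"PSI = {score:.3f} -> "
      f"{'DRIFT: escalate for review' if score > 0.2 else 'stable'}")
```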
Healthcare-Specific Takeaways
What This Means for You:
- AI integration in healthcare OT requires significantly more rigor than IT AI deployments
- Patient safety trumps efficiency gains - if you can't guarantee safe operation during AI failure, don't deploy
- Vendor evaluation criteria must expand - technical capability isn't enough; demand security transparency (see the SBOM-review sketch after this list)
- Existing risk frameworks must evolve - embed AI-specific assessments into your current OT risk management
- Staff training is critical - engineers and operators need AI literacy without losing manual competencies
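As referenced in the vendor-evaluation item above, here is a minimal sketch of an SBOM review, assuming the vendor delivers a CycloneDX-format JSON SBOM. The filename and the "required fields" policy below are assumptions for illustration, not a standard.

```python
# Minimal vendor-transparency check: flag SBOM components that lack a
# version or supplier, since those gaps block vulnerability matching.
import json

REQUIRED_FIELDS = ("name", "version", "supplier")  # our policy, not a standard

def review_sbom(path: str) -> list[str]:
    with open(path, encoding="utf-8") as f:
        sbom = json.load(f)
    findings = []
    if sbom.get("bomFormat") != "CycloneDX":
        findings.append("Not a recognized CycloneDX SBOM")
    for comp in sbom.get("components", []):
        missing = [fld for fld in REQUIRED_FIELDS if not comp.get(fld)]
        if missing:
            name = comp.get("name", "<unnamed>")
            findings.append(f"{name}: missing {', '.join(missing)}")
    return findings

if __name__ == "__main__":
    for finding in review_sbom("vendor_device_sbom.json"):  # hypothetical file
        print("FLAG:", finding)
```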
Decision Framework: Before considering AI in any OT system, ask:
- Can this fail safely without causing patient harm?
- Do we have test infrastructure to validate it thoroughly?
- Does the vendor provide full transparency (SBOM, data usage, security documentation)?
- Can operators maintain manual skills and override AI decisions?
- Does our incident response plan account for AI-specific failures?
If you can't answer "yes" to all five, the system is not ready for production deployment.
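Here is a minimal sketch that encodes those five questions as a pre-deployment gate; the field names are illustrative and should map to your own risk-assessment records. The point is structural: a single unresolved "no" blocks production.

```python
# The five decision-framework questions as a simple deployment gate:
# all must be True or production deployment is blocked.
from dataclasses import dataclass

@dataclass
class AiOtReadiness:
    fails_safely_without_patient_harm: bool
    validated_in_test_infrastructure: bool
    vendor_fully_transparent: bool        # SBOM, data usage, security docs
    operators_can_override_and_stay_skilled: bool
    incident_response_covers_ai_failures: bool

    def ready_for_production(self) -> bool:
        blockers = [name for name, ok in vars(self).items() if not ok]
        if blockers:
            print("NOT READY - unresolved:", ", ".join(blockers))
            return False
        return True

# Example: one unanswered question blocks deployment.
assessment = AiOtReadiness(True, True, False, True, True)
print("Deploy:", assessment.ready_for_production())
```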
Bottom Line
This guidance validates what we discussed earlier about new AI security startups: healthcare organizations cannot afford to be early adopters of immature AI security technologies in OT environments. The stakes are too high, the standards are still evolving, and patient safety is non-negotiable.
The document provides an excellent framework that directly supports your conservative, risk-based approach to AI security vendor evaluation.
To read the full report, follow the link on the CISA landing page.