AI Security Risks Are Also Cultural and Developmental

Security teams spend much of their time tracking vulnerabilities, abuse patterns, and system failures. A new study from an international group of scholars—including Ludwig Maximilian University of Munich, the Technical University of Munich, and the African Union—argues that many AI risks sit deeper than technical flaws. Cultural assumptions, uneven development, and data gaps shape how AI systems behave, where they fail, and who absorbs the harm. The research examines AI through international human rights law, with direct relevance to security leaders responsible for AI deployment across regions and populations.

The study finds that AI systems embed cultural and developmental assumptions at every stage of their lifecycle. Training data reflects dominant languages, economic conditions, social norms, and historical records. Language models perform best in widely represented languages and lose reliability in under-resourced ones. Vision and decision systems trained in industrialized environments misread behavior in regions with different traffic patterns, social customs, or public infrastructure. From a cybersecurity perspective, these weaknesses resemble systemic vulnerabilities that widen the attack surface by producing predictable failure modes across regions and user groups.
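To make the "predictable failure mode" point concrete, here is a minimal sketch of the kind of per-language evaluation a team could run against its own deployed model. The `classify` function is a stand-in for any production model call, and the labelled samples (including the Swahili phishing example) are illustrative, not data from the study.

```python
# Minimal sketch: measure a classifier's accuracy per language to surface
# uneven coverage. `classify` is a placeholder for a deployed model; the
# sample records are illustrative, not real benchmark data.
from collections import defaultdict

def classify(text: str) -> str:
    """Placeholder for a production model call (e.g. a hosted inference API)."""
    # Toy heuristic so the sketch runs end to end; it only "knows" English.
    return "phishing" if "password" in text.lower() else "benign"

# Hypothetical labelled samples tagged with their language.
samples = [
    {"lang": "en", "text": "Please confirm your password here",         "label": "phishing"},
    {"lang": "en", "text": "Team lunch moved to Friday",                "label": "benign"},
    {"lang": "sw", "text": "Tafadhali thibitisha nenosiri lako hapa",   "label": "phishing"},
    {"lang": "sw", "text": "Mkutano umeahirishwa hadi kesho",           "label": "benign"},
]

hits, totals = defaultdict(int), defaultdict(int)
for s in samples:
    totals[s["lang"]] += 1
    if classify(s["text"]) == s["label"]:
        hits[s["lang"]] += 1

for lang in sorted(totals):
    print(f"{lang}: accuracy {hits[lang] / totals[lang]:.2f} over {totals[lang]} samples")
```

Run against a realistic sample, a gap like the one this toy produces (perfect English accuracy, degraded Swahili accuracy) is exactly the kind of predictable, region-correlated failure mode the study describes as a widened attack surface.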

AI systems increasingly shape cultural expression, religious understanding, and historical narratives through generative tools that summarize belief systems and reproduce cultural symbols at scale. Errors in these representations influence trust and behavior: communities misrepresented by AI outputs may disengage from digital systems or challenge their legitimacy. In political or conflict settings, distorted cultural narratives contribute to disinformation, polarization, and identity-based targeting. Security teams working on information integrity and influence operations encounter these risks directly; the study positions cultural misrepresentation not as an abstract ethics issue but as a structural condition that adversaries can exploit.

AI infrastructure relies on compute access, stable power, data availability, and skilled labor—resources that remain unevenly distributed worldwide. Systems designed with assumptions of reliable connectivity or standardized data pipelines fail in regions where those conditions do not hold. Healthcare, education, and public service applications show measurable performance drops when deployed outside their original development context. Security teams relying on AI-driven detection inherit these blind spots, as threat signals expressed through local idioms, cultural references, or non-dominant languages receive weaker model responses. The research frames these epistemic limits as structural constraints that shape incident response quality across regions.
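One practical response to such inherited blind spots is to stop trusting automated verdicts outside the model's evaluated coverage. The sketch below shows that idea in miniature; the `EVALUATED_LANGUAGES` set, the `triage` routing, and the threshold are assumptions for illustration, not part of the study or any specific product.

```python
# Minimal sketch: route alerts whose source language falls outside the
# detection model's evaluated coverage to human review instead of relying
# on a likely-degraded automated verdict. Names and thresholds are assumptions.
from dataclasses import dataclass

# Languages for which the deployed model has been validated (illustrative).
EVALUATED_LANGUAGES = {"en", "de", "fr"}

@dataclass
class Alert:
    text: str
    language: str       # from an upstream language-identification step
    model_score: float  # model's threat score in [0, 1]

def triage(alert: Alert, threshold: float = 0.8) -> str:
    """Return a routing decision for an incoming alert."""
    if alert.language not in EVALUATED_LANGUAGES:
        # Model reliability outside its evaluated languages is unknown,
        # so escalate rather than auto-close or auto-block.
        return "manual_review"
    return "auto_block" if alert.model_score >= threshold else "auto_close"

print(triage(Alert("Confirm your credentials", "en", 0.93)))       # auto_block
print(triage(Alert("Thibitisha kitambulisho chako", "sw", 0.41)))  # manual_review
```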

Credit to Anamarija Pogorelec from Help Net Security for the insight!