Greetings to all of you #RTWABers out on the Internets! I have been meaning to share this great article from McKinsey on deploying agentic AI safely and securely. I only recently stumbled across their content, as I am not a big fan of the big 'consulting' companies. However, after reading some of what they have shared, I may have to 'warm up' to this particular company. :)
Alright, let's jump into the article. They speak to how business leaders are rushing to embrace agentic AI. While that may sound like 'fluff' to some people, from what I am experiencing this is a true statement. For example, a few weeks ago I was fortunate to run into an executive-level leader, and they were asking very pointed questions around specific use cases like customer service, auditing, etc.
The one part they speak to that I do have 'pause' on is the projected 'value add':
"A growing number of organizations are now exploring or deploying agentic AI systems, which are projected to help unlock $2.6 trillion to $4.4 trillion annually in value across more than 60 gen AI use cases, including customer service, software development, supply chain optimization, and compliance."
That number seems a little high to me unless they are accounting for that value over a longer span of time. [Ed. - here is the link to the article containing their estimates.]
Let's move on to what I think is the most important part of the article: emerging risks. They have identified five emerging risks in the agentic era.
- Chained vulnerabilities. A flaw in one agent cascades across tasks to other agents, amplifying the risks.
- Cross-agent task escalation. Malicious agents exploit trust mechanisms to gain unauthorized privileges.
- Synthetic-identity risk. Adversaries forge or impersonate agent identities to bypass trust mechanisms.
- Untraceable data leakage. Autonomous agents exchanging data without oversight obscure leaks and evade audits.
- Data corruption propagation. Low-quality data silently affects decisions across agents.
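To make the synthetic-identity risk above a little more concrete, here is a minimal sketch of one common mitigation: having each agent sign its messages so peers can reject forged identities. The agent names, the shared-secret scheme, and the function names are all my own illustration, not something from the McKinsey article; a real deployment would use per-agent keys from a secrets vault rather than a hardcoded secret.

```python
import hmac
import hashlib

# Illustrative shared secret -- in practice, issue per-agent keys from a
# vault and rotate them regularly.
SECRET = b"demo-secret-rotate-me"

def sign_message(agent_id: str, payload: str) -> str:
    """Return an HMAC-SHA256 tag binding the claimed agent id to the payload."""
    msg = f"{agent_id}:{payload}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_message(agent_id: str, payload: str, tag: str) -> bool:
    """Constant-time check that the tag matches the claimed identity."""
    expected = sign_message(agent_id, payload)
    return hmac.compare_digest(expected, tag)

# A peer agent accepts the genuine sender and rejects an impersonator.
tag = sign_message("billing-agent", "refund order 1234")
print(verify_message("billing-agent", "refund order 1234", tag))  # True
print(verify_message("rogue-agent", "refund order 1234", tag))    # False
```

The point of the sketch is simply that identity between agents has to be cryptographically verifiable, not just asserted; otherwise the trust mechanisms the article mentions are exactly what an adversary exploits.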
While I can't go into specific details, I can confirm that some of these items are either things I am directly seeing in the 'real world' or risks I consider higher probability and am actively planning for.
That brings me to the next section of the article that I want to speak to: guiding principles for agentic AI security. They highlight three areas to consider before you do an agentic deployment.
- Does our AI policy framework address agentic systems and their unique risks?
- Is our risk management program equipped to handle agentic AI risks?
- Do we have robust governance for managing AI across its full life cycle?
Let's dive deeper into the first item. With 2026 only a few weeks away, I would hope your company already has AI policies in place. The goal for this step is to update your current policies, standards, or guidelines to take agentic AI workflows into account.
Next up, let's look at risk management. Interestingly enough, this was one of the talking points at one of the Cloud Con Tech Talks. It is very important for a company's risk program to understand that it may have to revisit certain projects because they involve artificial intelligence.
The last one ties a little into the previous one. AI observability is, I think, the #1 thing that companies need to understand and plan for in 2026 and beyond. There are great resources out there that speak to this concept in more detail.
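To show what I mean by observability at the agent level, here is a minimal sketch of a structured audit trail, assuming a simple in-memory store; real deployments would ship these records to a tamper-evident log. All the class, agent, and field names are my own illustration, not from the article. Note how this also speaks to the 'untraceable data leakage' risk above: every cross-agent data exchange leaves a record you can audit.

```python
import time

class AuditTrail:
    """Records every agent action as a structured, timestamped event."""

    def __init__(self):
        self.records = []

    def log(self, agent_id: str, action: str, detail: dict) -> dict:
        """Append one record of an agent action and return it."""
        record = {
            "ts": time.time(),
            "agent": agent_id,
            "action": action,
            "detail": detail,
        }
        self.records.append(record)
        return record

    def by_agent(self, agent_id: str) -> list:
        """Reconstruct one agent's activity for an audit or incident review."""
        return [r for r in self.records if r["agent"] == agent_id]

# Example: a data exchange between two agents is logged on both sides,
# so a leak investigation can trace which fields moved where.
trail = AuditTrail()
trail.log("support-agent", "read_customer_record", {"customer_id": "c-42"})
trail.log("support-agent", "sent_data",
          {"to": "analytics-agent", "fields": ["email"]})
trail.log("analytics-agent", "received_data", {"from": "support-agent"})
print(len(trail.by_agent("support-agent")))  # 2
```

The design choice worth noting is that the trail records the *exchange itself* (sender, receiver, fields), not just that an agent "did something" - that is what makes leaks traceable rather than invisible.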
There are several other great parts of this article that I am not going to go into detail on right now. I highly recommend taking the time to read the whole thing.
Here is the link to the full article.