Rogue Agents and Shadow AI

Greetings to all of you wonderful carbon-based lifeforms visiting our little part of the Internets! I know I was slacking on the website engagements last week, but alas, the flu had other plans for me. However, I am doing better and it is time to get back on track. :)

This past week at my day job we had meetings with several different healthcare vendors in the security space. As you can guess, AI was the dominant topic. I had a chance to showcase several different initiatives we are working on, including things that were covered in the latest #RealTalk with Aaron Bregg podcast.

However, it was the Friday meeting where we were able to get deeper into the weeds on what problems we will be trying to solve and what features that vendor was working on. One of the main topics was AI Identity. My team has a pretty good idea of how we are going to tackle this.

That brings me to the purpose and question of today's post.

What happens when an AI agent decides the best way to complete a task is to blackmail you?

Rebecca Bellan from TechCrunch has a nice write-up on why venture capital firms are betting big on artificial intelligence security.

The article and accompanying interview talk about different risks, including one related to identity and access.

“People are building these AI agents that take on the authorizations and capabilities of the people that manage them, and you want to make sure that these agents aren’t going rogue, aren’t deleting files, aren’t doing something wrong,” Rick Caccia, co-founder and CEO of Witness AI, told TechCrunch on Equity.

If you are interested in learning more, head on over and check out the rest of the post, including a link to the interview.