Agentic AI is changing the game for enterprises — but with great AI autonomy comes great responsibility. CISOs and security leaders face pressing security threats as they adopt agentic AI technologies. Ben Kus, Box Chief Technology Officer, puts it bluntly: “The more you have your AI agents do for you, the more you need to be concerned about the security of your AI agents.”
AI agents are taking on critical tasks ranging from generating reports to emailing clients, and that’s opening the door to new vulnerabilities for enterprise companies. In the latest episode of the Box AI Explainer Series podcast, Kus talks with host Meena Ganesh to break down the three most urgent security threats companies must address before scaling agents across the business.
Key takeaways:
- Authorization checks ensure that agents systematically authenticate user permissions to prevent unauthorized actions
- Tool guardrails limit the ability of AI agents to misuse tools that can lead to unintended actions or breaches
- “Human in the loop” workflows help maintain a layer of human supervision for critical decisions flagged by agents
1. Agentic AI security challenge #1: Data exposure
One of the biggest security risks of using agentic AI is the potential for sensitive content leaks. Unlike humans, AI agents lack discretion — they’ll share whatever information they’ve been fed, even if that information was given by accident. As Kus says, “An AI agent doesn’t keep secrets. It’s designed to tell you what you want to know.”
This creates significant challenges for organizations, especially those handling sensitive financial data, customer information, or intellectual property. To prevent data exposure, enterprises must adopt strict security measures, including the principle of least privilege. With secure RAG, organizations control user access to content, manage user access to AI capabilities, and limit the AI’s access to content. In practice, this means the AI platform should systematically check content permissions so that agents can only access data they’re authorized to use.
Without these types of guardrails, CISOs could end up battling catastrophic leaks that compromise customer trust and compliance.
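To make the least-privilege idea concrete, here is a minimal sketch of a permission-aware retrieval step for secure RAG. All class, method, and document names are hypothetical; a real deployment would check permissions against the content platform's own access-control system rather than an in-memory map.

```python
# Illustrative sketch: filter retrieved documents by the requesting user's
# permissions BEFORE they reach the agent's context window, so the agent
# can never leak content the user was not authorized to see.

class PermissionStore:
    """Maps each user to the set of document IDs they may read (assumed API)."""
    def __init__(self, grants):
        self.grants = grants  # e.g. {"alice": {"doc-1", "doc-2"}}

    def can_read(self, user_id, doc_id):
        return doc_id in self.grants.get(user_id, set())

def secure_retrieve(user_id, candidates, permissions):
    """Drop any search hit the requesting user cannot access."""
    return [d for d in candidates if permissions.can_read(user_id, d["doc_id"])]

# Usage: the search layer returns three candidates, but this user is only
# cleared for one of them, so the agent sees only that document.
perms = PermissionStore({"bob": {"doc-2"}})
hits = [{"doc_id": "doc-1"}, {"doc_id": "doc-2"}, {"doc_id": "doc-3"}]
filtered = secure_retrieve("bob", hits, perms)
print(filtered)  # [{'doc_id': 'doc-2'}]
```

The key design choice is that filtering happens at retrieval time, on the user's identity, not the agent's: the agent itself holds no standing permissions to abuse.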
2. Agentic AI security challenge #2: Unintended consequences
Traditionally, when you programmed a computer, the program did exactly what you told it to do. AI is not like that. Much like humans, AI agents can behave inconsistently; as Kus says, “AI agents have an ability to act differently even if you present them with the same information.”
In real-world scenarios, this unpredictability (what Kus calls their “nondeterministic nature”) can result in costly mistakes — from accidentally deleting data to sharing sensitive information externally.
Imagine an AI agent acting as a bank teller. If it’s given access to tools that dispense money without strict limitations, the system could mistakenly allocate funds based on flawed logic. Or an AI agent producing a financial report might misinterpret instructions and email the report to external parties, creating compliance nightmares. Kus warns, “This is something CISOs need to think about — what stops an agent from going rogue?”
The best strategy to avoid this risk is to implement robust tools with guardrails and introduce a layer of human oversight. Having a “human in the loop” ensures that critical actions flagged by the AI agent undergo human review, minimizing errors before they escalate into costly disasters.
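One simple way to implement that oversight layer is an approval gate: low-risk actions run immediately, while anything outside an explicit allow-list is held for a human reviewer. The action names and queue mechanics below are illustrative assumptions, not a specific product's workflow.

```python
# Sketch of a human-in-the-loop gate. Actions on the allow-list execute
# directly; all other actions are queued for human review instead of running.

AUTO_APPROVED = {"read_report", "summarize_document"}

def execute_action(action, args, pending_queue, run):
    """Run low-risk actions; hold everything else for a human decision."""
    if action in AUTO_APPROVED:
        return run(action, args)           # low-risk: execute immediately
    pending_queue.append((action, args))   # critical: defer to a reviewer
    return "pending_human_review"

# Usage: an external email is critical, so it is held rather than sent.
queue = []
result = execute_action("send_external_email", {"to": "client@example.com"},
                        queue, run=lambda a, kw: "done")
print(result)       # pending_human_review
print(len(queue))   # 1
```

Note the default is deny-and-review: any action the policy does not recognize is treated as critical, which keeps newly added tools from bypassing oversight.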
3. Agentic AI security challenge #3: Agent manipulation
The rise of adversarial tactics aimed at agentic AI introduces a third (and deeply concerning) security challenge. Kus explains, “Attackers can use very sophisticated techniques to trick agents,” leveraging methods like data poisoning and prompt injection. These techniques aim to manipulate AI agents by feeding them deliberately misleading inputs, causing them to make harmful or incorrect decisions.
Phishing is one example. Just as human employees can fall victim to false emails, AI agents are vulnerable to adversarial strategies that exploit their innate logic systems. Attackers attempting prompt injection can also manipulate AI outputs, leading to unexpected results. Kus remarks, “With most AI models, people have demonstrated the ability to give an input that would then make it give unexpected responses.”
For this reason, security teams need to actively monitor agent interactions and filter harmful data inputs. Kus underscores the importance of authorization measures, stating, “Make sure that your agents and your platform are properly taking into account the authorization and the permissions associated with each user.”
Without careful regulation of permissions, companies risk attackers exploiting tools that allow agents to access or modify data on a grand scale.
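The permission check Kus describes can be sketched as a scope check on every tool call: the agent's tool invocations are authorized against the end user's permissions, not the agent's, so a manipulated prompt cannot grant access the user never had. The role, scope, and tool names here are hypothetical.

```python
# Illustrative sketch: map each tool to the scope it requires, map each user
# role to the scopes it holds, and deny any tool call that is not covered.

USER_SCOPES = {
    "analyst": {"read_data"},
    "admin": {"read_data", "modify_data"},
}
TOOL_REQUIRED_SCOPE = {
    "query_records": "read_data",
    "delete_records": "modify_data",
}

def authorize_tool_call(user_role, tool_name):
    """Allow a tool call only if the user's role carries the required scope."""
    needed = TOOL_REQUIRED_SCOPE.get(tool_name)
    if needed is None:
        return False  # unknown tools are denied by default
    return needed in USER_SCOPES.get(user_role, set())

print(authorize_tool_call("analyst", "query_records"))   # True
print(authorize_tool_call("analyst", "delete_records"))  # False
```

Because the check runs outside the model, a prompt-injection attack can change what the agent *asks* to do, but not what it is *allowed* to do.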
Best practices for agentic security
While agentic AI is undeniably changing the game for enterprises, securing AI agents isn’t just about applying traditional security guardrails. Instead, companies must consider new frameworks that actively address agentic behavior, human oversight, and unique vulnerabilities. As enterprises adopt agentic AI technologies, the role of CISOs and security teams has dramatically shifted.
Kus outlines three best practices to follow:
- Implement secure RAG by ensuring agents respect user permissions and authorization when accessing data
- Carefully evaluate which tools to provide agents, removing capabilities that could cause harm if misused
- Incorporate human-in-the-loop approval for critical actions, allowing agents to recommend but requiring human authorization for execution
As Kus says, “With agentic AI, autonomy is both a feature and a liability.” Organizations must tread carefully, ensuring they scale agents only once robust security frameworks are in place.
Ready to dive deeper into this discussion? Don’t miss Agentic security: How to safeguard autonomous AI agents. Subscribe now to stay informed and inspired in the AI-first era, and start listening today for practical, actionable strategies for integrating AI into your organization or industry.

