How AI agents will power the next era of content security

Manoj Asnani

Security teams are drowning. A mid-size enterprise now faces 1.3 million security alerts annually — a number that’s grown exponentially as AI agents multiply content creation and access patterns. 

Meanwhile, AI has created an entirely new attack surface: employees unknowingly accessing sensitive data through AI chatbots, agents compromised by malicious prompts, and ransomware attacks that cost organizations over $1 billion last year alone.

The old playbook — bolting security tools onto content platforms — can’t keep pace. When a patent application and a product datasheet look identical to traditional classification tools, and when every alert requires hours of manual investigation, security teams need a fundamental rethink.

We sat down with Manoj Asnani, Vice President of Product Management of Security & Compliance at Box, to discuss how AI agents are transforming from security risk to security solution with the newly announced Box Shield Pro.

Key takeaways:

  1. Box Shield Pro automates threat triage so security teams review far fewer alerts and respond faster
  2. Agentic AI enables nuanced content classification that identifies sensitive documents traditional tools miss
  3. AI agents increase attack surfaces by increasing the odds of accidental data exposure via chatbots and automated access
  4. Native content-platform protection leverages context and outperforms bolt-on solutions
  5. Box Shield Pro detects ransomware and anomalous collaboration patterns to sever threats before they impact critical repositories

Q: Why is threat analysis becoming impossible for security teams to manage manually?

On average, a mid-size security operations team deals with 1.3 million alerts a year. That’s a massive number. 

How do they scale? They typically have some automation to identify false positives or noncritical alerts. But they still have to manually review many alerts that seem worthy of their time. They have to go to different systems to pull information, understand the threat’s history, identify threat actors, and more.

With Box Shield Pro, which we announced at BoxWorks, we’re creating a Threat Analysis agent that serves as the first reviewer on all alerts. When an alert triggers, the agent determines whether it requires manual intervention and creates a concrete, concise summary with clear delineation of what the threat was, which user was impacted, and what actions were carried out. These summarized alerts are designed to be easy to digest, with specific instructions that enable security teams to respond quickly and effectively.
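The first-pass triage flow described above can be pictured in a few lines of code. This is an illustrative sketch, not Box’s implementation; the alert fields, severity scale, and summary shape are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    severity: str                  # "low" | "medium" | "high" (hypothetical scale)
    user: str                      # impacted user
    action: str                    # what the actor did
    known_false_positive: bool = False

def triage(alerts):
    """Act as the first reviewer on all alerts: auto-close noise,
    and emit a concise summary for everything else."""
    summaries = []
    for alert in alerts:
        if alert.known_false_positive or alert.severity == "low":
            continue               # noncritical alerts never reach a human
        summaries.append({
            "alert_id": alert.id,
            "impacted_user": alert.user,
            "action_taken": alert.action,
            "needs_manual_review": alert.severity == "high",
        })
    return summaries
```

The payoff is the ratio: if most of those 1.3 million annual alerts are noise or false positives, a first-pass reviewer like this shrinks the queue that humans actually see by orders of magnitude.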

Q: How can AI understand document sensitivity when traditional classification tools can’t?

Most organizations classify only a fraction of their content. With agentic AI, we can classify almost all content, and with a much deeper, more nuanced understanding.

Let me give you an example. Take a technical architecture document with really detailed information on how your technology works, and compare it to a product data sheet, which is publicly available. To the naked eye, those two documents might look the same. But the agent can get the context and nuance: “This looks like an internal technical architecture document, so it’s company IP and probably sensitive; this looks like a data sheet, so it’s not.” The agent supercharges your ability to classify content that you otherwise just wouldn’t be able to classify.

Q: How is security changing in an AI-first era?

AI has brought a whole new type of user to the enterprise — agents. We’ve already seen the beginnings of agents engaging with content, collaborating with users, and effectively multiplying your workforce. Adoption isn’t slowing down.

But securing an agent-driven organization is complicated. You need to match the speed and scale of how AI and agents work on content. You need to verify outputs meet compliance and regulatory requirements. And as we’ve seen, you need to supercharge security teams’ ability to deal with the scale of issues coming their way.

There’s also a new vulnerability we’re seeing: AI makes accidental data exposure more likely. If permissions are set incorrectly, a user might not know they have access to sensitive content, so they won’t go looking for it. But ask an AI chatbot for information, and if that data exists in a document they technically have access to, they’ll get it — even if they shouldn’t. AI essentially increases the attack surface.
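That exposure gap, where an assistant surfaces everything a user technically can read rather than what they were meant to read, can be made concrete with a toy permission model. All names and structures below are invented for illustration and are not a Box API.

```python
def retrievable_docs(user, docs, permissions):
    """Everything the user *can* read. A chatbot grounded on this set will
    happily surface files the user never knew they had access to."""
    return [d for d in docs if user in permissions[d["id"]]]

def overexposed(docs, permissions, intended):
    """Documents whose actual audience exceeds the intended audience,
    i.e., where misconfigured permissions widen the attack surface."""
    return [d for d in docs if permissions[d["id"]] - intended[d["id"]]]
```

A human with stray access to `salary-bands` may never open it; a chatbot answering “what are our pay ranges?” will, which is why auditing the gap between actual and intended audiences matters more once agents are in the loop.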

Fortunately, AI agents also present an opportunity. They can provide the deeper, nuanced understanding of content we need to protect it better. And they can help security operations teams handle threats at the scale and speed required in this new era.

Q: Why can’t organizations just add AI security tools on top of existing systems?

The most common approach organizations take to protecting content, and in my opinion the least effective, is a bolt-on solution layered on top of the content platform. The content platform is one thing, and then you have a separate content protection solution.

To me, that doesn’t really work well because the content protection solution requires a much deeper understanding of the content itself. It’s very hard to get that understanding if you have a bolt-on solution.

Content understanding has several aspects, and it’s not only about what the content is about. You can think about the content in terms of what it means, what value it has to the organization, and what purpose it’s trying to serve. That’s one aspect.

But there’s also the aspect of who authored that content, who’s collaborated on that content, how old that content is. There’s a bunch of additional context that you need to put together to have a much better understanding of the sensitivity and therefore the value of the content. Then you can classify it, label it, and apply the right protection segmentation.
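The combination of content and context signals described above can be pictured as a simple scoring function. The signal names, weights, and threshold below are invented for illustration; a real classification agent would infer these rather than hard-code them.

```python
def sensitivity_score(doc):
    """Blend what the content is with who touched it and how old it is."""
    score = 0.0
    if doc["topic"] in {"architecture", "legal", "financials"}:
        score += 0.5               # meaning: what the content is about
    if doc["author_dept"] in {"engineering", "legal", "finance"}:
        score += 0.2               # authorship
    if doc["external_collaborators"] == 0:
        score += 0.2               # collaboration: internal-only
    if doc["age_days"] < 365:
        score += 0.1               # recency
    return min(score, 1.0)

def classify(doc, threshold=0.7):
    """Map the score to a label that downstream controls can act on."""
    return "confidential" if sensitivity_score(doc) >= threshold else "public"
```

The point of the sketch is the inputs, not the arithmetic: authorship, collaboration patterns, and age are signals a bolt-on scanner reading only file bytes never sees.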

This is obviously very hard to do with bolt-on solutions. What you really need is a content platform that has built-in content identification and understanding solutions in place. That’s really what you want to use.

Box Shield Pro’s agents work alongside your security teams as an extension of them, automating processes that security teams today have to do manually.

Manoj Asnani, VP of Product Management of Security & Compliance at Box

Q: How will AI-powered security tools work across different industries?

Last year alone, more than $1 billion was paid out in ransomware attacks, which is just crazy. And what often happens is that an “endpoint” — like an employee’s laptop — gets infected with a ransomware payload. 

So let’s say in a hospital, an employee with access to all patient record repositories has their laptop impacted by ransomware. It starts encrypting files and uploading them back to the data repository, rendering them useless to doctors and nurses who need access.

Our ransomware detection solution, part of Box Shield Pro, would spot these patterns and sever the laptop’s connection to the Box data repository. The ransomware doesn’t get into Box, and patient records aren’t impacted.

Similarly, for a large media and entertainment studio, the Classification Agent (part of Shield Pro) can build a far better understanding of a pre-release movie script and its context (i.e., collaboration patterns, dates), determine its sensitivity, and apply its “confidential” classification label. That label can then drive all the downstream security controls, such as access policies, DLP, and watermarking.

Q: What new security challenges will emerge as AI agents become more autonomous?

As I mentioned earlier, AI introduces a new type of user to the enterprise, which is the agent. And just like any user, agents need to be secured.

The types of challenges our customers should be concerned about — if they’re not already — are: What if my agent gets a malicious prompt? How do I protect against that? What if my agent starts to take actions it’s not supposed to take? How do I make sure that my agents don’t accidentally expose sensitive information to internal and external entities?

In the next six to twelve months, we’re going to be focused on solving these challenges for our customers: making sure that AI is secure, users are secure, and people feel confident using AI without any ambiguity.

Ready to dive deeper into BoxWorks? Get insights on all our announcements and new innovations in this event recap.