The Content Threat Landscape

This post is part of a series in which Box and our security community address the impact of Artificial Intelligence (AI) on content security and develop a strategy for keeping content secure in a world shaped by AI.

In our last post, we covered how the nature of content is evolving across its entire lifecycle due to the explosion of AI-powered tools, and the steps needed to securely capitalize on the increased content velocity that has emerged as a result. But it's not enough to plan for the opportunities and challenges of AI; we also have to be ready to protect ourselves and our content against new threats from AI-powered bad actors.

AI introduces new dimensions to the content threat landscape. Some aspects are immediately recognizable as threats, while others have latent features and implications. The obvious and very real threat of AI generating new malware, or of AI being embedded within malware to create adaptive strains that are harder to detect and remediate, places new demands on the tools we rely on to defend our content. Beyond malware, bad actors are also leveraging generative AI to dramatically accelerate the creation of phishing attacks, which contributed to a 1,265% increase in malicious phishing emails in 2023. And we can't discount the new threats to data integrity, with deepfakes and similar tools making it more difficult than ever to discern the real from the artificially generated.

On the other side of the coin, we face new potential threats stemming from how organizations leverage AI internally. New privacy concerns around generative AI tools are driving the creation of countless new regulations, because the risk of poorly managed AI models misusing personal data is real. On a related note, generative AI also runs the risk of generating new content that contains sensitive information, from company financials to Social Security numbers, and without proper controls this threat can turn into real damage.

“You need to have a constant vigilant eye watching for risks, it is evolving so quick it’s a challenge to manage, our IT security staffing has expanded exponentially over the last few years. We are a book publisher, and our content is shared globally, so we need to have policies and monitoring for sharing with external partners” -Tom Saal, Penguin Random House

Developing Strategic Priorities

In collaboration with security professionals from a wide range of industries, we identified strategic priorities for dealing with the emerging threats that widespread AI adoption can pose.

Below is a selection of the feedback. For even more discussion, detail, and strategic priorities, look for our combined eBook later this year.

Strategic Priority: Identify threats embedded in content

Having the tool set to identify content threats, whether AI-generated or AI-powered, is essential. This is an area where AI-powered tools are needed to keep pace: advanced detection and automated protection can identify even sophisticated malware and ransomware before it proliferates, and can help security controls scale to withstand volumetric attacks. In general, this priority centers on understanding and insight into the content itself: being able to intelligently identify potentially malicious or anomalous characteristics within content, at scale.
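To make this concrete, here is a minimal sketch of what content-level scanning can look like. It is purely illustrative, and the blocklist, threshold, and function names are hypothetical rather than any description of Box's implementation: it checks a file's hash against known-bad digests and flags high-entropy payloads, a common heuristic for packed or encrypted malware.

```python
import hashlib
import math
from pathlib import Path

# Hypothetical blocklist of known-malicious SHA-256 digests; in practice this
# would be fed by a threat-intelligence service, not a hard-coded set.
KNOWN_BAD_HASHES = {
    "0" * 64,  # placeholder digest for illustration only
}

# Bits per byte; values approaching 8.0 suggest packed or encrypted data.
ENTROPY_THRESHOLD = 7.5


def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)


def scan_file(path: Path) -> list[str]:
    """Return a list of findings for one file; an empty list means no flags."""
    data = path.read_bytes()
    findings = []
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        findings.append(f"known-malicious hash: {digest}")
    if shannon_entropy(data) >= ENTROPY_THRESHOLD:
        findings.append("high entropy: possible packed or encrypted payload")
    return findings
```

In a real pipeline, checks like these would run automatically on every upload and feed quarantine or alerting workflows, which is what lets the control scale with content volume.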

“Threats can be embedded in content, threats can surface or be ‘derived’ from content, and generative AI will expose new threats, the key is to consider how we mitigate them through a fundamental rethinking of how we generate, engage with and leverage digital content.” -Don Hammons, mxHERO

Strategic Priority: Identify threats derived from content

When it came to the threats created by generative AI usage within the organization, the highest priorities raised were around maintaining visibility and control of potentially sensitive information. Participants expressed concern that, without a thorough understanding of how AI was handling their data when generating new content, and without effective controls over generated content, it would be hard to use generative AI in any significant way. Mitigating the risk of violating regulations, compromising customer or employee privacy, or otherwise exposing confidential content is a major priority.
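As a minimal sketch of the kind of control participants described (hypothetical pattern-based detection, not any particular vendor's DLP engine), the snippet below scans generated text for Social Security numbers and credit-card-like numbers before the content is shared:

```python
import re

# Hypothetical patterns; real DLP engines combine patterns with context,
# checksums, and ML classifiers to reduce false positives.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to filter out random digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(
        d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
        for i, d in enumerate(digits)
    )
    return total % 10 == 0


def scan_generated_text(text: str) -> list[str]:
    """Return labels for sensitive data found in AI-generated output."""
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            if label == "credit_card" and not luhn_valid(match.group()):
                continue  # skip digit runs that fail the checksum
            findings.append(label)
    return findings


# Gate generated content before it is shared externally.
if scan_generated_text("Employee SSN: 123-45-6789"):
    print("Sensitive data detected: route for review before sharing")
```

A check like this would sit between the generation step and any sharing or publishing step, so flagged content can be quarantined or routed for human review rather than exposed.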

“We have data access/sharing concerns. For example, we are testing [generative AI solution] and are not sure where the data goes, what path it takes, and what is shared. There is no transparency, and it is unclear what principles apply to access of our data both inside and outside our company” -Derek Fuller, Kauffman Foundation

Conclusion

We gained plenty of great insight into how AI impacts the threat landscape from our discussions with our security community, from automatically generated malware to the subtler threats that AI-generated content can bring. Regardless of the source, the key is still to understand your content, maintain automated security controls that protect it at scale, and above all, leverage AI yourself to stay a step ahead.

This post covers (some of) the second chapter of our upcoming eBook, “Developing your AI-infused Content Protection Strategy”, where we and the Box Security community attempt to chart a course for effectively securing your critical content in the age of AI. Our next chapter (and post) will focus on the goals and guardrails that need to be put in place to securely leverage AI within an organization.
