Moving from AI pilots to enterprise transformation with Box and Deloitte


For the last two years, the business world has been obsessed with a single question: "How do we use AI?" 

We’ve seen a gold rush of chatbots, summarization tools, and automated email drafts. But as the initial novelty fades, a more profound shift is taking place. We’re moving away from a world where AI is a tool we pick up and put down, and toward a world where AI is a teammate that works alongside us — often autonomously.

The Box AI First Podcast episode “Moving from AI pilots to enterprise transformation with Box and Deloitte” explores how enterprise leaders can move from assistive systems to autonomous agents to re-imagine business processes. Joining podcast host Jon Herstein, Box Chief Customer Officer, is Beena Ammanath, Executive Director of the Deloitte AI Institute and author of the books Trustworthy AI and Zero Latency Leadership.

Herstein and Ammanath explore the transition currently happening in enterprise AI, with one clear takeaway: The organizations that win in the next decade will be the ones that rethink their culture, leadership, and business processes for an era of autonomous agents.

Key takeaways:

  • Organizations must shift from "piecemeal AI" automation to a "Day One" mindset that rethinks core business processes and workflows for an era of autonomous agents
  • Bridging the "literacy gap" across the entire organization, from the frontline to the boardroom, is essential to move from a defensive posture of fear to an offensive posture of innovation
  • Future enterprise success depends on developing an agile, use-case-specific governance framework that ensures AI systems are both responsible and trustworthy while acknowledging that the "AI" label will eventually become a seamless, assumed part of all software

The “Day One” mindset required for AI transformation

Most companies are currently stuck in what Ammanath calls "piecemeal AI." Here and there, they’ve taken an existing process and sprinkled a little bit of automation on top to save a few minutes. While this might provide a small boost in efficiency, it misses the larger opportunity of AI.

True transformation requires a "Day One" mindset: If you were starting your company today, knowing what modern technology can do, would you build your workflows the same way? Likely not. As Herstein notes, "The North Star may start out as being about process efficiency, but it really should be much more than that."

This isn't just about doing things a little differently; it's about doing different things entirely.

Instead of asking how AI can make a meeting faster, we should be asking if the meeting needs to happen at all. Perhaps an autonomous AI agent could instead coordinate the decision-making process that would otherwise happen in the meeting.

Not just what’s possible; what’s responsible

Still, even as enterprises work toward reenvisioning work with AI and particularly AI agents, Herstein notes, “The question isn’t just what’s possible, but what’s responsible. How do organizations build systems that are autonomous while still being trustworthy?”

Ammanath says, “We tend to think about trust as one size fits all, and it's really not.” Different AI use cases in different industries have different parameters around trust. Ammanath points to her early AI work building models that predicted jet engine failure rates or wind turbine power generation: “Bias doesn’t really matter in these scenarios. What matters is the reliability of the algorithm.”

But in a modern medical AI use case, she notes, “You cannot have black box algorithms.” There’s a need for transparency and strict compliance around data privacy. This constitutes a completely different type of trust than the prediction models she first mentioned.

At the same time, Ammanath concedes, "The reality is AI is not yet fully reliable in every possible use case. And rightly so; it’s not mature enough yet. So I think that’s where it still needs to be human-led and AI-powered."

In some use cases, strict compliance and transparency around AI is paramount. In others, technology must simply perform better than a human to be accepted. For leaders, this means governance cannot be a static checklist. It must be an agile, ongoing process. And because AI models can change or "decay" over time, the responsibility for oversight must be distributed across the company, not just siloed in the IT department.

The literacy gap is the real bottleneck

We often talk about the "skills gap" in technology, but Ammanath argues that the real hurdle is a "literacy gap." AI literacy isn't just for developers and data scientists. It’s a requirement for everyone from the frontline worker to the boardroom.

"AI literacy is the number one thing to bring to your team and the broader organization," she explains, "and not just to your teams — to your leadership and to the board."

Without AI literacy, fear takes root. Employees worry about replacement, and leaders worry about risks they don't fully understand. When an organization understands the capabilities (and the very real limitations) of these systems, they can move from a defensive posture to an offensive one. Literacy creates the psychological safety needed for innovation.

At Box, Herstein has seen this firsthand: "We have a weekly company lunch where everyone is invited. We give updates on business and so forth. And pretty much every single week now, we have an employee showcase a thing that they did with Box AI. It does create a little bit of a competitive spirit."

The end of the AI label

Perhaps the most provocative prediction from the discussion is that the term "AI" itself has an expiration date. Today, companies go out of their way to highlight "AI-powered" features in their software. But as these capabilities become woven into the fabric of every application, the label will become redundant.

"I think by the end of 2026, calling out AI features will be gone," Ammanath predicted. "It should be so seamless that you don’t need a manual or any kind of training."

Herstein agreed, suggesting that we are approaching a time when the presence of intelligence in our software is simply assumed. "Right now, the AI is very explicit," he said. "My prediction is, not too far in the future, we’ll assume, ‘Of course there’s AI in here. So why would you call it out?’"

Leading into the autonomous era

The transition to an agentic enterprise requires leaders to balance the rapid pace of technological change with the slower pace of human and organizational adaptation. As Herstein summarizes, "The future of the enterprise isn't just about deploying smarter technology. It's about leading organizations that are ready for autonomy, speed, and change."

The goal isn't to build a company that uses AI. It's to build a company that’s ready for whatever comes next.

Watch the full episode.