"There have to be no bad ideas. The thing that seems the most outlandish might be the greatest opportunity going forward."
When it comes to inspiring AI adoption in higher education, this is the ethos of Baylor University CIO and CISO John Allen, whose job it is to both enable AI use and simultaneously protect the university’s (and users’) data.
Baylor is a private Christian educational institution located in Waco, Texas, that’s achieved R1 status — the highest classification level for US doctoral universities. Baylor holds itself deeply accountable when it comes to using and protecting information, but at the same time, under Allen’s leadership, the university empowers staff and students to use AI tools to work more quickly, efficiently, and downright differently than ever before.
In the latest episode of the Box AI-First Podcast, Box Chief Customer Officer Jon Herstein sits down with Allen to explore the topic of AI integration in higher education amid an evolving technological landscape. From AI as an enabler of human interaction to advanced cybersecurity strategies, this was a conversation packed with valuable insights about the role of technology in the university.
Key takeaways:
- AI should enhance human interaction rather than replace it, allowing people to focus on higher-value work by handling routine tasks like drafting reports and creating job descriptions
- Innovation and cybersecurity have to exist in balance — managing risk without being overly conservative
- Organizations must continuously evaluate AI vendors, asking critical questions about data training, prompt handling, and visibility to maintain control while maximizing AI potential
- The best innovation comes from employees who understand specific problems, requiring leaders to create environments where all ideas are welcomed and real-world use cases are showcased
Democratizing the use of AI in higher education
Allen has been at Baylor for over 25 years, currently serving as both CIO and CISO — a responsibility he jokingly describes as "sitting in his office arguing with himself." It’s his job both to hold the vision for technology at Baylor and to understand cybersecurity risk across the organization. This dual role means balancing innovation with security, and Allen says,
You need to make sure you’re enabling the business — but doing it in such a way that you’re protecting your constituents’ information and your organization’s intellectual property. It’s a tedious balance.
Allen emphasizes the importance of managing risk without being overly conservative, offering a candid take on widespread assumptions about cybersecurity professionals: "I think people expect I should be as risk averse as possible — kinda tinfoil hat."
But he takes a more visionary approach to AI and the future of technology within higher education. He believes in making technologies like AI usable by the university’s diverse community, saying, “In no way should you have to be a technologist to use these things. The word I like to use is democratized."
Allen’s attitude reflects a future where technology no longer feels intimidating or exclusive but instead extends its capabilities to educators, staff, and students.
AI as an enabler of human work — not a detractor
As CIO, Allen’s general philosophy toward AI is that “The greatest innovation happens with people around the table, a coffee pot, or a cooler — not necessarily sitting there typing away on their screen all day trying to get their tasks done.”
For this reason, he views AI tools not as replacements for human workers but as ways for talent to flourish without being tied to banal tasks. AI can, for instance, handle tasks like drafting accurate divisional reports, creating job descriptions, and comparing RFP outputs. With these sorts of tasks, Allen credits AI with the ability to do "a better job than if I had a committee around a table spending a day working on it — and get you 90% of the way in seconds.”
Leveraging AI to enhance human interactions (rather than eliminate human elements) is a cultural and operational pivot he believes universities must embrace. “Now,” he says,
People are able to put their energy toward additional value — not just having to go through and fact check. For this reason, AI is actually an enabler to greater human interaction. It’s not a detractor from human interaction.
And on a practical level, Herstein points out how these technologies hold the potential to tangibly reduce staff hours and accelerate processes: "If you could take forty to sixty hours of staff time and not only reduce it, but also compress it in a way that the turnaround back to the requester is faster, it just means you accelerated the velocity getting that through the system."
Creating an “office of how” versus an “office of no”
During the podcast conversation, Herstein asked Allen, “How do you foster a culture that balances the desire to use these tools and be innovative — but also be aware of risks?”
With his CISO hat on, Allen views AI tools with strict scrutiny. Particularly in higher education, where data protection must extend beyond intellectual property to safeguarding student and staff identities, data security is pivotal. Allen calls data sprawl in particular “the enemy of cybersecurity,” which is why consolidating the organization’s content on one secure platform is so critical before taking advantage of AI.
Moving toward centralized platforms and automating processes like content classification can significantly mitigate risks, and Allen praises next-generation tools designed to manage data securely without requiring constant user intervention. For example, solutions like Box that integrate cybersecurity and compliance features directly into their frameworks ensure that data governance occurs seamlessly in the background, strengthening risk posture while simplifying workflows.
This is important because, rather than being perceived as obstructionist, Allen believes cybersecurity functions must pivot toward being solution-driven — what he refers to as the "office of how" rather than the "office of no." A comprehensive strategy involves thinking beyond protecting data to ensuring systems remain functional and accessible — balancing security with usability. Technology, Allen emphasizes, should empower users, not confuse or alienate them.
Giving a digital-native generation the AI tools they need to innovate
Leaders have to foster environments where ideas, no matter how unconventional, are invited and explored. That means listening to all ideas. As Allen says, "The people who understand problems the best are down in the organization. They have to have the understanding of what's possible to then be able to float up to us."
Herstein backs up the dual importance of executive leadership with grassroots innovation when it comes to AI: “There are going to be top-down initiatives, but at the same time, you want users bubbling up and saying, I’m actually using AI in my day-to-day right now to help me do things.”
Indeed, effective change management revolves around making technology approachable and helping users see its value in real life — not as an abstract future concept but as tangible progress today. With incoming university students (and, increasingly, staff) who are digital natives, there’s an expectation of advanced technology in the university, and there’s also a deep willingness to use it.
But this digital-native generation, for all its savvy and bravado around new tools, doesn’t automatically have the context of data security that’s important to an institution. So Allen says, “We need to make sure we’re educating students before they leave. They don’t want to learn that lesson in their first job.”
Above all, he teaches
The golden rule of data — treat the information you’re interacting with at work every day like you would your own personal data.
Vendor assessment and continuous cybersecurity evaluation
As AI use and tools evolve, and the regulatory landscape shifts, Allen stresses the need for ongoing vendor assessments in order to keep operational integrity intact. He outlines key questions leaders should keep in mind when evaluating AI solutions:
- Are you training on our data?
- Is our data being poured into a larger corpus of training material?
- Are prompts handled as confidential information?
- Are prompts stored?
- Who has visibility into those prompts?
Understanding these nuances of AI tools ensures that organizations maintain control over their data while unlocking its full potential. Allen also evaluates vendors in terms of their cyber risk profile, insurance plan, past audits, and other practical metrics to ensure he’s making grounded technology purchasing decisions.
Cybersecurity is not just about data confidentiality
With these foundational security boxes checked, Allen’s perspective on AI cybersecurity is that it’s actually about a lot more than protecting data.
Often, all people want to talk about when they talk about cybersecurity is confidentiality. They often overlook things like business continuity — which is availability, another huge tenet of cybersecurity. Being in the cloud really checks that box in a significant way.
Finding the right technology partners is crucial to this aspect of Baylor’s AI approach. Allen likens this concept to having a Harley-Davidson (which he does). When he has to service his Harley, he takes it to someone who specializes in Harleys. “I don’t just go down the street to a general mechanic. And I look at cybersecurity and solution platforms the same way — I go to a group providing the most secure cybersecurity for identity. I don’t go to a generalist for a really strong identity suite.”
Technology is evolving and changing so quickly today that it’s critical for higher education institutions to partner with technology specialists who are on the leading edge when it comes to modern security and compliance demands.
Driving successful adoption in the university
Above all, institutions must ensure that the AI platforms and tools they use deliver continuous improvement and strategic alignment, keeping human oversight and transparency as core values. Allen advises leaders to continually question whether new tools are truly solving problems and continuing to align with strategic goals: "Regardless whether it's AI or any technology, it should meet those two criteria. If it’s not meeting those criteria at a baseline, you have to ask yourself, why are we doing this?"
Both Allen and Herstein foresee transformative possibilities for institutions willing to embrace AI thoughtfully — balancing accessibility, ethical considerations, user empowerment, and strategic alignment. Their insights paint a future of higher education that doesn’t just integrate cutting-edge technology, but reshapes it to meet the needs of a dynamic and diverse academic community.
Catch the full episode
Ready to dive deeper into this discussion? Don’t miss the Box AI-First Podcast. Subscribe now to hear from CIOs and other tech leaders on reimagining work with the power of content and intelligence — and putting AI at the core of enterprise transformation.
