Trust is the essential middleware that speeds up or slows down how we interact with each other and our environment. Every day we make split-second decisions about how much to trust others, usually without thinking about it.
This is especially true with strangers and as we get to know people we’ve just met. It’s the same with technology. Are we going to trust that a technology will work as intended? Why or why not? And how does our level of trust change when sensitive data or significant risk is involved?
AI is the stranger that just moved into town. We’re hearing a lot, especially about the upside potential, but there are also important considerations around trust. Today, we’ll discuss questions you should ask AI vendors – or at least consider internally – when evaluating options for working with AI. Then, we’ll share Box’s approach to building trust for our upcoming AI offering.
AI questions we should all ask
In May, Box published our AI principles, as reflected in our Box AI Acceptable Use Policy. These principles and policies provide a foundational blueprint for the questions we suggest organizations ask potential AI vendors.
- Do we control AI usage on our data? You should be able to enable or disable the use of AI and control what content AI will be used on, as well as how it will be used, including what data AI vendors use for training their models.
- Is our content secure? The reality is that bad actors are also adopting AI, and AI is fundamentally changing data security. As with any SaaS vendor, if you don’t receive sufficient assurance that your data will be protected, you need to question why you would work with that vendor.
- What is the governance model for AI? You should know what AI governance the vendor has in place, to what extent it is supported by leadership, and whether it meets the baseline requirements of existing security, privacy, and applicable regulations and standards.
Building Trust in AI at Box
Box strives every day to build trust with our customers and partners. With the exciting advent of AI, here are some of the steps we’re taking to maintain and grow your trust in Box as we incorporate AI into our product portfolio.
- Third Party Assessment and Contracting. It starts with reviewing the security and compliance of potential AI partners and agreeing to contractual terms to formalize our requirements.
- Product and Engineering. Products need to be built with security, compliance, and privacy requirements in mind. We’re doing exactly that: at Box, security and compliance are part of our DNA.
- AI Governance. We have a cross-functional AI Governance plan in place, with senior leadership support and involvement, that provides the structure, process and oversight to make sure we are adhering to our AI principles. Stay tuned for a separate AI Governance blog.
- Marketing and Customer Trust. We have been – and will continue to be – upfront and transparent with you, our customers and partners, on what we’re doing – and what we’re not doing – in the AI space. See our Box AI site for the latest on Box’s AI product and the Box Trust Center for all things trust.
- Feedback. Building trust is about more than providing information. It’s also about listening and responding to feedback. As we move forward with Box AI, teams across Box will gather feedback to better understand evolving customer needs in our never-ending effort to blow our customers’ minds.
To learn more about AI at Box, please join us for BoxWorks on Oct. 11, 2023. Registration is free, and speakers include Sam Altman, OpenAI CEO; Dustin Moskovitz, Asana CEO; and Aaron Levie, Box CEO. Be sure to watch our session “Ensuring data privacy and trust across AI innovations” to hear from Box’s Chief Compliance Officer, Tom Cowles, and Chief Privacy Officer, Leah Perry.
To learn more about Trust at Box, you can visit the Box Trust Center.