“Everyone’s buzzing about open source models. But what are they, and how do they differ from regular models?” Host Meena Ganesh explores this question with Box CTO Ben Kus in our latest AI Explainer Series episode.
In today’s changing AI landscape, businesses must make important decisions on whether to leverage closed proprietary models, open weight models, or other open source alternatives. Understanding the key differences among these options is crucial for harnessing AI effectively while balancing control, cost, and operational requirements.
Key takeaways:
- Closed AI models operate as “black boxes,” providing powerful performance but limiting user insight into their internal mechanics
- Open weight AI models offer transparency and control by allowing users to download and run the model’s core files, or “weights,” locally
- Adopting open weight models offers customization and autonomy, enabling businesses to fine-tune and run AI tools on their own infrastructure
- Free doesn’t always mean free, as operational costs for servers and electricity can make open models more expensive than their closed counterparts
- Open weight AI models provide an important alternative to proprietary solutions and help prevent vendor lock-in
Open vs. closed AI models
“You could think about models in two categories,” says Kus. “One would be the proprietary, or closed, models, where you don’t actually know how they work. It’s kind of a black box.”
OpenAI’s GPT-5 and Anthropic’s Claude fit into this category, delivering robust performance while offering little insight into their inner workings.
For enterprises looking for more control and customization, open source models offer unique advantages, much like open code does for developers. Anyone can find them, download them, and run them locally. Examples include Meta’s Llama, Mistral, and OpenAI’s GPT-OSS.
“Open models come from that same sort of mindset, although some of the benefits of open source don’t quite apply,” Kus says. “So instead of calling these models open source, we call them open weight.”
What’s in a name?
The term open weight reflects the technical essence of open models, where most of the innovation lies in massive files containing model parameters, or “weights.”
Open up a familiar open source project like MySQL or Linux, and you’ll find millions (or even tens of millions) of lines of code, including all of the drivers and other elements that make up that type of open source offering. For a really huge application platform like Facebook or Google, the line count can reach the hundreds of millions.
You might expect, then, that an open source AI model would have vastly more lines of code.
“You think it would be so big — after all, it’s some of the most revolutionary technology ever made,” Kus explains. “But when you look at the base models, they really only have thousands of lines of code.”
But — and this is big — the model comes with giant files known as weights. He continues: “Most of the value of these models revolves around these weights. If you opened it up, you would see a bunch of floating point numbers.”
In fact, these weights can measure hundreds of gigabytes. For instance, OpenAI’s GPT OSS weighs in at an eye-popping 240 gigabytes. These massive files are both a strength and a challenge of open weight models.
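The file sizes Kus describes follow from simple arithmetic: a weights file is essentially the parameter count multiplied by the bytes used to store each number. A minimal back-of-envelope sketch (the parameter counts and precisions below are illustrative placeholders, not taken from any specific model card):

```python
# Back-of-envelope: why weight files are so large.
# A model's "weights" are just arrays of floating point numbers,
# so file size is roughly parameter count times bytes per number.
def weight_file_gb(num_params: float, bytes_per_param: int) -> float:
    """Approximate on-disk size of a weights file, in gigabytes."""
    return num_params * bytes_per_param / 1e9

# Illustrative examples (not any real model's published numbers):
print(weight_file_gb(7e9, 2))    # 7B parameters at 16-bit precision   -> 14.0 GB
print(weight_file_gb(120e9, 2))  # 120B parameters at 16-bit precision -> 240.0 GB
```

Even before any code runs, just storing the numbers for a large model occupies hundreds of gigabytes, which is why downloading and hosting open weight models is a nontrivial undertaking.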
Not quite open source, but still powerful
“One of the benefits of open source software is that you can see exactly how it runs,” says Kus.
For someone who has a decent understanding of code, open source software is easy to read. But open weight AI models are a little different.
“If you look at the weights, it doesn’t really make sense to you,” Kus explains. “The fact that it’s open is great, but the weights are not really readable by anyone.”
So what’s the benefit of open weight AI? “One of the reasons people are excited,” Kus says, “is that you can so easily download and run some of these models. There’s a whole variety of them, and they come in different sizes, from different vendors. You have full control.”
You can also fine-tune these models if you know how — taking the base models, adding some training runs and different techniques, and updating some of the weights.
“Rather than do it yourself from scratch,” Kus says, “which would be incredibly expensive, you can build on top of these models.”
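Fine-tuning, at its core, means nudging existing weight values toward new data instead of learning them from scratch. As a toy illustration only (real fine-tuning adjusts billions of weights with specialized training tooling), here is a single-weight gradient-descent step:

```python
# Toy illustration of fine-tuning: nudge an existing weight toward new data
# rather than training from scratch. Real fine-tuning updates billions of
# weights; this one-weight example only shows the idea.
def fine_tune_step(weight: float, x: float, target: float, lr: float = 0.1) -> float:
    """One gradient-descent step on a single weight for the model y = weight * x,
    minimizing squared error against the target."""
    prediction = weight * x
    gradient = 2 * (prediction - target) * x  # derivative of (w*x - target)^2 w.r.t. w
    return weight - lr * gradient

w = 1.0                # the "base model" weight we start from
for _ in range(20):    # a few "training runs" on one example
    w = fine_tune_step(w, x=2.0, target=6.0)
print(round(w, 3))     # converges toward 3.0, since 3.0 * 2.0 == 6.0
```

The point of the sketch is Kus’s point: the expensive part (the base weights) already exists, and fine-tuning only adjusts it.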
As Ganesh points out, you can also use a closed source model on any device, if the device is powerful enough. But this requires accessing the model over the internet: you can’t run it without connectivity, and you can’t guarantee an isolated environment. With open weight AI, you can.
Taking fine-tuning out of the black box
Closed models, too, allow for fine-tuning. You can feed the vendor examples, and they’ll update the model for you. But in a closed model paradigm, the vendor creates the new model and hands it to you.
“It’s still in a black box,” Kus says. “You don’t really know how it works.”
If companies want greater autonomy, open weight models provide more control. With an open weight AI model, you can fine-tune and run the model yourself.
Ultimately, enterprises need to weigh their priorities. Closed proprietary models offer a service-oriented approach, where big AI vendors handle the technical operations for users. Meanwhile, running open weight models internally gives businesses ownership, customization, and deeper control over their AI tools.
The misleading nature of “free”
One major advantage of open weight models: They’re free.
Well, sort of. There’s no licensing fee, but you still have to run the model. And while the barrier to entry for open weight models may seem low, operational expenses like the cost of GPU servers can quickly add up. Kus contextualizes this with a familiar analogy:
“Would you rather take an Uber, or would you rather be given a free car?”
It might seem smarter to take the free car and drive it around versus relying on a paid service to get everywhere. But if you step back and assess the total cost of ownership of a car, the equation changes. With an Uber, you don’t pay insurance, you don’t pay for gas, you don’t pay for parking when the car isn’t in use. But your free car can cost you a lot.
“It’s very similar when running infrastructure at this scale,” says Kus. “Just to provide and get these systems, you have to pay for the costs of servers, space, and electricity, and typically you don’t have the model running all the time — but you still have to pay for it in order to be able to use it on demand.”
These costs can actually be more expensive than simply using a closed source AI model. Many leading closed models are pretty reasonably priced, and costs are constantly coming down as models become more efficient. So, Kus says, “In many cases, they can actually run it cheaper than you can run it ‘free’ yourself.”
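Kus’s point about paying for on-demand availability can be made concrete with back-of-envelope math: a self-hosted GPU server bills for every hour it is reserved, while a hosted API bills per token used. The prices below are made-up placeholders, not quotes from any vendor:

```python
# Sketch of the "free car vs. Uber" math for AI models,
# using illustrative, made-up prices. Plug in your own quotes.
def self_hosted_monthly_cost(gpu_hourly_rate: float, hours: float = 730) -> float:
    """A dedicated GPU server bills for every hour it's reserved,
    whether or not the model is answering requests (730 hours ~ one month)."""
    return gpu_hourly_rate * hours

def api_monthly_cost(tokens_per_month: float, price_per_million_tokens: float) -> float:
    """A closed, hosted model bills only for the tokens you actually use."""
    return tokens_per_month / 1e6 * price_per_million_tokens

# Example: a $4/hour GPU server vs. 50M tokens/month at $2 per million tokens.
print(self_hosted_monthly_cost(4.0))  # -> 2920.0 (paid even when the model sits idle)
print(api_monthly_cost(50e6, 2.0))    # -> 100.0
```

At low or bursty usage, the pay-per-use service wins easily; the self-hosted math only starts to favor "free" weights at sustained, high utilization.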
Back to the Uber vs. free car analogy. Another downside of the free car is that, once you commit, you’re locked into that car, even as carmakers keep releasing newer models. The same goes for AI: hosted closed models are continually upgraded for you, while a model you run yourself stays the version you downloaded.
The security question
With an open weight model, because you run it yourself, you’re in complete control. You can run it on the servers you want and configure the security parameters in a highly custom way.
“This is an important security consideration,” says Kus. “But many people will agree that models hosted by trustworthy vendors and platforms will keep the model safe and secure.”
Similar to how you might trust Azure or GCP with your data, you can run AI safely and securely even when it’s hosted elsewhere. Kus suggests that security is largely a non-issue because “you can pick either option, and there’s a way to do it very securely.”
So, should you use open AI models?
“It’s great to know that open models exist and that you could potentially use them,” Kus concludes. “But for the most part, you don’t actually need to, because the current way most people use models (via closed platforms or API) is still a great option.”
Open weight models might be right for you if you’re a very advanced AI user who wants to run on a specialized device or has another unique deciding factor, but the number of trusted enterprise providers offering secure, hosted models makes those the better option in most cases.
However, Kus emphasizes that the advent of open weight AI models is very important to the overall industry and the advancement of enterprise AI in general. Open weight AI models represent an incredible step forward for transparency, innovation, and competition in the AI ecosystem. The existence of open models also ensures that enterprises can avoid vendor lock-in, which can be a significant concern.
As companies like OpenAI and Google release open weight versions of their models, they create alternatives to proprietary solutions that benefit both customers and competitors. Ultimately, the existence of open weight AI models fosters a more vibrant and competitive industry, and that’s something worth celebrating.
Catch the full episode
Whether you embrace these models directly or benefit from their indirect influence, it’s clear they’re an important part of the AI evolution. As Kus says, “Even if you don’t use the open models, you’re excited that they exist, because it shows a path of openness for the whole industry.”
Ready to dive deeper into this discussion? Don’t miss “Open source vs. open weights: What enterprises should really know” on the AI Explainer podcast. Start listening (and subscribe) today to learn practical, actionable strategies for integrating AI into your organization or industry.


