Open Source AI Is No Longer A Sideshow
The AI model layer is quietly becoming plural, and I think that shift matters more than most people appreciate. OpenAI and Anthropic still set the pace on proprietary frontier models, and they are genuinely impressive at what they do. But if you look at what is actually being released week after week, the open source side no longer looks like a sideshow.
Llama gave the open weights world something serious to build on. Over the last couple of years the centre of gravity has shifted though, and a lot of the most interesting open models now come out of Chinese labs. They are shipping fast, they are shipping good models, and they are shipping them open. Whether that is permanent or geopolitical or both does not really matter. The apparent goal is that nobody should own the model layer, and that is what we are getting.
For incumbents this is understandably an awkward situation to be in. For everyone else it is great news.
Nobody outside a handful of boardrooms actually knows how much capital the frontier labs are burning, what their real unit economics look like, or how long they can keep running the operation at this scale. We are all placing long-term bets on companies whose business model is still being written. If one of them stumbles, pivots, or quietly decides your use case is no longer a priority, it would be nice to have somewhere else to go. Open weights models give you options.
They also open doors that were mostly shut before. European companies in particular have an opening here. You do not need a ten billion dollar training run to build something valuable on top of strong open weights and a well designed agentic harness. The model is the core technology. It is the thing that makes any of this possible, but turning that raw capability into a real-world use case is a separate problem, and that is what the harness is for - the tools, the orchestration, the context management, the evaluations, all the unglamorous plumbing that connects the model to the job the customer actually wants done.
That is the layer where most real-world use cases get built, and that layer does not require you to be a frontier lab.
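To make the idea concrete, here is a minimal sketch of what "the harness does not care which logo is on the weights" can look like in code. Everything here is illustrative, not any real ChatBotKit or vendor API: a model is reduced to a plain function from prompt to completion, and a small router owns the decision of which backend handles which task.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# A "model" is just a function from prompt to completion.
# Behind it could sit a frontier API or a local open-weights server.
Model = Callable[[str], str]

@dataclass
class TaskRouter:
    """Routes each task to whichever model is configured for it.

    Swapping a model, or adding a second one for a specific task,
    is a config change in the harness, not a rewrite.
    """
    default: Model
    models: Dict[str, Model] = field(default_factory=dict)  # task -> backend

    def run(self, task: str, prompt: str) -> str:
        model = self.models.get(task, self.default)
        return model(prompt)

# Stand-ins for a frontier API and a cheap open-weights model.
frontier = lambda p: f"[frontier] {p}"
open_small = lambda p: f"[open-7b] {p}"

router = TaskRouter(
    default=frontier,                  # most tasks: frontier model
    models={"summarize": open_small},  # cheap task: open model
)

print(router.run("summarize", "Condense this support ticket"))
print(router.run("plan", "Draft a migration plan"))
```

The design choice worth noticing is that the routing table, not the call sites, encodes which model does what, so a customer can move one task to a cheaper open model without touching the rest of the pipeline.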
This is something we feel directly at ChatBotKit. When we started a few years ago, OpenAI was basically it. We are grateful for that and we would not be here without it. But our internal workloads are increasingly open source. The models are good enough, the cost curve is different, and the control is better. The majority of our customers are still running frontier models, and for most of them that is still the right choice today. Our job is to make sure that when they want to add a second model, swap one out, or run a cheaper open model for a specific task, the harness does not get in the way.
The future I see is not one model to rule them all. It is many models, chosen for purpose, swapped when needed, orchestrated by harnesses that care less and less about which logo is on the weights. That world is more diverse, and I think more stable too. Alternatives exist. Appreciating that is the first step.
The second is knowing how to use them well. That is the part that is still largely up to us.