The AI Job Loss Story Is Mostly Fear

AI will change work, but many current job-loss decisions are driven less by working automation and more by executive fear that competitors may be using AI better.
Petko D. Petkov, on a break from CISO duties, building cbk.ai

Automation causes job losses. That is not controversial. But I do not think it explains what is happening with AI right now.

The strange thing about this moment is that many companies are reacting to an imagined future more than an operational present. They are scared of existing in a market where their competitors might be using AI tools better than they are. That fear creates pressure to adjust before anyone really understands what the adjustment should be.

The adjustment usually goes in one of two directions.

If the company can afford it, it tries to equip existing staff with as many AI tools as possible. The hope is that productivity rises fast enough to keep the company competitive.

If the company cannot afford that, or if leadership believes AI is already a massive advantage somewhere else, the instinct becomes more defensive: reduce headcount, keep the same output target, and assume AI will fill the gap.

That second move looks rational from far away, but up close it often falls apart.

Outside a few well-known domains, especially coding assistance, most enterprises are not using AI as effectively as people imagine. There may be a chatbot that answers internal questions or a search tool over documents. There may even be a few teams experimenting with workflows, but fully autonomous work, especially unsupervised work, is still extremely rare.

The gap between what is said and the reality is truly enormous.

This is why I think some of the current job-loss threat is fictional. Not fictional in the sense that people are not losing jobs! They are! Fictional in the sense that the replacement story is often not true yet. The company did not find a magical machine that can do the work. The company got scared that someone else might.

The better approach is slower and more boring. Start small. Find the Pareto point. AI is much closer to an augmentation tool than a replacement tool right now. Treating it as replacement too early is how you get brittle systems, angry customers, confused employees, and executives wondering why the savings never turned into capability.

The barbell strategy makes a lot more sense. On one side, lots of small low-risk AI augmentations. On the other side, a few deliberate high-conviction bets where the upside is large enough to justify the learning curve.

Everything in the middle is where companies waste money.

I say this as someone building in this space every day. At ChatBotKit, we probably use more internal AI systems than most companies can even imagine. We have many agents already deployed. We use them constantly. They help us get through the work.

But this capability did not happen overnight.

It took time to understand where AI actually fits. It took time to learn what should be automated, what should be augmented, what should stay human, and what should not exist at all. We have a much better mental model now than we did a year ago, but I would still not claim the picture is complete. We are still learning.

And if that is true for us, a company that works on this all day, then most companies are probably earlier than they think.

The real shortage is not tooling but strategic understanding. Most companies still misunderstand what these systems are good at, where they fail, and how they should be introduced into real workflows. That understanding comes from practice, conversation, mistakes, and small deployments that teach you something.

The AI job loss story is mostly fear. Not because automation is harmless, but because most companies are nowhere near the level of AI adoption they imagine their competitors have.