The Superstition of Vibe Coding
In 1948, B.F. Skinner put pigeons in boxes that dispensed food at fixed intervals. The food arrived regardless of what the pigeon did. Just food on a timer.
Six of the eight pigeons developed rituals: one spun counterclockwise, another bobbed its head, others invented tics of their own. Each bird had seemingly learned that whatever it was doing when the food arrived must have caused it. Skinner called this superstitious behavior, and it became a classic demonstration of operant conditioning. The Wikipedia page is fascinating.
The box is your IDE. The pigeon is you.
Spend five minutes watching the vibe coding scene, including the current generation of SWEs. People are hooked, the way people get hooked on slot machines. Most of the time the output is wrong, mediocre, or subtly broken. But when it works, when the agent nails a refactor on the first try (one-shots it), the dopamine hit is real. Nobody stays up until 3am using a tool that works predictably. They stay up because it might work this time. That is variable reinforcement, the most addictive schedule in behavioral psychology. Casinos run on it. Social media was built on it. Now the entire AI coding industry runs on it too.
And just like Skinner's pigeons, people have invented rituals to control the output. Elaborate prompt engineering. Multi-page RFPs. System messages tuned with the reverence of an incantation. People genuinely believe these rituals determine quality.
Sometimes they do. Often they do not. In a surprising number of cases the elaborate prompt makes things worse. The tool over-constrains, contradicts itself, hallucinates requirements that were never there. The ritual is the pigeon spinning counterclockwise.
The trap is that variable reinforcement makes the rituals look like they work. You craft a perfect prompt, the output is great, and you credit the prompt. You skip it, the output is garbage, and you conclude you should have written one. You never test whether the output would have been the same either way. The food dispenser does not care which direction you are spinning.
This is operant conditioning running at industry scale. Millions of developers. Variable reinforcement. Tight feedback loops. Multiplying rituals. No control group.
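The missing control group is cheap to run yourself. A minimal sketch, assuming a hypothetical `run_agent` stub that stands in for your real coding agent and test suite: run the same batch of tasks with the elaborate prompt and with a plain one, and compare pass rates instead of trusting memory.

```python
import random

# Hypothetical stand-in: in a real experiment this would invoke your
# coding agent and check its output against a test suite. Here it
# simulates a noisy agent whose success rate is the same regardless
# of prompt style, which is exactly the null hypothesis to test.
def run_agent(task, prompt_style, rng):
    return rng.random() < 0.4  # succeeds ~40% of the time either way

def pass_rate(tasks, prompt_style, rng):
    passed = sum(run_agent(t, prompt_style, rng) for t in tasks)
    return passed / len(tasks)

rng = random.Random(0)
tasks = [f"task-{i}" for i in range(200)]

elaborate = pass_rate(tasks, "elaborate", rng)
plain = pass_rate(tasks, "plain", rng)
print(f"elaborate prompt: {elaborate:.0%}, plain prompt: {plain:.0%}")
```

With a real agent, a gap that survives a few hundred trials is evidence the ritual does something. Two numbers inside each other's noise band is the pigeon spinning counterclockwise.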
Contrast this with a hammer. A carpenter does not develop superstitious beliefs about a hammer. The feedback is immediate and deterministic. Hit the nail, it goes in. Miss, it does not. AI coding assistants are the opposite. The feedback is probabilistic and opaque. You cannot open the box and see why the food arrived. So you keep spinning.
At ChatBotKit we try to break the cycle instead of feeding it. The platform lets you build your own agents, see what they are doing, and control them through structure rather than ritual. You define tools, set boundaries, compose behaviors, and observe results. When something fails you can see why, change it, and test again.
Claude Code is a black box. You feed it prompts, receive output, and never learn why it worked or why it did not. The only strategy is to adjust the ritual and try again.
ChatBotKit was built to be transparent, and yes, it is slightly more complex as a result. But that is okay. The agent is decomposed into parts you can inspect, modify, and reason about. That is the difference between superstition and engineering.
Practical note: Using simpler prompts and observing the results can help break the cycle of superstition. Just keep in mind that the goal of AI is to augment human capabilities, not to replace them.