Every time someone says…
- Torsten Steiner
- Jul 10
- 2 min read
The real issue often lies with our expectations, not the model.

Many newcomers watch a glossy demonstration and assume that a language model will deliver instant, flawless insight. Moments later they try it themselves, notice a glaring error, and declare that “artificial intelligence is not there yet”. Their disappointment is understandable, but the gap is frequently of our own making.
How frustration takes hold
Most users arrive with cinema-trailer expectations. They ask a broad, loosely phrased question and receive a broad, loosely useful answer. The reply looks confident, so they treat every line as fact; when the first mistake appears, trust vanishes.
Beyond that, good results require a small amount of craft. A sharply focused question, a hint of context, and a clear request for format can turn the same model into a genuinely helpful assistant. Nobody tells beginners this. They paste a long email, hope for magic, and get a summary that ignores the only paragraph that mattered.
The model also lacks common sense about a firm’s world. It knows nothing of deadlines, risk appetite, or house style unless we supply that information first. Because training is often an afterthought, the first disappointment becomes the last: users walk away muttering that the technology has been over-sold.
Narrowing the gap
Set realistic expectations from the outset. In every rollout, display an answer that is wrong, then show how a better prompt repairs it. Colleagues quickly grasp that the tool is a co-pilot, not an autopilot.
Teach simple prompting habits. Short, specific questions work best. Maintain a shared collection of examples so that no colleague starts from an empty page.
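Those habits can be captured in something as small as a shared prompt template. The sketch below is illustrative only: the function name and fields are not from any real tool, just one way to make "question plus context plus format" a reusable starting point instead of an empty page.

```python
# Minimal sketch of a prompt template that bakes in the habits above:
# a focused question, a hint of context, and an explicit format request.
# Names and fields are hypothetical, chosen for illustration.

def build_prompt(question: str, context: str, output_format: str) -> str:
    """Combine a focused question with context and a format request."""
    return (
        f"Context: {context}\n"
        f"Task: {question}\n"
        f"Answer format: {output_format}"
    )

# A vague request vs. the same request with a little craft applied.
vague = "Summarise this email."
focused = build_prompt(
    question="Summarise only the paragraph about the contract deadline.",
    context="Email from a supplier negotiating delivery terms.",
    output_format="Three bullet points, plain language.",
)
print(focused)
```

A shared file of such templates, one per recurring task, gives every colleague a proven starting point rather than a blank prompt box.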
Provide your own data before judging performance. Connecting past transactions, style guides, or risk rules turns a generic chatbot into a specialist assistant.
Finally, measure irritation, not only usage. Regularly ask teams what the model mis-handled this week and feed those stories into refresher sessions or technical adjustments.
A realistic takeaway
AI already drafts faster than we can type and spots patterns we overlook, yet it is still a bright junior analyst: full of promise, prone to overconfidence, always better with guidance. The technology will continue to improve, but until it does, pairing every new model with honest training and a clear understanding of its limits is the best remedy for frustration. When we meet the tool halfway, the help grows and the hype fades.