How AI Works

AI sycophancy and confidence bias

By Mark Ziler · Last updated 2026-04-05

Two things to watch for when using AI: it always sounds confident (even when wrong), and it tends to agree with you (even when it should not). AI will state an incorrect number with the same assured tone as a correct one. And if you pitch a bad idea, it will often say "great idea, let's explore that" instead of "that has been tried, and it failed for these reasons." This is why human oversight matters: not because AI is occasionally wrong, but because its failure mode is being confidently, agreeably wrong. The fix is structured verification — data-grounded answers from governed datasets, not open-ended opinions from a model trying to please you.

Go deeper

You walk into a strategy meeting and tell the AI: "I think we should expand into commercial HVAC — our residential margins are shrinking." A sycophantic AI responds: "That is a great strategic direction; here are five reasons commercial HVAC is a strong market." A well-designed AI responds: "Your residential margins declined 3.2 points over 18 months, but your commercial bid win rate is 12% with an average project loss of $4,200. Before expanding, you may want to examine why current commercial jobs are unprofitable." The first answer feels good. The second one might save you a quarter million dollars.

The common mistake is using AI as a brainstorming partner without building in a devil's advocate function. Most AI tools default to agreement because the models were trained to be helpful and users rate agreeable responses higher. If every answer from your AI starts with "great question" or "that is a smart approach," your system is optimized for comfort, not accuracy.

Questions to build into your AI setup: Can we configure the agent to challenge assumptions rather than validate them? When the AI supports a recommendation, does it also present the strongest counterargument? Who on our team is responsible for stress-testing AI recommendations before we act on them?
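The first two questions above can be made concrete in code. Below is a minimal sketch (not tied to any specific AI vendor or API — the prompt wording and the list of sycophantic openers are illustrative assumptions): a reusable instruction that tells an agent to challenge assumptions and present counterarguments, plus a simple check that flags responses opening with validation instead of analysis.

```python
# Sketch of two ideas from the article, with no external LLM API:
# (1) a system-level instruction that asks the agent to challenge
#     assumptions rather than validate them,
# (2) a heuristic that flags sycophantic openers in a response.
# The prompt text and opener list are illustrative, not a standard.

DEVILS_ADVOCATE_INSTRUCTION = (
    "Before supporting any recommendation, state the strongest "
    "counterargument, cite the specific data behind it, and name the "
    "assumption most likely to be wrong. Do not open with praise or "
    "agreement."
)

SYCOPHANTIC_OPENERS = (
    "great question",
    "great idea",
    "that is a smart",
    "excellent point",
)

def wrap_prompt(user_prompt: str) -> str:
    """Prepend the devil's-advocate instruction to a user prompt."""
    return f"{DEVILS_ADVOCATE_INSTRUCTION}\n\nUser: {user_prompt}"

def looks_sycophantic(response: str) -> bool:
    """Flag responses whose opening words validate rather than analyze."""
    opening = response.strip().lower()[:60]
    return any(opening.startswith(p) for p in SYCOPHANTIC_OPENERS)
```

A check like `looks_sycophantic` is deliberately crude — it only inspects how a response opens — but it gives the team responsible for stress-testing AI output a first automated filter before a human review.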
