Trust, Tech, and the Psychology of Decision-Making
As artificial intelligence (AI) becomes more ingrained in our everyday lives—from chatbots and recommendation engines to tools for healthcare, finance, and education—there’s one subtle but crucial question we’re only beginning to understand: how does the behavior of AI affect the humans using it?
One particularly compelling angle has to do with confidence—specifically, the confidence that an AI system appears to have in its own predictions and recommendations. When an AI seems sure of itself, how does that influence our own sense of certainty? Do we become more confident—or more doubtful—in our own judgments?
This question is at the heart of a recent study from NUS Computing’s AI for Social Good Lab (AI4SG Lab), headed by Assistant Professor Lee Yi-Chieh. The study was led by the Lab’s PhD student Li Jingshu, with contributions from Microsoft researcher Liao Vera (Qingzi) and PhD students Yang Yitian and Zhang Junti, whose work is driving new insights into creating smarter, more intuitive decision-making tools. Their paper, “As Confidence Aligns: Exploring the Effect of AI Confidence on Human Self-Confidence in Human-AI Decision Making,” recently received an Honourable Mention at the ACM Conference on Human Factors in Computing Systems (CHI 2025)—an award given to the top 5% of papers submitted to the prestigious international conference.
What they found was as fascinating as it was unexpected: when AI expresses a high (or low) level of confidence, that confidence rubs off on us. And not only in the moment—it can continue to influence our decision-making even after the AI is no longer involved.
The Study: Understanding Confidence Alignment
The AI4SG Lab researchers wanted to investigate what they called “confidence alignment”—the phenomenon where a human user’s confidence begins to mirror that of the AI they’re working with. To explore this, they designed a behavioral experiment involving over 270 participants. The task? Predict whether individuals earned more or less than $50,000 annually, based on profile data like age, education, and occupation. It was a complex judgment call, the kind of thing that benefits from both human intuition and computational analysis.
In the first phase, participants made predictions on their own. This established a baseline for how confident they were in their decisions without AI assistance. In the second phase, they were shown predictions from a machine learning model along with a confidence score; for example, “I’m 80% sure this person earns more than $50,000”. Participants were then asked to make their own prediction, knowing the AI’s estimate and how sure it was.
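To make that setup concrete, here is a minimal sketch of how a model might generate the kind of prediction-plus-confidence message participants saw. The classifier, feature names, and toy training data below are illustrative assumptions for this article, not the study’s actual pipeline.

```python
# Minimal, illustrative sketch of producing a prediction plus a confidence
# score for the income task. The model, features, and toy data are
# assumptions for this article, not the study's actual pipeline.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical training profiles with a binary label
# (1 = earns more than $50,000 a year, 0 = earns less).
train = pd.DataFrame({
    "age":        [25, 47, 38, 52, 29, 61],
    "education":  ["HS", "Masters", "Bachelors", "PhD", "HS", "Bachelors"],
    "occupation": ["service", "tech", "admin", "exec", "service", "sales"],
    "label":      [0, 1, 0, 1, 0, 1],
})

# One-hot encode the categorical columns, pass the numeric 'age' through.
preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["education", "occupation"])],
    remainder="passthrough",
)
model = make_pipeline(preprocess, LogisticRegression())
model.fit(train[["age", "education", "occupation"]], train["label"])

# A new profile to judge, plus the AI's stated confidence in its prediction.
profile = pd.DataFrame({"age": [45], "education": ["Masters"], "occupation": ["tech"]})
p_low, p_high = model.predict_proba(profile)[0]   # P(<= $50k), P(> $50k)
prediction = "more" if p_high >= 0.5 else "less"
confidence = max(p_low, p_high)                   # confidence in the predicted class

print(f"I'm {confidence:.0%} sure this person earns {prediction} than $50,000.")
```

In the study’s second phase, participants saw a message of roughly this form alongside the AI’s prediction before making their own call.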
Here’s where things got interesting: participants’ own self-confidence began to drift toward the AI’s confidence level. Those who had started out unsure of themselves grew more confident if the AI was certain. Meanwhile, even highly confident individuals dialed back their self-assurance when the AI displayed hesitation.
This wasn’t just a fleeting effect. In the final phase, when participants went back to working solo, their confidence remained influenced by what the AI had previously shown. Even in its absence, the AI had left an impression—a psychological residue of sorts.
Why Feedback Matters
To understand how to mitigate or even counterbalance this effect, the researchers tested another variable: feedback. One group of participants received immediate feedback on whether their predictions were correct. Another group received none.
The difference was clear. Those who received feedback were less susceptible to confidence alignment. In other words, when people had direct, factual data on how well they were doing, they were less likely to let the AI’s confidence sway their own.
This highlights an important design principle for AI systems: incorporating feedback loops may help users stay grounded in their own abilities, avoiding the pitfall of blindly trusting or doubting themselves based solely on what the AI says.
Does Collaboration Style Make a Difference?
You might expect that different collaboration models—where the AI acts as an advisor, peer, or even decision-maker—would result in varying levels of confidence alignment. Strangely, that wasn’t the case. Whether people were merely consulting with the AI or watching it take the lead, the alignment effect persisted. It wasn’t about control—it was about exposure. Just seeing the AI’s confidence was enough to influence participants’ own certainty.
This suggests that AI designers can’t rely solely on interface structure to manage the influence of AI confidence. It’s not just about who’s in charge. It’s about how confident the system appears to be—and how that confidence is communicated.
When Confidence Becomes Misleading
One of the most revealing aspects of the study was the impact of confidence alignment on decision-making accuracy. Ideally, higher confidence should align with better performance—a concept known as “confidence calibration.” But the study found that confidence alignment often made people less calibrated.
For example, participants who were initially overconfident and then worked with a very confident AI often became even more sure of themselves—even though their predictions weren’t necessarily improving. The AI’s apparent certainty inflated their own, widening the gap between how confident they felt and how accurate they actually were. On the flip side, underconfident participants sometimes benefited from the AI’s input, settling on a more realistic level of confidence that better matched their actual performance.
This duality reveals both the promise and the peril of AI collaboration. While AI confidence can sometimes help us judge ourselves more accurately, it can just as easily lead to misplaced trust or unnecessary doubt.
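To make the calibration idea concrete, here is a small illustrative sketch in Python. It measures a simple gap between average stated confidence and actual accuracy; this is one way to express the concept, not the paper’s exact measure, and the numbers are made up for illustration.

```python
# Illustrative sketch (not the paper's exact metric): a simple "calibration
# gap", mean stated confidence minus actual accuracy. A positive gap signals
# overconfidence; a negative gap signals underconfidence.

def calibration_gap(confidences, correct):
    """confidences: stated confidence per decision, each in [0, 1];
    correct: 1 if that decision turned out right, else 0."""
    mean_confidence = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_confidence - accuracy

# A hypothetical participant whose accuracy stays at 50%, but whose stated
# confidence rises after working with a very confident AI:
before = calibration_gap([0.60, 0.65, 0.55, 0.60], [1, 0, 1, 0])  # 0.10
after  = calibration_gap([0.85, 0.90, 0.80, 0.85], [1, 0, 1, 0])  # 0.35

print(f"calibration gap before: {before:+.2f}, after: {after:+.2f}")
```

In this toy example the participant’s accuracy never changes, yet the gap between confidence and performance more than triples—the kind of worsening calibration the study describes for overconfident users paired with a confident AI.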
The Psychology of Influence
The study touches on deeper psychological dynamics. When we work with confident individuals—whether people or machines—we naturally tend to defer. This is a well-documented phenomenon in social psychology: confident voices are often perceived as more credible, regardless of their actual correctness. The study’s findings show that AI is not exempt from this effect. In fact, its aura of “objective authority” may make its influence even more powerful.
What’s particularly striking is how easily this happens, and how persistently it lingers. Even after the AI is out of the picture, we may continue to judge ourselves differently, guided by the shadow of its confidence.
Future Possibilities: Calibrating AI for Good
While this alignment can be problematic, it raises an interesting follow-up question: could AI be intentionally designed to help users become better calibrated over time?
Imagine an AI that modulates its confidence to gently nudge underconfident users toward greater self-trust, or to rein in those who are prone to overconfidence. Instead of being a passive tool, the AI becomes an active coach, helping users fine-tune their own decision-making.
This would require a careful balancing act. AI would need to be accurate not just in its predictions, but in understanding the user’s psychology. It’s a tall order—but a tantalizing one.
The Subtle Power of Influence
With new knowledge comes new responsibility. The confidence alignment effect observed in this study can be used to help—but it can also mislead. Overly confident AI systems, if left unchecked, may foster misplaced trust, especially in high-stakes contexts like finance, medicine, or justice. And as AI grows more expressive and persuasive—thanks to natural language models and emotionally intelligent interfaces—the risk of over-reliance or undue influence increases.
This research, while optimistic in tone, serves as a subtle warning. Designers must consider not just what the AI knows, but how it makes users feel about their own knowledge.
Rethinking Confidence in a Machine Age
In a world increasingly shaped by our interactions with AI, the researchers at AI4SG Lab are pushing us to ask deeper questions—not just about what AI does, but about what it does to us. Their findings show that AI’s confidence isn’t just an internal statistic. It’s a form of social communication—one that humans instinctively respond to, often unconsciously. That response can help or hinder us, depending on how well our confidence aligns with reality.
By illuminating this dynamic, the study offers more than just a new wrinkle in human-AI interaction. It points the way toward more thoughtful, more ethical, and ultimately more human-centered AI design—where building better tools includes building better humans, too.
As AI systems become increasingly confident, we must ask ourselves: are we becoming better decision-makers, or simply better followers?
The answer, like most human-AI relationships, will depend on how we choose to design—and to decide.
Further Reading: Li, J., Yang, Y., Liao, Q.V., Zhang, J. and Lee, Y.-C. (2025). “As Confidence Aligns: Exploring the Effect of AI Confidence on Human Self-confidence in Human-AI Decision Making.” In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’25).