Why Our Minds Sometimes Say No — Even When AI Is Right

Cambridge, MA, June 12, 2025 (GLOBE NEWSWIRE) -- Would you trust an AI doctor to diagnose skin cancer — even if it’s more accurate than a human doctor?

A new study from the MIT Sloan School of Management sheds light on a paradox: despite AI's growing accuracy and efficiency, people often prefer human decisions, even when AI demonstrably performs better. Yet in other contexts, such as forecasting stock trends, people readily turn to AI over human experts. What explains this inconsistency?

The research paper, titled “AI Aversion or Appreciation? A Capability–Personalization Framework and a Meta-Analytic Review,” was published in Psychological Bulletin. The authors are MIT Sloan associate professor Jackson G. Lu; professor Xin Qin, associate professor Chen Chen, doctoral students Hansen Zhou and Xiaowei Dong, and postdoctoral fellow Limei Cao from Sun Yat-sen University; Shenzhen University postdoctoral fellow Xiang Zhou; and Fudan University associate professor Dongyuan Wu.

The researchers conducted a meta-analysis of 163 studies involving more than 80,000 participants. They proposed a new theory, the Capability–Personalization Framework, which holds that individuals weigh two key dimensions when deciding whether to rely on AI or on humans in a given decision context:

  1. Perceived capability of AI: Is AI perceived as more capable than humans in this decision context?
  2. Perceived necessity for personalization: Is personalization perceived as necessary in this decision context?

Results show that, in a given decision context, people are more likely to prefer AI when AI is perceived as more capable than humans and personalization is deemed unnecessary. But when either of these conditions is not met, AI aversion emerges.
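
To make the framework's two conditions concrete, here is a minimal illustrative sketch (not code from the study) that expresses its prediction as a simple decision rule. The function name and the binary yes/no treatment of both dimensions are assumptions made for this example, not the authors' actual measures.

```python
# Illustrative sketch of the Capability-Personalization Framework's prediction.
# Both dimensions are treated as simple booleans here for clarity; the study
# itself treats them as perceptions that vary by person and context.

def predicted_response(ai_more_capable: bool, personalization_necessary: bool) -> str:
    """Predict AI appreciation vs. aversion for a given decision context."""
    if ai_more_capable and not personalization_necessary:
        return "AI appreciation"  # both conditions met: people lean toward AI
    return "AI aversion"          # either condition unmet: people lean toward humans

# Example contexts mentioned in this release:
print(predicted_response(True, False))  # stock trend forecasting -> AI appreciation
print(predicted_response(True, True))   # skin cancer diagnosis   -> AI aversion
```

The binary rule above captures only the framework's directional prediction; in the paper, both dimensions are perceptions measured along a continuum.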

“People don’t simply love or hate AI,” said Lu. “Their response depends on whether AI fits both their utilitarian need to get the job done effectively and their psychological need to be recognized as a unique individual.”

For example, even when an AI system proves more accurate at identifying skin cancer from medical images, patients often still prefer human doctors — because they feel medical decisions require understanding of their unique circumstances.

The meta-analysis also identified key moderators of AI preference. People were more likely to appreciate AI when it was physically tangible (e.g., a service robot in a restaurant) rather than an intangible algorithm, when outcomes were attitudinal rather than behavioral, and in countries with lower unemployment. AI aversion, meanwhile, was more pronounced in countries with higher levels of education and internet use.

“Understanding people’s mindset toward AI is just as important as improving the technology itself,” said Qin. “For AI to be trusted and more widely adopted, developers must consider not only how capable it is, but also how well it aligns with users’ psychological needs.”

This research provides valuable guidance for developers and policymakers, encouraging them to go beyond technical optimization and consider how human psychology shapes people’s attitudes toward AI.

“Maximizing AI’s potential means understanding when it’s welcome — and when it’s not,” said Lu. “Only by addressing both capability and personalization can we move toward meaningful human-AI collaboration.”

Matthew Aliberti
MIT Sloan School of Management
781-558-3436
malib@mit.edu