New AI Suggestibility Score shows how artificial intelligence decides which experts to elevate

Artificial intelligence has become the primary gatekeeper for who appears as an expert online, who is considered credible, and who is quietly ignored. In response to this shift, Dr. Tamara “Tami” Patzer has introduced the AI Suggestibility Score™, a new metric designed to measure how likely it is that an AI system will select, elevate, and trust a specific professional.

The AI Suggestibility Score™ is a central component of Patzer’s broader FirstAnswer Authority System™, a framework that explains how modern AI models evaluate identity rather than just content.

“AI no longer acts like a neutral index of information,” Patzer said. “It evaluates identity. Suggestibility has become the new visibility. If AI does not find you suggestible, it does not select you, no matter how good your work is.”

The score examines machine-readable identity signals, cross-platform consistency, corroborated credentials, and trust patterns to determine whether AI systems are likely to treat a given expert as a reliable source.

The launch of the AI Suggestibility Score™ comes at a time when journalism organizations and AI platforms are both focused on identity, verification, and trust. In 2025, the Poynter Institute, Columbia Journalism Review, Nieman Lab at Harvard, the International Fact-Checking Network, the American Press Institute, the Trust Project, the News Literacy Project, the Knight Foundation, the Reuters Institute for the Study of Journalism at Oxford, and UNESCO’s media integrity programs all highlighted the growing risk of identity confusion and misattribution in an AI-driven information ecosystem.

At the same time, major AI systems have increased their reliance on structured identity and authority signals. Search and conversational platforms now place more weight on which person or organization appears to be the most stable, visible, and corroborated entity associated with a name or topic.

“Journalism is tightening its standards for identity and sourcing at the same time AI systems are tightening theirs,” Patzer said. “The people who do not have a clear, machine-readable identity are the ones who disappear first.”

A key risk the AI Suggestibility Score™ surfaces is what Patzer calls Identity Collision™, a phenomenon in which AI confuses two people who share a similar or identical name. In those cases, the system often defaults to the better-known or more frequently indexed individual.

For example, an author releasing a new book may share a name with a well-known actor. When someone searches that name, AI may highlight the actorโ€™s biography, credits, and interviews, while the author and their work remain effectively invisible unless a user knows additional details to narrow the search.

“When all someone knows is your first and last name, AI tends to default to the most famous or most saturated version of that identity,” Patzer said. “Your name alone used to be enough for people to find you. In an AI-filtered world, that is no longer guaranteed.”

Patzer’s AI Reality Check™ diagnostic incorporates the AI Suggestibility Score™, an Identity Collision Risk Score™, and other proprietary measures to show professionals how AI currently interprets them and whether the system is likely to recommend them, ignore them, or confuse them with someone else. The framework is designed for doctors, executives, authors, consultants, and other experts whose work depends on being accurately recognized and surfaced in digital environments.

“Experts assume that because they exist, they are visible,” Patzer said. “What we are seeing in 2025 is that visibility is no longer automatic. It has to be engineered.”

Dr. Patzer describes her work as AI Identity Engineering™, an emerging discipline that brings together AI behavior, journalism ethics, and digital trust. Her FirstAnswer Authority System™ is built to help the right experts become the first answer AI delivers in their field, while aligning with the identity and integrity standards promoted by leading journalism and media organizations.

About Dr. Tamara Patzer

Dr. Tamara โ€œTamiโ€ Patzer is a Pulitzer Prizeโ€“nominated journalist and the founder of AI Identity Engineeringโ„ข and the FirstAnswer Authority Systemโ„ข. Her work sits at the intersection of AI visibility, expert verification, and journalism ethics. She is the creator of the AI Reality Checkโ„ข, Identity Collisionโ„ข, the AI Suggestibility Scoreโ„ข, and a suite of visibility metrics designed for professionals, corporations, and institutions that depend on accurate digital recognition.

LinkedIn: https://www.linkedin.com/in/tamarapatzer/
