Study says AI chatbots churn out 'racist' medical information

Artificial intelligence chatbots were found to return debunked medical stereotypes about Black people, according to the results of a Stanford University study.

Researchers at Stanford University ran nine medical questions through AI chatbots and found that the responses contained debunked medical claims about Black people, including incorrect answers about kidney function and lung capacity, as well as the false notion that Black people have different muscle mass than White people, according to a report from Axios.

The team of researchers ran the nine questions through four chatbots, including OpenAI's ChatGPT and Google's Bard, which generate responses from models trained on large amounts of internet text, the report noted. The answers raised concerns about the growing use of AI in the medical field.

"There are very real-world consequences to getting this wrong that can impact health disparities," Stanford University assistant professor Roxana Daneshjou, who served as an adviser on the paper, told the Associated Press. "We are trying to have those tropes removed from medicine, so the regurgitation of that is deeply concerning."

William Jacobson, a Cornell University law professor and the founder of the Equal Protection Project, told Fox News Digital that the intrusion of immaterial racial factors into medical decision-making has long been a concern, one that could worsen with the spread of AI.

"We have seen DEI and critical race ideology inject negative stereotypes into medical education and care based on ideological activism," Jacobson said. "AI holds out the potential of assisting in medical education and care that is focused on the individual. AI should never be the only source of information, and we would not want to see AI politicized by manipulating the inputs."

Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation, told Fox News Digital that AI models are not inherently "racist" but noted that they can return biased information depending on the data sets they draw on.

"This is a perfect example of 'Pillar 3' of regulation that has to be managed for AI," Siegel said. "Pillar 3 is 'ensuring fairness' – to not allow current biases get hard-coded in the datasets and models that would cause unfair prejudice in areas such as health care, hiring, financial services, commerce and services. Obviously, some of that is occurring today."

Neither Google nor OpenAI immediately responded to a Fox News request for comment.
