The AI algorithms increasingly used to diagnose and treat patients can have biases and blind spots that impede healthcare for Black and Latinx patients, according to research co-authored by a Rutgers-Newark data scientist. Fay Cobb Payton, a professor of Mathematics and Computer Science, has researched how AI technology and algorithms often rely on data that leads to generalizations about patients of color, failing to account for their cultural backgrounds and day-to-day living circumstances.

Payton, who is Special Advisor to the Chancellor on Inclusive Innovation at Rutgers-Newark, recently co-authored findings on AI and healthcare inequities for The Milbank Quarterly, which explores population health and health policy. Additional authors were Thelma C. Hurd of the Institute on Health Disparities, Equity, and the Exposome at Meharry Medical College, and Darryl B. Hood of the College of Public Health at Ohio State University.

Payton is co-founder of the Institute for Data, Research and Innovation Science (IDRIS) at Rutgers, which combines interdisciplinary research in medicine, public health, business, cultural studies, and technology. Part of its mission is to find the best ways data can be used to serve communities and to uncover the intersections of data, technology, and society across fields.

The study co-authored by Payton found that because AI developers lack diversity and Black and brown patients are underrepresented in medical research, algorithms can perpetuate false assumptions and miss the nuances that a more diverse field of developers and a broader base of patient data would provide. Healthcare providers can also play an important role in ensuring that treatment transcends the algorithm.