Generative AI could be game-changing for the world of medicine. It could help researchers discover new drugs and better match ailing patients with correct diagnoses.
But the World Health Organization is concerned about everything that could go wrong. The global health authority is formally warning countries to monitor and evaluate large language models for medical and health-related risks.
“The very last thing that we want to see happen as part of this leap forward with technology is the propagation or amplification of inequities and biases in the social fabric of countries around the world,” said WHO official Alain Labrique. This advice was issued as part of a larger guidance on AI in healthcare, a topic on which the WHO began advising in 2021.
Artificial intelligence systems are susceptible to bias because the inclusion or absence of data can seriously affect their outputs. For example, if a medical AI model is trained solely on health data from people in wealthy nations, it could miss or misrepresent populations in poorer nations and do harm if used improperly.