
The WHO’s AI warning

Illustration of a female healthcare worker wearing scrubs and a surgical mask created with Generative AI technology.

IMAGO/Nedrofly Stock via Reuters Connect
Scott Nover, Contributing Writer
https://x.com/ScottNover
https://www.linkedin.com/in/scottnover/

Generative AI could be game-changing for medicine. It could help researchers discover new drugs and help match ailing patients with the correct diagnoses.

But the World Health Organization is concerned about everything that could go wrong. The global health authority is formally warning countries to monitor and evaluate large language models for medical and health-related risks.


“The very last thing that we want to see happen as part of this leap forward with technology is the propagation or amplification of inequities and biases in the social fabric of countries around the world,” said WHO official Alain Labrique. The warning was issued as part of broader WHO guidance on AI in healthcare, a topic on which the organization began advising in 2021.

Artificial intelligence systems are susceptible to bias because the data included in, or missing from, their training can seriously skew their outputs. For example, if a medical AI model is trained solely on health data from people in wealthy nations, it could miss or misunderstand populations in poorer nations and cause harm if used improperly.