
Slapping nutrition labels on AI for your health

Female doctor in hospital setting. (Image: Reuters)

Scott Nover
Contributing Writer
https://x.com/ScottNover
https://www.linkedin.com/in/scottnover/
Doctors use AI to help make diagnoses, but machines can’t take the Hippocratic Oath. So how can Washington ensure AI does no harm? The US Department of Health and Human Services is on the case: It’s proposing “nutrition labels” to bring transparency to healthcare-related AI tools.

At a congressional hearing last week, Rep. Cathy McMorris Rodgers (R-WA) noted how AI can help detect deadly diseases early, improve medical imaging, and clear cumbersome paperwork from doctors’ desks. But she also expressed concern that it could exacerbate bias and discrimination in healthcare.

Patients need to know who, or what, is behind their healthcare determinations and treatment plans. That requires transparency, a key tenet of the Biden administration’s Blueprint for an AI Bill of Rights, released last year.

The new rule, first proposed in April by HHS’s health information technology office, would require developers to publish information about how AI healthcare tools were trained and how they should and shouldn’t be used. The rule, which could be finalized before January, aims to improve both transparency and accountability.