AI and Human Values

Type: research
Area: AI, Medical
Published (Year/Month): 2405
Source: https://www.nejm.org/doi/full/10.1056/NEJMra2214183
Tag: newsletter
Checkbox:
Date (of entry):

The data used to train AI models often encode societal values, and these values can become embedded in the models themselves, shaping their outputs. This phenomenon is evident across applications from medical imaging to healthcare resource allocation. Bias in training data can amplify existing societal biases, but AI can also help reduce disparities, as demonstrated by a study that used deep-learning models to account for unexplained pain disparities in knee radiographs.

The rise of generative AI and large language models (LLMs) has heightened concerns about their use in medicine because of their propensity for confabulation and factual inaccuracy. The "human values" embedded in AI models pose a distinct challenge: even models free of obvious bias may encode values that diverge from human goals and standards. The integration of probabilistic models into medical decision-making has always required value judgments from their creators. This article explores the intersection of value judgments and predictive models, argues that understanding an individual patient's values and risks is essential for the thoughtful clinician, and identifies future challenges and opportunities in the design of AI models.