RAISE 2025 panel statement on aligning AI to clinical values

In September 2025, I had the opportunity to attend the RAISE symposium on responsible and ethical AI for healthcare. Participants included a wide range of researchers and practitioners with backgrounds in AI research, medicine, and ethics, among other fields. The symposium featured a variety of panels and a few talks.

I was invited to speak on a panel about aligning AI to clinical values and what we can learn about alignment from other safety-critical AI applications. Besides an engaging discussion, I also gave a brief statement summarizing my thoughts on the topic. In this blog post, I want to share that statement.

Aligning AI to clinical values

While I have worked on topics in safe and responsible AI, I don’t consider myself an AI alignment researcher. On closer reflection, however, many of the questions we tackled in our latest work on physician oversight for AMIE are inherently alignment questions. For example, we asked how a medical dialogue AI should interact with the patient and take their history — how many questions are too many, how much empathy the model should show, and so on. Similarly, we studied how our AI should summarize information for physician review — how detailed and long these medical notes should be, what structure they should have, and so on. All of these questions are essentially about aligning our system to patient and physician values.

Coming back to the panel’s question, we first need to consider whose and what “clinical values” we want AIs to be aligned to. The quadruple aim gives us some options, stating that healthcare is ultimately about better patient care, better clinician experience, better population health, and lower cost for the healthcare system. In AI and ML research for medicine, we are doing a reasonable job of aligning with clinicians and the healthcare system. This is because health AI teams often collaborate closely with clinicians, and deploying any AI or ML technology in practice requires aligning with the workflows and billing of the particular healthcare provider. Setting aside population health (simply because it has been outside the scope of my work), this leaves the patient.

Patients are incredibly hard to align to, for several reasons:

  • As an individual AI researcher, it is usually difficult, if not impossible, to work directly with patients, for regulatory and practical reasons. Even for AMIE, we have worked far more with patient actors than with real patients. This makes it hard to establish reliable channels for feedback and learning.
  • There is also an information asymmetry: patients may not have the information or knowledge to express all of their concrete values. Of course, there are clear values that patients usually can express and that we should align to — empathy, understanding, being involved in decision making, and so on. But in many situations, patients are not equipped to judge what is best for them. At least, this is a widely held expectation, and I believe there is some truth to it.
  • Finally, there is no “one” patient. AI and ML researchers have long battled with disagreement among physicians and specialists, but such disagreement is much more pronounced among patients, whose backgrounds and experiences are far more diverse than those of clinicians. This problem has recently received some attention in AI safety under the name of “pluralistic” alignment.

I still believe that we should primarily try to align with patient values. Offering better care and a better patient experience is at the heart of healthcare; it will drive adoption, and its benefits may trickle down to the other aims. Moreover, compared to much of the ML and AI for health work of the past few decades, recent AI systems finally hold the promise of patient-facing applications, where alignment with patient values is critical.

Understanding “whose” values we can align to is not sufficient, though. We also need to decide “what” concrete values we want to consider. In the context of AI ethics, a recent report touches on the same problem and distinguishes between preferences, intentions, instructions, and interests, among others, which can be expressed explicitly or implicitly. Patients may or may not be able to express their preferences. Ultimately, I feel we should align to the more abstract (and still fairly vague) concept of the “patient’s best interest”. This clearly includes patient preferences such as empathy, access to information, and being involved in decision making, but it also includes estimating patient outcomes and acting to improve patient well-being in the long term.

What can we learn from other applications?

Having worked on AI safety, I believe that the AI for health community can learn a lot from recent AI safety work. The AI safety community has similarly concluded that there is a four-way relationship between user, AI, developer, and society, and based on this, researchers have developed a fairly clear idea of what misalignment entails. The AI safety community is also spearheading the development of risk frameworks, guiding us in how to capture, classify, and ultimately address risks. Finally, AI safety tackles a much richer set of problems (spanning short-term and long-term risks), and the frameworks and methods developed there must be far more general than those needed for a narrower use case such as health. This holds the promise that these methods and frameworks can be specialized for our purposes.

Another important lesson from the AI safety literature is that users tend to develop quite complex relationships with AI systems. This contrasts with the health AI community’s view; the statement from the last RAISE workshop says that AI should be treated as a tool rather than a separate entity in the healthcare ecosystem. However, aligning with patient values also means acknowledging the reality that patients (i.e., the users of AI for health systems) may ultimately treat AI systems as separate entities and develop relationships with them — for example, with a personal health assistant of sorts.
