AI holds great promise for health care, and Trump’s action plan is a good first step

Physicians understand the transformative potential of AI to improve care delivery and patient outcomes. Realizing that promise will require physician-led innovation and thoughtful decisions made today to create the future of medicine we all want.

The impact of health care technology hinges on the reliability of its underlying systems, whether algorithms, devices, or data, and how thoughtfully it is designed, deployed, and integrated. And for physicians to adopt innovation in health care, it must meet four key criteria: 

  • Demonstrated effectiveness in real-world clinical settings; 
  • Clear value for patients, physicians and the broader health system; 
  • Clear and appropriately allocated liability frameworks; 
  • Seamless integration into clinical workflows. 

The Trump administration’s AI Action Plan, released last week, is an exciting and welcome development. It evinces close attention to building public and professional trust in AI technology through transparent and ethical oversight, and to accelerating national standards for safety, performance and interoperability.

At the American Medical Association, we also believe in the importance of a broad, well-coordinated federal regulatory approach to AI design and integration, and in the commitment to upskilling the clinician workforce. However, to ensure AI in health care reaches its full potential, strong physician representation must be present at every stage.

Our surveys on physician sentiments about augmented intelligence in health care show growing enthusiasm for the technology’s applications in the clinical world, especially in its potential to streamline workflows and create greater practice efficiencies. Still, two in five physicians remain equally excited and concerned about health AI. 

Truly building trust, on the part of both patients and physicians, requires additional considerations that stand to enhance the action plan’s impact. Specifically, four key areas of opportunity include: 1) Physicians must be full partners at every stage of the AI lifecycle; 2) A coordinated, transparent whole-of-government approach is necessary; 3) Secure data that is free from bias will enhance trust; and 4) Frameworks must appropriately apportion liability.

Involving physicians in every stage of the AI lifecycle means doctors are full partners in design, development, governance, rulemaking, post-market surveillance and clinical integration. Physician experts are uniquely qualified to judge whether an AI tool is valid, fits within the standard of care and supports the patient-physician relationship.

Health care AI is unique in that its risks to the health and well-being of our patients are potentially high. Providing clarity and consistency for developers, deployers and end users (patients and physicians) is essential, and that will only come from a whole-of-government approach that includes states. State and federal policymakers should work in coordination to avoid fragmentation that would stifle innovation. This approach will prioritize safety, accountability and public confidence in AI systems.

Trust in AI begins with trust in how data is used. Physicians and patients need assurances that the data powering AI tools is secure, de-identified, free of bias and governed by strong consent frameworks. We need comprehensive privacy protections and governance structures that ensure patients understand how their data is used and can control it. Additionally, bias in AI can cause real patient harm. Eliminating references to misinformation or diversity, equity and inclusion in risk frameworks may hinder our ability to address these issues. Efforts to mitigate bias must remain a cornerstone of any ethical AI strategy.

Finally, concern over AI liability is a top issue for physicians. To further build trust and advance adoption, it will be imperative to ensure there are frameworks in place that protect physicians and appropriately apportion liability for AI errors and performance issues.

AI isn’t just the future of health care; it is very much the present. The government’s new AI Action Plan is an encouraging step forward to bring some of these issues to the forefront while AI health technology is still in its infancy. As physicians, we have an opportunity, and a responsibility, to work today to ensure that AI transforms health care and does not merely automate inefficiencies. We are excited to build on that momentum and work together to create a future where innovation enhances every patient encounter and augments every physician’s care.

John Whyte, MD, MPH, is CEO and executive vice president at the American Medical Association. Margaret Lozovatsky, MD, is the AMA’s chief medical information officer and vice president of Digital Health Innovations.