Artificial intelligence (AI) is reshaping health care, including how medications are used. On November 13, 2025, PQA hosted PQA Convenes: Artificial Intelligence in Medication Use Quality to bring together PQA members, health technology leaders, and other stakeholders for a discussion on how AI and machine learning are being used to identify, understand, and engage patients in their medication use.
This is the final blog in a four-part series on the event, and it covers the closing session, “Ensuring Safe, Responsible and Patient-Centered Use of AI.” Moderator Randall Rutta of the National Health Council was joined by Laura Adams from the National Academy of Medicine, Samantha Burch from AHIP and Larry Holden from the Global Liver Institute.
The panel explored how artificial intelligence can be used while minimizing risks and harms to patients, their data, their care, and their outcomes. Panelists also discussed how AI can expand access and improve medication use while ensuring that technology strengthens, rather than weakens, provider relationships with patients.
Given the wide-ranging interactive panel discussion, no quotes or views are attributed to any panelists or organizations. The perspectives shared via this blog are intended to support continued dialogue about the role of AI in medication use quality.
The Promise of AI
Across all perspectives, AI is viewed as a powerful tool that can expand access, support clinical decision making, and help patients become more engaged in their own care. AI also has the potential to create more efficient processes, such as streamlining the prior authorization workflow.
AI is one of the most transformative forces in health care, offering patients tools that empower them to take charge of their medication use and treatment decisions. This opens opportunities for innovation, such as early disease detection, and technology can help identify patients more effectively and deliver care more directly and cost effectively.
Panelists identified three areas of emerging AI use: consumer-facing tools, clinical applications, and administrative functions. The potential impact is exciting, as AI can help deliver the right information at the right time and connect patients with the care they need.
The Need for Guardrails
The panel stressed that AI must be implemented with strong protective oversight to ensure security and trust. Setting guardrails is essential to ensure that AI systems support patients rather than replace human judgment, thereby addressing key concerns about transparency and accountability.
By establishing clear boundaries, we can protect patients’ privacy and safety while allowing AI to be used to its full potential. Clear communication and ongoing education can build the transparency and trust needed to ensure patients understand that AI is a support for, not a substitute for, the care they receive from providers.
The broader health care community needs to approach the potential of AI with intentionality about its impact on patients. AI is a valuable tool, but like any tool, it must be used safely and effectively, with the right precautions. With the right protections in place, AI can significantly improve patient experiences and outcomes.
Keeping Patients at the Center
The panel highlighted the critical need for accurate, complete, and inclusive data to inform AI systems. There are still major gaps: patient perspectives are often underrepresented, excluded, or not yet incorporated into available datasets. Without these insights, it is difficult to understand the full patient experience and the patient’s needs, both of which are essential for appropriate care.
AI should enhance, not replace, the human element in health care. It remains important to include patients in the design and implementation process to ensure solutions are both supportive and practical. This reflects a broader shift in power toward patients, who are becoming more engaged and empowered.
Overall, AI can be a tool to help identify and address gaps in care while ensuring patients remain a priority throughout the process. AI models are only as effective as the data that informs them, which is why inclusive datasets and attention to real-world applicability are essential.
Key Takeaways
Using AI in health care safely and responsibly depends on maintaining a patient-centered focus with robust oversight to preserve trust and meaningful engagement. Responsible AI use requires clear standards for privacy and accountability. With the right protections in place, AI can significantly improve patient experiences and outcomes, proving not only life-changing but lifesaving. The panel urged approaching AI with intentionality, balancing its promise with its inherent risks, and offered these takeaways:
- Establish guardrails and boundaries to utilize AI responsibly.
- Collaborate to ensure AI-driven changes support patients while preserving clinical decision-making.
- Share and develop models that encourage collective data input.
- Keep patients at the table to ensure accountability and transparency.
PQA Convenes: Artificial Intelligence in Medication Use Quality was made possible by the generous support of Arine, Merck, Pfizer and PQS by Innovaccer. PQA does not endorse, recommend or favor any organization, or its products or services. PQA general funds also supported this event.
