Oct 12, 2025

Responsible AI in Healthcare: What We Should All Know


Artificial intelligence continues to shape the way we work, connect, and make decisions. From personalized shopping recommendations to chatbots to content creation, AI’s impact is everywhere, and it’s growing rapidly.

But with great power comes great responsibility, as they say. And as these technologies become more powerful and more embedded, the question is no longer “what can AI do?” but “what should AI do?”

Much has already been written about the importance of responsible AI, with everyone from the United Nations to Google to academic journals weighing in. And there’s a pretty clear consensus on what is meant by the term “responsible AI.” It means using AI ethically, transparently, and with accountability—aligned with human values. It focuses on harnessing the technology in ways that are fair and inclusive, so that innovation doesn’t come at the cost of trust, safety, or social good. 

At Revisto, we work at the intersection of AI and life sciences, where the stakes are high. Our AI technology is used to help life sciences marketing teams and the medical, regulatory, and legal (MLR) reviewers they count on ensure that the promotional materials seen by patients and providers are accurate, balanced, and compliant. Simply put, patient safety is on the line.

Consider this example: An AI tool suggests a drug efficacy claim based on a retracted study. Without transparency and due diligence, MLR reviewers may miss it, allowing misinformation to spread, exposing the company to reputational and financial damage, and, worst of all, jeopardizing patient safety.

Let’s dig into responsible AI in life sciences, and more specifically in the MLR, or promotional review, process. Consider these best practices.

  1. Citations and context always matter 

Just like humans presenting research, AI tools should always cite the sources—studies, guidelines, or past decisions—behind every recommendation. This is crucial for Regulatory and Legal teams who need to verify compliance. In addition, AI should explain why it made a suggestion. As an added benefit, this also speeds up reviews by reducing manual double-checking and builds confidence in AI outputs.

  2. General AI isn’t enough—domain expertise matters

Life sciences companies should take care to avoid using general AI, opting instead to use specialized AI tools that have the depth and breadth of understanding required to fully address regulatory risk, competitive positioning, and patient safety implications. 

Just how risky is using general AI in the highly regulated, high-stakes world of pharma marketing? A general AI model could hallucinate the most appealing claim, even one not fully supported by evidence, because it has seen other materials that make similar claims, even though those materials may relate to a different drug or a slightly different indication. At best, this hallucination leads to a lot of manual rework by the reviewers who catch the error. At worst, the error goes unnoticed and both the pharma company’s reputation and patient safety are jeopardized.

  3. Governance documents and practices are crucial

As AI becomes omnipresent and the technology continues to evolve, it’s crucial for organizations to establish governance policies. Who approves AI outputs? How often is the AI’s performance checked? Another important part of an organization’s governance is keeping audit trails: a log that documents decisions and sources, which becomes invaluable during an audit or inquiry.

  4. Humans should always retain control

While AI tools can improve efficiency and reduce human error, they cannot replace the nuanced judgment, contextual understanding, and ethical considerations that experienced reviewers bring to the table. Therefore, humans should always remain the final decision makers, ensuring that AI functions as a powerful tool in the toolkit, not a “magic bullet” that replaces the expertise of skilled reviewers. (For how these practices might fit together in practice, see the sketch below.)
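
To make these practices more concrete, here is a minimal, hypothetical sketch (in Python) of how a citation-carrying recommendation, an audit trail, and a human approval gate could fit together. The names Recommendation, AuditEntry, and review are illustrative assumptions for this post, not Revisto’s product or APIs.

```python
# Illustrative only: a rough sketch of the practices above, under assumed names.
# Nothing here reflects Revisto's actual implementation.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class Recommendation:
    """An AI suggestion that always carries its evidence and reasoning."""
    claim: str
    sources: list[str]   # e.g., study identifiers or guideline references
    rationale: str       # why the AI made this suggestion


@dataclass
class AuditEntry:
    """One record in the audit trail: who decided what, when, and on what basis."""
    timestamp: str
    reviewer: str
    decision: str        # "approved" or "rejected"
    recommendation: dict


def review(rec: Recommendation, reviewer: str, approved: bool, trail: list) -> bool:
    """The human reviewer remains the final decision maker; the AI output is only a suggestion."""
    trail.append(AuditEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        reviewer=reviewer,
        decision="approved" if approved else "rejected",
        recommendation=asdict(rec),
    ))
    return approved


if __name__ == "__main__":
    trail = []
    rec = Recommendation(
        claim="Drug X reduced symptom scores vs. placebo.",  # hypothetical claim
        sources=["NCT00000000 (hypothetical trial ID)"],
        rationale="Claim matches the primary endpoint reported in the cited study.",
    )
    # The reviewer inspects the claim, its sources, and the rationale before deciding.
    review(rec, reviewer="mlr.reviewer@example.com", approved=False, trail=trail)
    print(json.dumps([asdict(e) for e in trail], indent=2))
```

The point of the design is simple: the AI output never stands on its own. Every suggestion carries its sources and rationale, every decision is logged, and nothing goes out without a human reviewer’s sign-off.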

Trust is paramount

"In healthcare, trust is essential,” says Ferry Tamtoro, CEO and Co-Founder of Revisto. “Patients trust that the information they receive about their treatments is accurate. Healthcare providers trust that promotional materials meet the highest standards of compliance and balance. At Revisto, we never forget that AI might seem like magic behind the scenes, but there's nothing magical about our responsibility. We're accountable for building tools that protect that trust, every single time."

Learn more about how Revisto is helping life sciences harness the power of AI responsibly at revisto.com.

Improve time to market. Simplify compliance. Save time.

Optimize your MLR workflow with Revisto