Explainable Artificial Intelligence and Pharmacovigilance

May 6, 2023 by Jose Rossello

When we first read the intriguing phrase ‘explainable artificial intelligence’, several thoughts came to mind. Artificial intelligence (AI) models and their results are notoriously difficult to interpret, and even more difficult to explain to others; interpretability and transparency are often used as synonyms for explainability. Any effort to make models and their results more explainable is therefore very welcome. But what is explainable AI about? Is it a set of guidelines that helps anyone navigate AI models and their results more easily, or is it something more profound, a new concept? Let’s explore the use of explainable artificial intelligence for pharmacovigilance and patient safety.

What is explainable artificial intelligence?

According to Google Cloud, explainable artificial intelligence (XAI) is a set of tools and frameworks for understanding and interpreting the predictions made by machine learning models.

XAI is widely acknowledged as a crucial feature for the deployment of AI models [1]. Explainability can facilitate the understanding of various aspects of a model, leading to insights that can be used by different stakeholders, such as data scientists, business owners, model risk analysts, regulators, and consumers [2].

In traditional machine learning, complex models are built using large amounts of data and mathematical algorithms, making it difficult for humans to understand how the model arrived at its conclusions. XAI aims to make these models more transparent and interpretable, allowing humans to understand the decision-making process and to have confidence in the results.
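To make this concrete, here is a minimal, hypothetical sketch (synthetic data, invented feature names, no particular vendor's XAI tooling) showing how a model-agnostic technique such as permutation importance can reveal which inputs a black-box model actually relies on:

```python
# Hypothetical sketch: train an opaque model on synthetic data, then use
# model-agnostic permutation importance to see which features drive it.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the test score degrades:
# large drops mark the features the model depends on most.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```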

The importance of XAI is growing as AI is being integrated into more and more aspects of our lives, including healthcare, finance, and criminal justice. XAI can help ensure that these systems are fair, unbiased, and transparent, and can help build trust between humans and AI.

Explainable Artificial Intelligence in Healthcare and Medicine

Explainability constitutes a major medical AI challenge. Omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individuals and public health.

There are several perspectives on the explainability of artificial intelligence in healthcare: the technological, the legal, the medical, the patient, and the ethical perspective [3].

Explainable Artificial Intelligence has significant potential to improve healthcare and medicine by helping clinicians and researchers better understand how AI systems make predictions or recommendations, which is crucial for ensuring their safety and effectiveness.

One area where XAI can be particularly useful is in medical diagnosis. AI systems can be trained on large amounts of medical data to make accurate predictions, but these predictions need to be explainable so that clinicians can understand why the system is making a particular diagnosis. This can help clinicians make more informed decisions and reduce the risk of errors or misdiagnoses.
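As a hypothetical sketch of such a local explanation (the clinical feature names below are invented and the data are synthetic), a simple linear diagnostic model can break a single patient's prediction down into per-feature contributions:

```python
# Hypothetical sketch: explain why a simple diagnostic model flags one patient.
# For a linear model, the contribution of each feature to the decision score
# is its coefficient times the (standardized) feature value.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["age", "systolic_bp", "hba1c", "bmi", "creatinine"]  # invented
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=1)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

patient = scaler.transform(X[:1])            # one (synthetic) patient
contributions = model.coef_[0] * patient[0]  # per-feature contribution to the score
prob = model.predict_proba(patient)[0, 1]

print(f"Predicted probability of the diagnosis: {prob:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
```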

In addition, XAI can be used to help identify biases in medical data and prevent them from influencing the predictions of AI systems. For example, if an AI system is trained on medical data that is biased against certain patient groups, it may make inaccurate or unfair predictions that could negatively impact those patients. Explanations make it easier to spot when such a bias, rather than a clinically meaningful signal, is driving the model's output.
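A simple way to surface that kind of problem, sketched below with an entirely synthetic patient-group attribute, is to audit the model's performance separately for each subgroup:

```python
# Hypothetical sketch: compare a model's sensitivity (recall) across patient
# subgroups; large gaps can point to bias inherited from the training data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=2)
group = np.random.default_rng(2).integers(0, 2, size=len(y))  # two synthetic patient groups

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=2)
model = GradientBoostingClassifier(random_state=2).fit(X_tr, y_tr)
pred = model.predict(X_te)

for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: recall = {recall_score(y_te[mask], pred[mask]):.2f}")
```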

XAI can also be used to improve the transparency of clinical trials by helping researchers better understand the factors that contribute to treatment outcomes. This can help identify new treatments or interventions that are more effective, as well as identify potential side effects or risks associated with these treatments.

Overall, XAI has the potential to significantly improve the accuracy and safety of medical diagnoses and treatments, as well as increase the transparency and fairness of healthcare systems.

Pharmacovigilance and Explainable Artificial Intelligence

XAI can be used in pharmacovigilance by analyzing large amounts of medical data to identify potential adverse drug reactions (ADRs) and other adverse events. This can be done using machine learning algorithms that are trained on large datasets of patient data, including electronic health records, social media posts, and other sources [4]. XAI can help make these algorithms more transparent and interpretable, allowing researchers and clinicians to understand how an algorithm is making predictions and to identify potential biases or errors.
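One hypothetical way to keep such an algorithm transparent (the feature names below are invented and the data are synthetic) is to favor interpretable-by-design models whose decision rules a safety reviewer can read directly:

```python
# Hypothetical sketch: a shallow decision tree for flagging reports likely to
# describe an adverse drug reaction; its rules are directly human-readable.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["dose_mg", "days_on_drug", "age", "n_concomitant_drugs"]  # invented
X, y = make_classification(n_samples=800, n_features=4, n_informative=3,
                           n_redundant=1, random_state=3)

# Limiting depth trades some accuracy for rules a reviewer can inspect.
tree = DecisionTreeClassifier(max_depth=3, random_state=3).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```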

In addition, XAI can be used to identify patterns and trends in ADRs that may not be immediately apparent to humans. For example, XAI can be used to analyze patterns in patient data that may indicate a particular drug is causing unanticipated adverse events, or to identify patient groups that are particularly susceptible to certain ADRs. XAI can also be combined with other approaches, such as knowledge graphs, to help identify biomolecular features that may distinguish or support a causal relationship between an ADR and a particular compound [5].
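The knowledge-graph idea can be sketched very simply: represent drugs, biological entities, and adverse events as nodes, and treat paths between a drug and an ADR as candidate mechanistic explanations. The triples below are invented for illustration only:

```python
# Hypothetical sketch: a tiny drug-safety knowledge graph where each path from
# a drug to an adverse event is a candidate mechanistic explanation.
import networkx as nx

G = nx.DiGraph()
triples = [
    ("drug_X", "inhibits", "enzyme_A"),
    ("enzyme_A", "regulates", "pathway_B"),
    ("pathway_B", "associated_with", "liver_injury"),
    ("drug_X", "binds", "receptor_C"),
]
for head, relation, tail in triples:
    G.add_edge(head, tail, relation=relation)

for path in nx.all_simple_paths(G, "drug_X", "liver_injury"):
    steps = [f"{u} -[{G[u][v]['relation']}]-> {v}" for u, v in zip(path, path[1:])]
    print("; ".join(steps))
```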

Explainable artificial intelligence may improve the accuracy and effectiveness of pharmacovigilance by helping researchers and clinicians better understand the data and algorithms used in the process. This can help identify potential safety issues more quickly and accurately, leading to improved patient outcomes and better drug safety.

Citations

  1. Barredo Arrieta A, Díaz-Rodríguez N, Del Ser J, et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion. Published online June 2020:82-115. doi:10.1016/j.inffus.2019.12.012
  2. Belle V, Papantonis I. Principles and Practice of Explainable Machine Learning. Front Big Data. Published online July 1, 2021. doi:10.3389/fdata.2021.688969
  3. Amann J, Blasimme A, Vayena E, Frey D, Madai VI. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. Published online November 30, 2020. doi:10.1186/s12911-020-01332-6
  4. Ward I, Wang L, Lu J, Bennamoun M, Dwivedi G, Sanfilippo F. Explainable artificial intelligence for pharmacovigilance: What features are important when predicting adverse outcomes? Comput Methods Programs Biomed. 2021;212:106415. doi:10.1016/j.cmpb.2021.106415
  5. Bresso E, Monnin P, Bousquet C, et al. Investigating ADR mechanisms with Explainable AI: a feasibility study with knowledge graph mining. BMC Med Inform Decis Mak. 2021;21(1):171. doi:10.1186/s12911-021-01518-6

Filed Under: Artificial Intelligence Tagged With: Explainable artificial intelligence, XAI
