Malami Muhammad Ladan

Lancaster University Ghana | Degree: Computer Science
The impact of trust in adopting artificial intelligence in healthcare

Introduction

Artificial Intelligence (AI) is becoming increasingly popular in healthcare due to its potential to improve patient outcomes and optimize clinical workflows. AI systems can assist in tasks such as diagnosing diseases, predicting outcomes, and improving treatment plans, which can ultimately lead to better patient care. However, the successful adoption of AI in healthcare depends on the level of trust users place in these systems. Without trust, healthcare professionals and patients may be hesitant to adopt and use AI technologies, which could limit their potential benefits.

Purpose of the study

The purpose of this study is to explore the impact of trust on the adoption of artificial intelligence (AI) in healthcare. The paper aims to review existing literature to identify the factors that contribute to trust in AI systems, and how trust influences user acceptance, adoption, and utilization of AI technologies.

Factors that influence users' trust in healthcare AI

Trust in AI is a significant but difficult problem. Establishing trust in AI is not an easy task because trust is highly situational and is shaped by factors such as beliefs, culture, and context. Factors that play key roles in establishing trust in healthcare AI include:

  1. Reliability of the AI system: healthcare is a high-stakes sector in which a small mistake can cause great harm. Unreliable performance from an AI system may cause people not to trust it.
  2. Accuracy of the AI system: inaccurate results in the healthcare sector may lead to bias, inequality, or serious harm. People need to know that the results given by an AI system are accurate before they can entrust their healthcare to it.
  3. Attributes of the user: many studies show that user attributes also influence trust in AI. An educated person is more likely to trust an AI system than an uneducated one; similarly, a young person is more likely to trust AI than an older one.
  4. AI's black-box nature: scholars often refer to AI systems as black boxes because, while their inputs and outputs are visible, the process by which the output is produced remains opaque to users. This lack of transparency is another factor that influences user trust in healthcare AI.
  5. Privacy and Security: Users need to be confident that their data is being protected and that the AI system is using it ethically.
  6. Ethical Considerations: Users need to trust that the AI system is being used ethically and that it is not causing harm to patients or unfairly biasing decisions.
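The reliability and accuracy concerns above can be made concrete. In healthcare, positive cases are often rare, so a headline accuracy figure can hide missed diagnoses; reliability audits therefore report sensitivity (missed cases) and specificity (false alarms) separately. A minimal Python sketch, using purely illustrative data:

```python
# Overall accuracy can look high while a model still misses most sick
# patients. Sensitivity and specificity expose this gap, which is why
# reliability audits in healthcare report them separately.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # missed cases
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# 10 patients, 2 truly positive; the model catches one case and raises one false alarm.
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"accuracy={accuracy:.2f}")          # -> accuracy=0.80
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")  # -> sensitivity=0.50 specificity=0.88
```

Here overall accuracy is 0.80 even though the model misses half of the truly positive patients; reporting only the headline number would invite exactly the kind of misplaced trust described above.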

Strategies that can help build user trust

  1. Transparency: AI systems should be transparent about their data sources, decision-making process, and algorithms. Users should be able to access this information easily and understand it.
  2. Explainability: AI systems should be able to explain how they arrived at their decisions in a way that users can understand. This is especially important in healthcare, where decisions can have life-or-death consequences.
  3. Reliability: AI systems should be reliable and accurate, with their performance and accuracy validated and monitored regularly. Any errors or biases should be identified and addressed promptly.
  4. User Experience: AI systems should be designed with a user-centric approach, providing a positive and intuitive experience for users.
  5. Education and Training: Users need to be educated and trained in the use of AI systems in healthcare. This will increase their understanding of the capabilities and limitations of these systems, and ultimately, build trust.
  6. Collaboration: Collaboration between healthcare professionals and AI systems can help build trust. By working together, healthcare professionals can better understand how AI systems work, and how they can be used to improve patient outcomes.
  7. Ethical Considerations: AI systems should be developed and used ethically, with clear guidelines and regulations in place to ensure that they do not cause harm or unfairly bias decisions.
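As an illustration of the transparency and explainability strategies above, one simple pattern is to return per-feature contributions alongside each prediction, so a clinician can see why a patient was flagged rather than facing a black box. The following sketch uses a hypothetical linear risk score; the feature names and weights are illustrative assumptions, not a validated clinical model:

```python
import math

# Hypothetical linear risk score that explains itself: alongside the final
# probability it returns each feature's contribution to the score, so the
# reasoning is visible to the user. Weights are illustrative only.

WEIGHTS = {"age": 0.03, "bmi": 0.05, "systolic_bp": 0.02}
BIAS = -6.0

def predict_with_explanation(patient):
    # Contribution of each feature = weight * value; their sum (plus bias) is the score.
    contributions = {name: w * patient[name] for name, w in WEIGHTS.items()}
    score = BIAS + sum(contributions.values())
    risk = 1 / (1 + math.exp(-score))  # logistic link: score -> probability
    return risk, contributions

risk, why = predict_with_explanation({"age": 70, "bmi": 31, "systolic_bp": 150})
print(f"risk = {risk:.2f}")
for feature, contrib in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {contrib:+.2f}")
```

Even this simple decomposition shows the user that, for this hypothetical patient, blood pressure contributes most to the flagged risk. Real systems use richer explanation methods, but the principle of exposing the "why" behind a prediction is the same.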

Conclusion

Building trust in AI systems in healthcare requires a comprehensive and collaborative approach that prioritizes transparency, explainability, reliability, user experience, education and training, and ethical considerations. By focusing on these factors, healthcare organizations and policymakers can build trust in AI systems and ensure their successful integration into healthcare systems.



Abstract

The adoption of artificial intelligence (AI) in healthcare has the potential to transform patient outcomes and optimize clinical workflows. However, trust in AI systems is a critical factor in ensuring their successful integration into healthcare systems. This paper reviews the literature on the impact of trust in adopting AI in healthcare, exploring how trust influences user acceptance, adoption, and utilization of AI technologies. Additionally, the paper discusses the factors that contribute to trust in AI systems, including system transparency, explainability, and reliability, as well as the potential ethical concerns associated with AI adoption. The findings of this paper suggest that building trust in AI systems is essential for their successful integration into healthcare, and that policymakers and healthcare organizations need to prioritize efforts to foster trust in AI among healthcare professionals and patients alike.
