
1 in 5 doctors in the UK are using AI tools like ChatGPT to help with daily practice tasks

A recent survey has revealed that many General Practitioners (GPs) in the UK are integrating artificial intelligence (AI) tools, such as ChatGPT, into their daily practice.

According to the findings published in BMJ Health and Care Informatics, one in five doctors reported using these AI tools to assist with various tasks, including writing patient letters after appointments.

The survey involved 1,006 GPs, who were asked about their experiences with AI chatbots, including ChatGPT, Bing AI, and Google’s Gemini.

Practical applications in clinical practice
The survey results highlighted several ways GPs leverage AI tools in their clinical work. Nearly a third of the respondents who used AI tools stated that they used them to generate documentation following patient consultations.

Additionally, 28% reported using these tools to explore alternative diagnoses, while a quarter used AI to suggest potential treatment options. These AI systems generate written responses to doctors’ queries, streamlining certain aspects of their practice.

The researchers behind the survey noted that AI tools appear to provide value, particularly in handling administrative tasks and supporting clinical decision-making. This suggests that while the integration of AI in healthcare is still in its early stages, it is already making a noticeable impact on how doctors manage their workloads.

Accuracy and patient privacy
Despite the benefits, the growing use of AI in clinical settings has raised concerns, particularly regarding accuracy and patient confidentiality. The researchers questioned how the internet companies behind generative AI tools might handle the data entered into them, and the risks this could pose.

They emphasized the importance of ensuring that these technologies comply with relevant regulations and do not compromise patient privacy.

Medico-legal advisers have echoed these concerns, pointing out that AI-generated responses, while often articulate, can sometimes contain inaccuracies or reference incorrect guidelines. This issue could become particularly problematic when AI drafts responses to patient complaints.

The importance of using AI ethically and in compliance with data protection laws has been stressed, with calls for greater awareness among current and future doctors about both the benefits and risks of AI in healthcare.

As the use of AI in clinical practice continues to evolve, it remains crucial for healthcare professionals to navigate these tools carefully, balancing the efficiency they offer with the responsibility to maintain high patient care and confidentiality standards.
