ChatGPT at the point of care
The use of artificial intelligence (AI) is increasingly being explored in the medical field to improve clinical practice. One such AI tool is ChatGPT, a conversational interface based on Generative Pre-trained Transformer (GPT) technology. This large language model is trained on a vast corpus of internet text and can generate, summarise, and synthesise text, among many other tasks well beyond conventional expectations. While the public has shown great interest in ChatGPT, clinicians may not be aware of its practical applications in routine medical practice.
Recently, researchers published an essay in the New England Journal of Medicine highlighting some specific ways in which ChatGPT, and in particular the latest iteration, GPT-4, could be used. In one example, GPT-4 listens to a recorded clinical encounter and generates a clinical note that captures the medical facts and psychodynamics present in the interaction. This technology has the potential to ease the burden of clinical documentation in electronic medical records, potentially reducing the need for scribes in many cases.
Furthermore, it could generate a summary of a patient's medical history before a visit, streamlining the pre-visit chart review process. Patients could also use a chat-based service to query their records and receive understandable information and feedback.
Another potential use of GPT-4 highlighted in the essay is 'curbside consultation'. In this scenario, a clinician asks GPT-4 what to look for when evaluating a dyspnoeic patient with chronic obstructive pulmonary disease (COPD) for an exacerbation. GPT-4 generates a concise discussion of diagnosis and management. While the advice given by GPT-4 may not always be superior to existing online sources of medical information, the authors suggest that exploring longer conversations between clinicians and GPT-4 about complex patients could be beneficial.
Although artificial intelligence, including GPT, offers many benefits for medical practice, there are still challenges that need to be addressed. One significant challenge is that large language models like GPT do not have an innate understanding of text. Instead, they synthesise text that appears intelligible but may "fill in the blanks" with information that is not necessarily grounded in truth, a tendency often described as hallucination.
This presents a potential risk, as GPT may reorganise and distil electronic health records in ways that misrepresent clinical reality. Furthermore, many electronic health record systems are poorly designed and contain clinically irrelevant, repetitive, disorganised, or outdated information. This issue highlights the importance of ensuring that GPT is used in a way that complements clinical expertise rather than replaces it.
The legal and ethical considerations surrounding the use of GPT in healthcare are also complex. The authors of a recent essay published in the Journal of the American Medical Association (JAMA) distinguish between using GPT to augment clinicians' judgement and using it to replace that judgement altogether. The former can provide clinicians with additional information and insights, while the latter poses risks to patient safety and autonomy. Additionally, the direct-to-consumer use of GPT for medical advice raises concerns about the quality of care provided and the impact on the patient-clinician relationship.
As healthcare moves towards a fully electronic health record system, the entire patient history will be accessible to artificial intelligence tools. This means that models that synthesise digital data and generate text could potentially alter any clinical encounter. Therefore, it is crucial to understand the potential benefits and drawbacks of these technologies and use them in a responsible and ethical way.
In conclusion, while GPT offers many potential benefits for healthcare, significant challenges remain to be addressed. A thorough understanding of these issues is critical to using GPT in healthcare responsibly and effectively.