NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Implications of ChatGPT on Radiology Workflow

CADTH Health Technology Review
Ottawa (ON): Canadian Agency for Drugs and Technologies in Health.
Report No.: CM0009

Key Messages

  • The demand for medical imaging in Canada has risen due to several factors, including an aging population, increased patient volumes, advancements in procedures and treatments, and shifts in the standard of care. Notably, there has been a substantial increase in CT and MRI examinations.
  • Radiologists in Canada often have large patient volumes, work extended overtime, and manage prolonged wait-lists, all of which contribute to burnout. This burnout has an impact on productivity and staff turnover and may jeopardize patient safety. Burnout may also lead to radiologists reducing their work hours, seeking new employers, or leaving clinical practice.
  • ChatGPT could play a role in supporting radiologists in a variety of ways, including generating radiology reports, providing structured report templates, assisting with the clinical history sections of radiology reports, and facilitating patient communication. It may also aid clinical decision support by assisting with final diagnoses and cancer screening decisions and by helping optimize clinical decision support alert logic.
  • ChatGPT’s limitations in radiology workflow include dependence on training data, potential inaccuracies in responses, ethical concerns about patient data privacy, and difficulties in handling complex radiology tasks.
  • While ChatGPT holds promise in enhancing radiology workflow and patient care, careful consideration is needed for its limitations and potential risks. Responsible implementation and ongoing research and development are vital to leveraging its benefits while upholding patient safety and ethical standards.

Context

Medical imaging plays a crucial role in diagnosing, staging, and monitoring various medical conditions. The rising demand for medical imaging in Canada results from multiple factors, including an aging population, increased patient volumes, advancements in procedures and treatments, and shifts in standard of care.1-3 For example, the demand for CT and MRI examinations in Canada has risen by 31% and 62%, respectively, between 2010 and 2020.4

Radiologists in Canada frequently face high workloads, extended overtime, large patient volumes, and lengthy wait-lists.4-6 The situation has worsened since the COVID-19 pandemic, requiring radiology departments to handle urgent cases while maintaining regular patient care and managing a growing backlog in the postpandemic era.4,7

While burnout is a common issue affecting health care workers in Canada, diagnostic radiologists have reported higher rates of burnout compared to other types of physicians.4,5,8,9 Some factors associated with burnout are heavy workload, productivity demands, workflow disruptions, lack of work-life balance, and complexities in the form of technological advancements.4,5,8,9 Dissatisfaction at work may lead radiologists to reduce working hours, change employers, or even leave clinical practice.4,10

Artificial intelligence (AI) has demonstrated benefits and a growing presence in the field of radiology over the years, despite various limitations.11 One example, introduced in November 2022 by OpenAI, is ChatGPT, a specific iteration of the Generative Pre-Trained Transformer, which is a language generation model.11-13 ChatGPT has been fine tuned to process and generate humanlike conversational language.14

ChatGPT has been trained on a vast amount of data from various internet sources, including websites, books, and forums.15 The training data comprise more than 45 terabytes of text,16 which enables it to generate various outputs, including simulated scientific content, code, plays, stories, and poems.13,17 In contrast to conventional language models that depend on statistical methods to predict subsequent words in sentences, ChatGPT employs transformer-based models, which can process large volumes of data in parallel.11-13 While ChatGPT is freely available,18 it lacks the capability to conduct real-time searches of the internet or external databases and is susceptible to generating false information.19

ChatGPT may serve diverse roles in supporting radiology workflow, including expediting radiology report generation, facilitating diagnostic decision-making through clinical decision support tools, and aiding in other writing-intensive tasks.11-13 ChatGPT is intended as an AI tool to assist trained radiologists, rather than replace them.20,21

Objective

The main objectives of this report are to:

  • summarize some potential applications of ChatGPT for improving radiology workflow efficiency
  • identify strengths and limitations of ChatGPT in several areas of potential workflow applications.

About This Document

This report summarizes information identified through a literature search using various databases and online resources, including MEDLINE, Embase, Scopus, the Cochrane Database of Systematic Reviews, and the International HTA Database. This report may not provide an entirely comprehensive review of the role of ChatGPT in enhancing the radiology workflow because the literature search used to inform the findings was limited to English-language documents published between January 1, 2020, and June 7, 2023.

Results

Integration of ChatGPT Into Radiology Workflow

Specific workflow areas where ChatGPT has been described in the literature as supporting radiologists are discussed here, with a focus on radiology report writing and clinical decision support tools.

Supporting Radiology Report Writing

Drafting Radiology Reports

ChatGPT may assist in drafting an entire radiology report with only a few diagnosis keywords and some patient clinical information provided as input.20 It may also expedite the summarization of findings into a conclusion.22 The draft can be reviewed and edited by the radiologist for accuracy before finalization.20

In a study evaluating its performance in generating radiology reports, ChatGPT received an overall high appraisal in a scorecard-based assessment; however, it exhibited limitations in accurately handling specific technical and medical terms, leading to misinterpretations and contextual inaccuracies. For instance, it misinterpreted “dorsovolar” as “dorsoplantar.”23

Using ChatGPT for drafting reports could potentially decrease turnaround times, ease workloads, and enhance efficiency by saving the time usually dedicated to dictating or typing reports. However, reports produced by ChatGPT without thorough expert review could result in misdiagnosis, eventually leading to patient harm.24

Providing a Structured Template

Radiologists often use a structured template to ensure consistency and enhance radiological workflow and communication.25,26 A potential application of ChatGPT may be converting unstructured free-text radiology reports into structured reports, thereby reducing workload for radiologists.26

Some researchers have explored structured reporting for radiology reports using various language models. In a study published in May 2023, despite being instructed to produce a tabular format, ChatGPT presented a template with headings and bullets, as the version at the time did not support tabular output.26 However, when the identical question was posed in August 2023, ChatGPT delivered a report in tabular format with content similar to the May 2023 output (Figure 1). This highlights the rapid evolution of ChatGPT.

Assisting With the Clinical History Section

Radiologists often review the reason for an imaging exam, prior imaging reports, and emergency medical records to create and report a patient’s clinical history. ChatGPT can summarize these data and generate a concise clinical history, potentially improving efficiency if integrated properly. Custom integration with other computer applications is possible by modifying existing algorithms and incorporating preexisting medical knowledge, such as electronic medical records, or fine-tuning the model for specific datasets.15,22,27
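As an illustration of the integration described above, the record excerpts could be combined into a single summarization prompt before being passed to a language model. This is a minimal sketch: the field names (`exam_reason`, `prior_reports`, `ed_notes`) and the prompt wording are hypothetical assumptions, not drawn from any specific EMR system or from the cited studies.

```python
def build_history_prompt(exam_reason, prior_reports, ed_notes):
    """Combine EMR excerpts into one clinical-history summarization request.

    All field names here are illustrative; a real integration would map
    them to the local EMR's data model and apply privacy safeguards.
    """
    sections = ["Reason for exam:\n" + exam_reason]
    sections += [f"Prior report {i + 1}:\n{r}" for i, r in enumerate(prior_reports)]
    sections.append("Emergency department notes:\n" + ed_notes)
    return (
        "Summarize the following records into a concise clinical history "
        "(2 to 3 sentences) for the history section of a radiology report.\n\n"
        + "\n\n".join(sections)
    )
```

The assembled prompt, rather than the raw records, would then be submitted to the model, keeping the request self-contained and reviewable.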

Improving Patient Communication

To support communication with patients, ChatGPT can deliver clear and concise information about radiological procedures and the purpose, risks, benefits, and expectations in lay language.20 For instance, a radiology report could be entered into the system with appropriate prompts for ChatGPT to provide questions that a patient could ask regarding the results of the report.22 ChatGPT may also be used to support the simplification of radiology reports, which often contain complex medical terminology, clinical facts, and uncertain statements that could be challenging for patients to understand.28,29

Figure 1

Example of a Structured Report for Total-Body CT Examination (Tabular Format).

[Image: a structured report template with the headings patient information, clinical history, imaging technique, findings, impressions, and recommendations]

Note: ChatGPT’s response to the prompt from Mallio et al., “Please provide me with an example of a structured report of a total-body CT examination; include as much detail as possible. The format must be tabular,” in August 2023 (Canada).

An exploratory case study involving 15 radiologists assessed 45 simplified radiology reports generated by ChatGPT from 3 original reports. To address ChatGPT's text variability and ensure comprehensive coverage of its generative ability, the researchers asked ChatGPT to develop a simplified report for each of the 3 original reports 15 times, resulting in 45 distinct simplified reports. The study highlighted the completeness of the reports and ChatGPT’s ability to identify important aspects of complex medical content. Most radiologists found the simplified reports to be factually correct, complete, and safe for patients. However, some instances of simplification that could potentially harm patients were noted,30 for example:

  • Misinterpretation of medical terms: In a simplified report, lymph nodes were erroneously conveyed as “might have cancer,” while the original report indicated “no evidence of recurrence or new lymph node metastases.”30
  • Imprecise language: A simplified report concluded “no evidence of the cancer spreading to other parts of the body,” disregarding the original report’s indication of the presence of pulmonary metastases.30
  • Missed findings: The conclusion of a simplified report omitted information about a lesion's growth, which was present in the original report.30

In another study examining 254 radiology reports, ChatGPT achieved better results when provided with additional context, such as specifying the user as a patient or requesting simplification at a seventh-grade reading level.29 When evaluating 62 low-dose lung CT cancer screening scans and 76 brain MRI metastases screening scans, ChatGPT successfully translated the reports into plain language with an average score of 4.27 in a 5-point system. Although some oversimplification and missing information were observed, this may be improved with more detailed prompts, for example:25

Your task is to translate a radiology report into plain language that is easy for the average person to understand. Your response should provide a clear and concise summary of the key findings in the report, using simple language that avoids medical jargon. Please note that your translation should accurately convey the information contained in the original report while making it accessible and understandable to a layperson. You may use analogies or examples to help explain complex concepts, but you should avoid oversimplifying or leaving out important details.

ChatGPT's Application Programming Interface (API) is a set of tools and protocols that allows developers to interact with and integrate ChatGPT into their own applications, software, or platforms. By integrating ChatGPT’s API into any custom software (e.g., PACS), radiologists could automatically generate a simplified report for a patient alongside the original report.31
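As a sketch of what such an integration could look like, the function below assembles a Chat Completions request that pairs the plain-language prompt quoted earlier (abridged) with a report's text. The endpoint URL and payload shape follow OpenAI's public Chat Completions API, but the model name, temperature, and surrounding plumbing are illustrative assumptions; a production PACS integration would also need authentication management, error handling, and patient-privacy safeguards.

```python
import json
import os
import urllib.request

# Abridged version of the prompt quoted in the report above. Model name and
# temperature below are illustrative choices, not recommendations.
SIMPLIFY_PROMPT = (
    "Your task is to translate a radiology report into plain language that is "
    "easy for the average person to understand. Provide a clear and concise "
    "summary of the key findings, using simple language that avoids medical "
    "jargon, without oversimplifying or leaving out important details."
)

def build_payload(report_text, model="gpt-3.5-turbo"):
    """Assemble the JSON body for a report-simplification request."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SIMPLIFY_PROMPT},
            {"role": "user", "content": report_text},
        ],
        "temperature": 0.2,  # lower temperature reduces run-to-run variability
    }

def simplify_report(report_text):
    """Call the Chat Completions API; requires OPENAI_API_KEY to be set."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_payload(report_text)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

A PACS plug-in could call a function like `simplify_report` when a report is finalized and store the returned text alongside the original for patient release, subject to radiologist review.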

Aiding Clinical Decision Support

Assisting Final Diagnosis

A team of researchers conducted a study to evaluate ChatGPT's accuracy in differential diagnosis, diagnostic testing, final diagnosis, and clinical management, considering patient age, sex, and case complexity. The researchers fed 36 clinical vignettes from the Merck Sharp & Dohme (MSD) Clinical Manual into ChatGPT, presented prompts to ChatGPT for each phase of the clinical workflow, and awarded points for accurate responses that were consistent with the MSD Clinical Manual answers.32 ChatGPT achieved a 71.7% overall accuracy rate across all 36 cases.32 Accuracy dipped to 60.3% for differential diagnoses compared with 76.9% for formulating final diagnoses, reflecting a substantial improvement when enhanced contextual details (such as the patient's history of illness, physical examination, and pertinent clinical data) were available.32 The researchers noted instances where ChatGPT withheld diagnoses despite possessing relevant data or made dosing errors, underscoring its limitations in comprehensive reasoning. They also noted that, while sex and age were not significant predictors, the vignettes in this study depicted typical disease scenarios, and deviations from the norm might introduce varying biases.32 Hence, further research should explore supplementary demographic factors and potential sources of systematic bias.

Cancer Screening

To promote the responsible use of radiology services, the American College of Radiology (ACR) has been publishing various appropriateness criteria since 1993. Radiologists frequently take on the responsibility of interpreting these guidelines, which categorize patients into distinct demographic and risk groups.33

In an evaluation comparing ChatGPT’s response to the ACR Appropriateness Criteria, ChatGPT's responses for breast pain and breast cancer screening yielded moderate to good results. ChatGPT responded to prompts in both an open-ended (OE) format and a select all that apply (SATA) format. In the OE format, it was tasked with providing the single most appropriate imaging procedure, while in the SATA format, it was presented with a list of imaging modalities to assess. Scoring criteria were applied to determine whether the proposed imaging modalities aligned with the ACR guidelines. The study's outcomes showed that breast cancer screening achieved an average OE score of 1.83 out of 2, with a corresponding average correct rate of 88.9% for SATA. For breast pain, the average OE score was 1.125 out of 2, with an average correct rate of 58.3% for SATA.33

The study authors noted that ChatGPT’s accuracy may vary depending on the severity of the initial presentation and that it tends to recommend imaging even when unnecessary, taking a maximalist approach. The authors concluded that ChatGPT's performance is promising and that it may improve clinical workflow and reduce the overuse of imaging.33

Clinical Decision Support Optimization

Clinical decision support (CDS) provides information and recommendations to health care professionals and patients at the point of care. Clinicians may override or disregard about 90% of alerts, citing valid reasons such as irrelevancy, poor timing, or incomplete characterization of the clinical condition.34

In a study that investigated the potential of ChatGPT to improve CDS logic, researchers presented CDS logic summaries to ChatGPT for suggestions. The study engaged 5 clinicians who assessed 36 ChatGPT-generated suggestions and 29 human-generated suggestions across 7 alerts. Among the top 20 suggestions, 9 originated from ChatGPT. These suggestions were noted for their high understandability and relevance, and for displaying moderate usefulness with minimal bias, inversion, and redundancy.34

ChatGPT-generated suggestions could play a role in optimizing CDS alerts by identifying enhancements to alert logic. However, the study's scope is restricted to evaluating the quality of ChatGPT-generated suggestions for improving CDS logic, while its impact on clinical outcomes remains uncertain.34

Other Uses

Supporting Writing-Intensive Tasks and Research Articles

ChatGPT is a valuable tool for proofreading and editing tasks, including structural formatting; spell-checking; correcting grammar, punctuation, and inconsistencies; refining sentences; and tailoring content for different audiences.11,27 Additionally, it can provide concise summaries of lengthy reading materials such as papers, reports, and guidelines, freeing radiologists' time for more complex tasks.27

ChatGPT may also aid in radiology research by identifying research topics; providing background information; proposing statistical analysis methods; and creating abstracts, introductions, and conclusion sections of a manuscript, but it cannot access external data for specific journal references.11,35 While ChatGPT has been shown to produce coherent research articles, these articles have been found to be factually incorrect and to contain fictitious references, as ChatGPT lacks the capability to search external databases for accurate literature reviews.16,17,21

Providing Technical Resources and Assisting in Radiology Training

ChatGPT has the potential to:

  • provide age-based normal values for quick reference during image interpretation20
  • help train residents by teaching procedural steps and identifying radiological features in different conditions20
  • specify normal criteria or classic signs in certain pathologies, providing fairly reliable information about common diseases.36

However, ensuring the reliability of ChatGPT's reference sources is essential.20 ChatGPT has been trained solely on publicly available information, not on radiology specifically, but as future versions are trained on medical resources, its utility in radiology may grow.37

Administrative Tasks

ChatGPT has the potential to reduce the burden on radiology departments by automating routine tasks.21,23 For example, it could automate appointment scheduling, billing, claims submissions, eligibility verification, and prior authorization requests; however, these functions are currently unavailable. To support these functions, ChatGPT would need to be incorporated as a chatbot into a hospital’s appointment and billing software.15,22

Main Challenges and Considerations Related to the Applications

There are several challenges related to the implementation and use of ChatGPT to consider when making decisions about its uptake to support radiology workflow, including:

  • ChatGPT’s effectiveness is dependent on the quality of the data it was trained on, which could have inherent biases due to imbalanced training data.30,31 ChatGPT lacks domain expertise and creativity, and may not have the same level of nuance and context as a human expert in a specialized field like radiology.11 Furthermore, radiology is an ever-evolving field, while ChatGPT is trained on data with a specific timestamp, limiting its ability to keep up with current developments.30
  • ChatGPT generates text by making predictions and does not always give reliable responses, which can lead to inconsistent outputs for identical inputs.25,30,38 It is vulnerable to hallucinations (i.e., generating responses not based on factual information), which could potentially harm patients. Additionally, it may handle rare pathologies less accurately than common ones.30
  • When faced with high demands, ChatGPT may deliver error messages and occasionally may crash during response generation. It may restrict the number of requests made within a certain time and cannot handle very long texts.11
  • The use of ChatGPT in radiology raises concerns about patient data protection, necessitating appropriate safeguards and further research to ensure responsible use.31 A potential short-term solution is upgrading to GPT-4, which was launched for limited users with a paid subscription and has demonstrated better performance than ChatGPT.39,40 GPT-4 includes a browser plugin that allows internet access, but privacy concerns persist, and it involves additional costs.35,39-41
  • To improve the use of ChatGPT in radiology, fine-tuning is essential; this could include incorporating medical images, reports, and domain-specific knowledge to enhance accuracy and relevance.11,42 Collaborating with publishers, institutions, and libraries is important to ensure that ChatGPT has access to a comprehensive and reliable database of radiology-related resources to increase the likelihood of providing authentic references.42
  • Automation bias poses a concern where humans tend to favour machine decisions over human decisions, even in cases of conflict.43 This is especially problematic in CDS tools, given that their responses may occasionally deviate from expert opinions.34 Additional research could focus on developing advanced models that integrate ChatGPT with existing CDS systems. These models could leverage the extensive medical literature, clinical guidelines, and patient data to assist physicians in making precise diagnoses, formulating treatment plans, and predicting patient outcomes.44 By merging the expertise of health care professionals with the capabilities of ChatGPT, a holistic and tailored decision support system could be established.44

Limitations of This Report

This report's limitations include: data collection only up to June 2023, alongside the rapid growth of literature on ChatGPT and the potential emergence of new insights; the exclusion of papers published in other languages; challenges in comprehensively analyzing all concepts, resulting in an emphasis on crucial topics; the varying quality of evidence in the included materials; and the inclusion of preprints that were not peer reviewed.

Conclusion

This report delves into the potential applications of ChatGPT in the radiology field, focusing on its role in facilitating the generation of radiology reports, providing CDS, and other related tasks. While ChatGPT's capabilities offer promising solutions to address various challenges, it is crucial to acknowledge its limitations. Its reliance on training data, potential for generating inaccurate or fictitious information, and lack of domain expertise are among the most significant concerns. The implementation of ChatGPT should be approached cautiously, considering patient safety, ethics, and the potential for unintended consequences.

Further research and development are necessary to refine ChatGPT's performance in radiology-specific applications. The evolution of this technology could lead to a more effective and harmonious partnership between AI and health care providers, ultimately contributing to improved patient care and well-being in the radiology field.

References

1.
Radiology resilience now and beyond: report from the Canadian Radiology Resilience Taskforce. Ottawa (ON): Canadian Association of Radiologists; 2020: https://car​.ca/wp-content​/uploads/2020/10​/RAD_Resilience-Report​_2020_ENG_FINAL-2.pdf. Accessed 2023 Nov 2.
2.
Selivanov A, Rogov OY, Chesakov D, Shelmanov A, Fedulova I, Dylov DV. Medical image captioning via generative pretrained transformers. Sci Rep. 2023;13(1):4171. [PMC free article: PMC10010644] [PubMed: 36914733]
3.
Bouthillier A, Meleshko A, Wares P. Capital Health: forecasting demand beyond population growth of MRI / CT / ultrasound. Edmonton (AB): University of Alberta School of Business; 2007: https://www​.ualberta​.ca/business/media-library​/centres/ceo/documents​/studentprojects​/showcase7report.pdf. Accessed 2023 Aug 7.
4.
Cao DJ, Hurrell C, Patlas MN. Current status of burnout in Canadian radiology. Can Assoc Radiol J. 2022;74(1):37-43. [PubMed: 35938488]
5.
Zha N, Neuheimer N, Patlas MN. Etiology of burnout in Canadian radiologists and trainees. Can Assoc Radiol J. 2020;72(1):128-134. [PubMed: 32106709]
6.
Canada’s medical radiation technologists: a case for investment in health workforce. Ottawa (ON): CAMRT; 2022: https://www​.ourcommons​.ca/Content/Committee​/441/HESA/Brief/BR11654747​/br-external​/CanadianAssociationOfMedicalRadiationTechnologists-e.pdf Accessed 2023 Nov 2.
7.
Mohammed S, Rosenkrantz AB, Recht MP. Preventing burnout in the face of growing patient volumes in a busy outpatient CT suite: a technologist perspective. Curr Probl Diagn Radiol. 2020;49(2):70-73. [PubMed: 30803752]
8.
Spieler B, Baum N. Burnout: a mindful framework for the radiologist. Curr Probl Diagn Radiol. 2022;51(2):155-161. [PubMed: 34876307]
9.
Gabelloni M, Faggioni L, Fusco R, et al. Exploring radiologists' burnout in the COVID-19 era: a narrative review. Int J Environ Res Public Health. 2023;20(4):3350. [PMC free article: PMC9966123] [PubMed: 36834044]
10.
Molwitz I, Kemper C, Stahlmann K, et al. Work expectations, their fulfillment, and exhaustion among radiologists of all career levels: what can be learned from the example of Germany. Eur Radiol. 2023. [PMC free article: PMC9999063] [PubMed: 36897346]
11.
Lecler A, Duron L, Soyer P. Revolutionizing radiology with GPT-based models: current applications, future possibilities and limitations of ChatGPT. Diagn Interv Imaging. 2023;104(6):269-274. [PubMed: 36858933]
12.
Berland LL, Hardy SM. Fighting obsolescence: professional assessment in the era of ChatGPT. Appl Radiol. 2023;52(3):20-23.
13.
Alberts IL, Mercolli L, Pyka T, et al. Large language models (LLM) and ChatGPT: what will the impact on nuclear medicine be? Eur J Nucl Med Mol Imaging. 2023;50(6):1549-1552. [PMC free article: PMC9995718] [PubMed: 36892666]
14.
Barat M, Soyer P, Dohan A. Appropriateness of recommendations provided by ChatGPT to interventional radiologists. Can Assoc Radiol J. 2023.
15.
Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. 2023;6:1169595. [PMC free article: PMC10192861] [PubMed: 37215063]
16.
Bhayana R, Krishna S, Bleakney RR. Performance of ChatGPT on a radiology board-style examination: Insights into current strengths and limitations. Radiology. 2023:230582. [PubMed: 37191485]
17.
Ariyaratne S, Iyengar KP, Nischal N, Chitti Babu N, Botchu R. A comparison of ChatGPT-generated articles with human-written articles. Skeletal Radiol. 2023;14:14. [PubMed: 37059827]
18.
Sallam M. The utility of ChatGPT as an example of large language models in healthcare education, research and practice: systematic review on the future perspectives and potential limitations [non-peer reviewed preprint]. medRxiv. 2023;21.
19.
Ferres JML, Weeks WB, Chu LC, Rowe SP, Fishman EK. Beyond chatting: the opportunities and challenges of ChatGPT in medicine and radiology. Diagn Interv Imaging. 2023;104(6):263-264. [PubMed: 36925365]
20.
Biswas SS. Role of ChatGPT in radiology with a focus on pediatric radiology: proof by examples. Pediatr Radiol. 2023;53(5):818-822. [PubMed: 37106089]
21.
Currie G, Singh C, Nelson T, Nabasenja C, Al-Hayek Y, Spuur K. ChatGPT in medical imaging higher education. Radiography (London). 2023;29(4):792-799. [PubMed: 37271011]
22.
Elkassem AA, Smith AD. Potential use cases for ChatGPT in radiology reporting. AJR Am J Roentgenol. 2023;221(3):373-376. [PubMed: 37095665]
23.
Bosbach WA, Senge JF, Nemeth B, et al. Ability of ChatGPT to generate competent radiology reports for distal radius fracture by use of RSNA template items and integrated AO classifier. Curr Probl Diagn Radiol. 2023;17:17. [PubMed: 37263804]
24.
Klenske N. The Good, the Bad and the Ugly of using ChatGPT. RSNA News. 2023 Mar 6. https://www​.rsna.org​/news/2023/march/use-of-chatgpt-in-radiology. Accessed 2023 Aug 8.
25.
Lyu Q, Tan J, Zapadka ME, et al. Translating radiology reports into plain language using ChatGPT and GPT-4 with prompt learning: results, limitations, and potential. Vis Comput Ind Biomed Art. 2023;6(1):9. [PMC free article: PMC10192466] [PubMed: 37198498]
26.
Mallio CA, Sertorio AC, Bernetti C, Beomonte Zobel B. Large language models for structured reporting in radiology: performance of GPT-4, ChatGPT-3.5, Perplexity and Bing. Radiol Med (Torino). 2023;29:29. [PubMed: 37248403]
27.
Ebrahimi B, Howard A, Carlson DJ, Al-Hallaq H. ChatGPT: can a natural language processing tool be trusted for radiation oncology use? Int J Radiat Oncol Biol Phys. 2023;08:08. [PubMed: 37037358]
28.
Jaiswal A, Tang L, Ghosh M, Rousseau JF, Peng Y, Ding Y. RadBERT-CL: factually-aware contrastive learning for radiology report classification. Proc Mach Learn Res. 2021;158:196-208. [PMC free article: PMC9055736] [PubMed: 35498230]
29.
Doshi R, Amin K, Khosla P, Bajaj S, Chheang S, Forman H. Utilizing large language models to simplify radiology reports: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, Google Bard, and Microsoft Bing [non-peer reviewed preprint]. medRxiv. 2023 Jun 7. 10.1101/2023.06.04.23290786. Accessed 2023 Nov 2. [CrossRef]
30.
Jeblick K, Schachtner BM, Dexl J, et al. ChatGPT makes medicine easy to swallow: an exploratory case study on simplified radiology reports [non-peer reviewed preprint]. ArXiv. 2022;abs/2212.14882. [PubMed: 37794249]
31.
Ismail A, Ghorashi NS, Javan R. New horizons: the potential role of OpenAI's ChatGPT in clinical radiology. J Am Coll Radiol. 2023;20(7):696-698. [PubMed: 36972862]
32.
Rao A, Pang M, Kim J, et al. Assessing the utility of ChatGPT throughout the entire clinical workflow [non-peer reviewed preprint]. medRxiv. 2023:2023.2002.2021.23285886. [PMC free article: PMC10481210] [PubMed: 37606976]
33.
Rao A, Kim J, Kamineni M, Pang M, Lie W, Succi MD. Evaluating ChatGPT as an adjunct for radiologic decision-making [non-peer reviewed preprint]. medRxiv. 2023;07:07. [PMC free article: PMC10733745] [PubMed: 37356806]
34.
Liu S, Wright AP, Patterson BL, et al. Assessing the value of ChatGPT for clinical decision support optimization [non-peer reviewed preprint]. medRxiv. 2023:2023.2002.2021.23286254.
35.
Javan R, Kim T, Mostaghni N, Sarin S. ChatGPT's potential role in interventional radiology. Cardiovasc Intervent Radiol. 2023;46(6):821-822. [PubMed: 37127733]
36.
Saliba T, Boitsios G. ChatGPT, a radiologist's perspective. Pediatr Radiol. 2023;53(5):813-815. [PubMed: 37017719]
37.
Fishman EK, Weeks WB, Lavista Ferres JM, Chu LC. Watching innovation in real time: the story of ChatGPT and radiology. Can Assoc Radiol J. 2023:8465371231174817. [PubMed: 37138372]
38.
Ufuk F. The role and limitations of large language models such as ChatGPT in clinical settings and medical journalism. Radiology. 2023;307(3):e230276. [PubMed: 36880943]
39.
Ali R, Tang OY, Connolly ID, et al. Performance of ChatGPT, GPT-4, and Google Bard on a neurosurgery oral boards preparation question bank [non-peer reviewed preprint]. medRxiv. 2023;12. [PubMed: 37306460]
40.
Bhayana R, Bleakney RR, Krishna S. GPT-4 in radiology: Improvements in advanced reasoning. Radiology. 2023:230987. [PubMed: 37191491]
41.
Cheng K, Guo Q, He Y, Lu Y, Gu S, Wu H. Exploring the potential of GPT-4 in biomedical engineering: the dawn of a new era. Ann Biomed Eng. 2023;28:28. [PubMed: 37115365]
42.
Ray PP, Majumder P. ChatGPT in radiology: a deeper look into its limitations and potential pathways for improvement. Can Assoc Radiol J. 2023:8465371231177674. [PubMed: 37171079]
43.
Gampala S, Vankeshwaram V, Gadula SSP. Is artificial intelligence the new friend for radiologists? A review article. Cureus. 2020;12(10):e11137. [PMC free article: PMC7682942] [PubMed: 33240726]
44.
Liu J, Wang C, Liu S. Utility of ChatGPT in clinical practice. J Med Internet Res. 2023;25:e48568. [PMC free article: PMC10365580] [PubMed: 37379067]

Disclaimer: The information in this document is intended to help Canadian health care decision-makers, health care professionals, health systems leaders, and policy-makers make well-informed decisions and thereby improve the quality of health care services. While patients and others may access this document, the document is made available for informational purposes only and no representations or warranties are made with respect to its fitness for any particular purpose. The information in this document should not be used as a substitute for professional medical advice or as a substitute for the application of clinical judgment in respect of the care of a particular patient or other professional judgment in any decision-making process. The Canadian Agency for Drugs and Technologies in Health (CADTH) does not endorse any information, drugs, therapies, treatments, products, processes, or services.

While care has been taken to ensure that the information prepared by CADTH in this document is accurate, complete, and up-to-date as at the applicable date the material was first published by CADTH, CADTH does not make any guarantees to that effect. CADTH does not guarantee and is not responsible for the quality, currency, propriety, accuracy, or reasonableness of any statements, information, or conclusions contained in any third-party materials used in preparing this document. The views and opinions of third parties published in this document do not necessarily state or reflect those of CADTH.

CADTH is not responsible for any errors, omissions, injury, loss, or damage arising from or relating to the use (or misuse) of any information, statements, or conclusions contained in or implied by the contents of this document or any of the source materials.

This document may contain links to third-party websites. CADTH does not have control over the content of such sites. Use of third-party sites is governed by the third-party website owners’ own terms and conditions set out for such sites. CADTH does not make any guarantee with respect to any information contained on such third-party sites and CADTH is not responsible for any injury, loss, or damage suffered as a result of using such third-party sites. CADTH has no responsibility for the collection, use, and disclosure of personal information by third-party sites.

Subject to the aforementioned limitations, the views expressed herein are those of CADTH and do not necessarily represent the views of Canada’s federal, provincial, or territorial governments or any third-party supplier of information.

This document is prepared and intended for use in the context of the Canadian health care system. The use of this document outside of Canada is done so at the user’s own risk.

This disclaimer and any questions or matters of any nature arising from or relating to the content or use (or misuse) of this document will be governed by and interpreted in accordance with the laws of the Province of Ontario and the laws of Canada applicable therein, and all proceedings shall be subject to the exclusive jurisdiction of the courts of the Province of Ontario, Canada.

The copyright and other intellectual property rights in this document are owned by CADTH and its licensors. These rights are protected by the Canadian Copyright Act and other national and international laws and agreements. Users are permitted to make copies of this document for noncommercial purposes only, provided it is not modified when reproduced and appropriate credit is given to CADTH and its licensors.

About CADTH: CADTH is an independent, not-for-profit organization responsible for providing Canada’s health care decision-makers with objective evidence to help make informed decisions about the optimal use of drugs, medical devices, diagnostics, and procedures in our health care system.

Funding: CADTH receives funding from Canada’s federal, provincial, and territorial governments, with the exception of Quebec.

Questions or requests for information about this report can be directed to Requests@CADTH.ca

Copyright Notice

Copyright © 2023 - Canadian Agency for Drugs and Technologies in Health. Except where otherwise noted, this work is distributed under the terms of a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND).

Bookshelf ID: NBK599981 PMID: 38320079
