dc.contributor.author: Nguyen, Daniel
dc.contributor.author: Swanson, Daniel
dc.contributor.author: Newbury, Alex
dc.contributor.author: Kim, Young H
dc.date.accessioned: 2024-01-18T17:59:02Z
dc.date.available: 2024-01-18T17:59:02Z
dc.date.issued: 2023-12-15
dc.identifier.citation: Nguyen D, Swanson D, Newbury A, Kim YH. Evaluation of ChatGPT and Google Bard Using Prompt Engineering in Cancer Screening Algorithms. Acad Radiol. 2023 Dec 15:S1076-6332(23)00618-9. doi: 10.1016/j.acra.2023.11.002. Epub ahead of print. PMID: 38103973. [en_US]
dc.identifier.eissn: 1878-4046
dc.identifier.doi: 10.1016/j.acra.2023.11.002 [en_US]
dc.identifier.pmid: 38103973
dc.identifier.uri: http://hdl.handle.net/20.500.14038/52976
dc.description.abstract: Large language models (LLMs) such as ChatGPT and Bard have emerged as powerful tools in medicine, showcasing strong results in tasks such as radiology report translations and research paper drafting. While their implementation in clinical practice holds promise, their response accuracy remains variable. This study aimed to evaluate the accuracy of ChatGPT and Bard in clinical decision-making based on the American College of Radiology Appropriateness Criteria for various cancers. Both LLMs were evaluated in terms of their responses to open-ended (OE) and select-all-that-apply (SATA) prompts. Furthermore, the study incorporated prompt engineering (PE) techniques to enhance the accuracy of LLM outputs. The results revealed similar performances between ChatGPT and Bard on OE prompts, with ChatGPT exhibiting marginally higher accuracy in SATA scenarios. The introduction of PE also marginally improved LLM outputs in OE prompts but did not enhance SATA responses. The results highlight the potential of LLMs in aiding clinical decision-making processes, especially when guided by optimally engineered prompts. Future studies in diverse clinical situations are imperative to better understand the impact of LLMs in radiology. [en_US]
dc.language.iso: en [en_US]
dc.relation.ispartof: Academic Radiology [en_US]
dc.relation.url: https://doi.org/10.1016/j.acra.2023.11.002 [en_US]
dc.rights: Copyright © 2023 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved. [en_US]
dc.title: Evaluation of ChatGPT and Google Bard Using Prompt Engineering in Cancer Screening Algorithms [en_US]
dc.type: Journal Article [en_US]
dc.source.journaltitle: Academic radiology
dc.source.country: United States
dc.identifier.journal: Academic radiology
dc.contributor.department: Radiology [en_US]
dc.contributor.department: T.H. Chan School of Medicine [en_US]
dc.contributor.student: Daniel Swanson
