Clinical science
Capabilities of GPT-4 in ophthalmology: an analysis of model entropy and progress towards human-level medical question answering
Fares Antaki 1,2,3,4,5
Daniel Milad 4,5,6
Mark A Chia 1,2
Charles-Édouard Giguère 7
Samir Touma 4,5,6
Jonathan El-Khoury 4,5,6
Pearse A Keane 1,2,8
Renaud Duval 4,6
1 Moorfields Eye Hospital NHS Foundation Trust, London, UK
2 Institute of Ophthalmology, UCL, London, UK
3 The CHUM School of Artificial Intelligence in Healthcare, Montreal, Quebec, Canada
4 Department of Ophthalmology, University of Montreal, Montreal, Quebec, Canada
5 Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
6 Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Quebec, Canada
7 Institut universitaire en santé mentale de Montréal (IUSMM), Montreal, Quebec, Canada
8 NIHR Moorfields Biomedical Research Centre, London, UK
Correspondence to Dr Renaud Duval; renaud.duval@gmail.com; Mr Pearse A Keane; p.keane@ucl.ac.uk

Abstract

Background Evidence is needed on the performance of Generative Pre-trained Transformer 4 (GPT-4), a large language model (LLM), in ophthalmology question answering.

Methods We tested GPT-4 on two 260-question multiple-choice question sets from the Basic and Clinical Science Course (BCSC) Self-Assessment Program and the OphthoQuestions question banks. We compared the accuracy of GPT-4 models run at varying temperatures (a sampling parameter that controls response randomness, often described as creativity) and had human graders evaluate their responses in a subset of questions. We also compared the best-performing GPT-4 model with GPT-3.5 and with historical human performance.
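The abstract does not reproduce the authors' querying setup; the sketch below illustrates how a board-style question might be posed to GPT-4 at several temperatures using the OpenAI Python client. The prompt template, the placeholder question and the model identifier are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: query GPT-4 at several temperatures for one multiple-choice
# question, assuming the OpenAI Python client (pip install openai). The prompt
# and question are placeholders, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "<board-style multiple-choice question text, with options A-D>"

def ask(temperature: float) -> str:
    """Return the model's answer at a given sampling temperature."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=temperature,  # 0 = near-deterministic; higher = more varied
        messages=[
            {"role": "system", "content": "Answer with a single letter (A-D)."},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

for t in (0.0, 0.3, 0.7, 1.0):
    print(f"temperature={t}: {ask(t)}")
```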

Results GPT-4-0.3 (GPT-4 with a temperature of 0.3) achieved the highest accuracy among the GPT-4 models, with 75.8% on the BCSC set and 70.0% on the OphthoQuestions set. The combined accuracy was 72.9%, an 18.3 percentage-point raw improvement over GPT-3.5 (p<0.001). Human graders preferred responses from models with a temperature above 0 (more creative settings). Exam section, question difficulty and cognitive level were all predictive of GPT-4-0.3 answer accuracy. GPT-4-0.3's performance was numerically superior to human performance on the BCSC (75.8% vs 73.3%) and OphthoQuestions (70.0% vs 63.0%) sets, but the differences were not statistically significant (p=0.55 and p=0.09, respectively).
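The abstract reports p values for the GPT-4 versus GPT-3.5 comparison without naming the test; for paired per-question correctness on the same items, McNemar's test is a standard choice. The sketch below applies it to simulated correctness vectors whose rates match the reported 72.9% and the implied 54.6% for GPT-3.5 (72.9% minus the 18.3-point raw difference); the simulated data and the test choice are assumptions, not the authors' analysis.

```python
# Illustrative sketch: paired comparison of two models on the same questions
# via McNemar's test. Correctness vectors are simulated to match the reported
# accuracies; this is not the study's actual data or necessarily its test.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n = 520  # 2 x 260 questions, as in the combined question sets
gpt4_correct = rng.random(n) < 0.729   # simulated at the reported 72.9%
gpt35_correct = rng.random(n) < 0.546  # simulated at the implied 54.6%

# 2x2 agreement table: rows = GPT-4 correct/incorrect, cols = GPT-3.5
table = np.array([
    [np.sum(gpt4_correct & gpt35_correct),  np.sum(gpt4_correct & ~gpt35_correct)],
    [np.sum(~gpt4_correct & gpt35_correct), np.sum(~gpt4_correct & ~gpt35_correct)],
])
result = mcnemar(table, exact=False, correction=True)
print(f"McNemar statistic={result.statistic:.2f}, p={result.pvalue:.4f}")
```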

Conclusion GPT-4, an LLM not trained specifically on ophthalmology data, performs significantly better than its predecessor on simulated ophthalmology board-style exams. Its performance was numerically superior to historical human performance, although that difference did not reach statistical significance in our study.

  • Medical Education

Data availability statement

Data may be obtained from a third party and are not publicly available. All data produced in the present study are available on reasonable request to the authors. The specific question sets are the property of the BCSC Self-Assessment Program and OphthoQuestions and cannot be shared.


Footnotes

  • PAK and RD are joint senior authors.

  • X @FaresAntaki, @pearsekeane

  • Contributors Overall responsibility/guarantor (RD); conception and design of the study (FA, MAC, PAK and RD); data collection (FA, DM, ST and JE-K); data analysis (FA and C-EG); writing of the manuscript and preparation of figures (FA and MAC); supervision (PAK and RD); review and discussion of the results (all authors); editing and revision of the manuscript (all authors).

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
