RT Journal Article
SR Electronic
T1 Assessing the medical reasoning skills of GPT-4 in complex ophthalmology cases
JF British Journal of Ophthalmology
JO Br J Ophthalmol
FD BMJ Publishing Group Ltd.
SP 1398
OP 1405
DO 10.1136/bjo-2023-325053
VO 108
IS 10
A1 Milad, Daniel
A1 Antaki, Fares
A1 Milad, Jason
A1 Farah, Andrew
A1 Khairy, Thomas
A1 Mikhail, David
A1 Giguère, Charles-Édouard
A1 Touma, Samir
A1 Bernstein, Allison
A1 Szigiato, Andrei-Alexandru
A1 Nayman, Taylor
A1 Mullie, Guillaume A
A1 Duval, Renaud
YR 2024
UL http://bjo.bmj.com/content/108/10/1398.abstract
AB Background/aims This study assesses the proficiency of Generative Pre-trained Transformer (GPT)-4 in answering questions about complex clinical ophthalmology cases. Methods We tested GPT-4 on 422 Journal of the American Medical Association Ophthalmology Clinical Challenges, prompting the model to determine the diagnosis (open-ended question) and identify the next step (multiple-choice question). We generated responses using two zero-shot prompting strategies, including zero-shot plan-and-solve+ (PS+), to improve the model's reasoning. We compared the best-performing model with human graders in a benchmarking effort. Results Using PS+ prompting, GPT-4 achieved mean accuracies of 48.0% (95% CI 43.1% to 52.9%) for diagnosis and 63.0% (95% CI 58.2% to 67.6%) for next step. Next-step accuracy did not significantly differ by subspecialty (p=0.44). However, diagnostic accuracy in pathology and tumours was significantly higher than in uveitis (p=0.027). When the diagnosis was accurate, 75.2% (95% CI 68.6% to 80.9%) of the next steps were correct. Conversely, when the diagnosis was incorrect, 50.2% (95% CI 43.8% to 56.6%) of the next steps were accurate. The next step was three times more likely to be accurate when the initial diagnosis was correct (p<0.001). No significant differences were observed between board-certified ophthalmologists and GPT-4 in diagnostic accuracy or decision-making. Among trainees, senior residents outperformed GPT-4 in diagnostic accuracy (p≤0.001 and p=0.049) and in next-step accuracy (p=0.002 and p=0.020). Conclusion Improved prompting enhances GPT-4's performance in complex clinical situations, although it does not surpass ophthalmology trainees in our context. Specialised large language models hold promise for future assistance in medical decision-making and diagnosis. Data availability: All data produced in the present study are available upon reasonable request to the authors. JAMA Ophthalmology's Clinical Challenges are proprietary; access is available through their website.