PT  - JOURNAL ARTICLE
AU  - Fowler, Thomas
AU  - Pullen, Simon
AU  - Birkett, Liam
TI  - Performance of ChatGPT and Bard on the official part 1 FRCOphth practice questions
AID - 10.1136/bjo-2023-324091
DP  - 2024 Oct 01
TA  - British Journal of Ophthalmology
PG  - 1379-1383
VI  - 108
IP  - 10
4099 - http://bjo.bmj.com/content/108/10/1379.short
4100 - http://bjo.bmj.com/content/108/10/1379.full
SO  - Br J Ophthalmol 2024 Oct 01; 108
AB  - Background: Chat Generative Pre-trained Transformer (ChatGPT), a large language model by OpenAI, and Bard, Google's artificial intelligence (AI) chatbot, have been evaluated in various contexts. This study aims to assess these models' proficiency in the part 1 Fellowship of the Royal College of Ophthalmologists (FRCOphth) Multiple Choice Question (MCQ) examination, highlighting their potential in medical education. Methods: Both models were tested on a sample question bank for the part 1 FRCOphth MCQ exam. Their performances were compared with historical human performance on the exam, focusing on the ability to comprehend, retain and apply information related to ophthalmology. We also tested the models on the book 'MCQs for FRCOphth part 1' and assessed their performance across subjects. Results: ChatGPT demonstrated a strong performance, surpassing historical human pass marks and examination performance, while Bard underperformed. The comparison indicates the potential of certain AI models to match, and even exceed, human standards in such tasks. Conclusion: The results demonstrate the potential of AI models, such as ChatGPT, in processing and applying medical knowledge at a postgraduate level. However, performance varied among different models, highlighting the importance of appropriate AI selection. The study underlines the potential for AI applications in medical education and the necessity for further investigation into their strengths and limitations. No data are available.