
At a glance
Frank Larkin, Editor in Chief
Cornea and External Diseases Service, Moorfields Eye Hospital, London, UK
Correspondence to Frank Larkin; f.larkin@ucl.ac.uk


Capabilities of GPT-4 in ophthalmology: an analysis of model entropy and progress towards human-level medical question answering (see page 1371)

The large language model Generative Pre-trained Transformer 4 (GPT-4, OpenAI) demonstrated strong performance in answering simulated ophthalmology board-style examination questions, surpassing its predecessor GPT-3.5.

Performance of ChatGPT and Bard on the official Part 1 FRCOphth practice questions (see page 1379)

ChatGPT, an AI model, outperformed human candidates and Google’s Bard in a Part 1 FRCOphth multiple choice question examination, highlighting both AI’s potential in medical education and the necessity for robust safety and ethical guidelines.

Unveiling the clinical incapabilities: a benchmarking study of GPT-4V(ision) for ophthalmic multimodal image analysis (see page 1384)

The capabilities of a chatbot based on the large language model GPT-4V(ision) in handling queries related to slit-lamp, fundus and ophthalmic ultrasound images were evaluated. Only 30.6%, 21.5% and 55.6% of the responses about ophthalmic multimodal images generated by GPT-4V(ision) were considered accurate, highly usable and harm-free, respectively. GPT-4V is not yet suitable for clinical decision-making and patient consultation in ophthalmology.

Development and evaluation of a large language model of ophthalmology in Chinese (see page 1390)

One limitation of large language models (LLMs) is the scarcity of training in non-English languages. In response to the inadequate Chinese conversational abilities and insufficient medical specialisation of existing LLMs, we have developed a Chinese-specific ophthalmic LLM capable of adapting to various clinical scenarios.

Assessing the medical reasoning skills of GPT-4 in complex ophthalmology cases (see page 1398)

GPT-4 demonstrated strong diagnostic and decision-making accuracy in complex ophthalmology cases. This study indicates potential for integrating AI in clinical decision support systems in patient care.

Deep learning model for extensive smartphone-based diagnosis and triage of cataracts and multiple corneal diseases (see page 1406)

As corneal diseases and cataracts can be imaged clearly using a smartphone camera without specific attachments, the authors developed an AI-assisted diagnosis and triage system using smartphone-captured images. The performance of this artificial intelligence model exceeded that of board-certified ophthalmologists.

Creating realistic anterior segment optical coherence tomography images using generative adversarial networks (see page 1414)

This study developed and validated a generative adversarial network (GAN) capable of generating realistic high-resolution anterior segment optical coherence tomography (AS-OCT) images that were indistinguishable from real images and suitable for machine learning tasks.

Digital ray: enhancing cataractous fundus images using style transfer generative adversarial networks to improve retinopathy detection (see page 1423)

This study proposed digital ray, an intelligent system that generates realistic post-operative fundus images for cataract patients, improving image quality and the accuracy of retinopathy evaluation.

Using AI to improve human performance: efficient retinal disease detection training with synthetic images (see page 1430)

A teaching method using AI-generated synthetic images significantly enhanced the diagnostic accuracy of trainee orthoptists, allowing them to perform on par with state-of-the-art AI models in less than an hour on average. This is an example of generative AI’s potential to empower rather than replace human skills.

Validation of a deep learning model for automatic detection and quantification of five OCT critical retinal features associated with neovascular AMD (see page 1436)

Deep learning may be applied to structural OCT images to quantify features associated with neovascular AMD.

Comparing generative and retrieval-based chatbots in answering patient questions regarding age-related macular degeneration and diabetic retinopathy (see page 1443)

We compared the performance of generative versus retrieval-based chatbots in answering patient inquiries on retinal disorders. ChatGPT-4 and ChatGPT-3.5 demonstrated superior performance, followed by Google Bard and OcularBERT. Generative chatbots can answer domain-specific questions that fall outside their training data.

ICGA-GPT: report generation and question answering for indocyanine green angiography images (see page 1450)

This indocyanine green angiography (ICGA)-GPT model demonstrated satisfactory performance in both report generation and subsequent question answering, as evidenced by quantitative and qualitative assessments. This pipeline underscores the potential of large language models in aiding ocular image interpretation.

Exploring AI chatbots’ capability to suggest surgical planning in ophthalmology: ChatGPT versus Google Gemini analysis of retinal detachment cases (see page 1457)

Presented with retinal detachment case descriptions, ChatGPT and Google Bard (now Gemini) outlined coherent surgical plans that showed good concordance with planning by vitreoretinal specialists. ChatGPT made fewer mistakes and outperformed Bard on subjective quality score.

Large language models: a new frontier in paediatric cataract patient education (see page 1470)

Large language models can generate high-quality, understandable, and accurate patient education materials on paediatric cataract, and they can improve the readability of existing patient education materials without compromising quality or accuracy.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

Footnotes

  • Contributors N/A.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.
