Abstract
As the healthcare community increasingly harnesses the power of generative artificial intelligence (AI), critical issues of security, privacy and regulation take centre stage. In this paper, we explore the security and privacy risks of generative AI from model-level and data-level perspectives. Moreover, we discuss the potential consequences and present case studies within the domain of ophthalmology. Model-level risks include knowledge leakage from the model and model safety under AI-specific attacks, while data-level risks involve unauthorised data collection and data accuracy concerns. Within the healthcare context, these risks can bear severe consequences, encompassing potential breaches of sensitive information, violations of privacy rights and threats to patient safety. This paper not only highlights these challenges but also elucidates governance-driven solutions that adhere to AI and healthcare regulations. We advocate for preparedness against potential threats, call for transparency enhancements and underscore the necessity of clinical validation before real-world implementation. Improving the security and privacy of generative AI warrants an emphasis on the role of ophthalmologists and other healthcare providers, and the timely introduction of comprehensive regulations.
Keywords: Public health
Data availability statement
Data sharing not applicable as no data sets were generated and/or analysed for this study.
Footnotes
YW and CL are joint first authors.
Contributors YW and CL developed the concept of the manuscript. YW, CL and KZ drafted the manuscript. XH and TZ made critical revisions of the manuscript. XH and TZ supervised the entire work. All authors approved the completed version. We used ChatGPT-4 only for proofreading and grammar checking of this review and did not rely on the tool for any of the ideas or content presented in this paper.
Funding This study was funded by the National Natural Science Foundation of China (82000901, 82101171), the Global STEM Professorship Scheme (P0046113), the Fundamental Research Funds of the State Key Laboratory of Ophthalmology (303060202400201209) and the Outstanding PI Research Funds of the State Key Laboratory of Ophthalmology (3030902113083).
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.