Re: Reproducibility of aberrometry-based intraoperative refraction during cataract surgery: statistical issues
Dear Sirs, We are grateful to Sabour and Ghassemi for their interest in our recent article.1 In our understanding, they query why we did not use the intraclass correlation coefficient (ICC) as a measure of precision. Our test-retest reliability (absolute-agreement ICC) is derived from the maximum likelihood (ML) estimates of a one-way random-effects model of the form yij = μ + ri + εij, where yij is the measurement of the ith eye on the jth occasion (say, the spherical equivalent measured on the first, second or third occasion), μ is the mean rating (say, the mean spherical equivalent), ri is the eye random effect and εij is a random error. As described in Rabe-Hesketh and Skrondal,2 the reliability is calculated as ρ = σr²/(σr² + σε²), where σr² is the between-subject variance and σε² is the within-subject variance. Here we assume that n eyes are randomly selected from the population of potential eyes, and that each one is measured by a different set of k measurements, randomly drawn from the population of potential measurements. The consistency-of-agreement ICC is not defined in this case, as each eye is evaluated by a different set of measurements; there is thus no between-measurements variability in this model. We agree that clinical judgment is paramount, which is why, indeed, we state in our paper (p 5) that "the clinical interpretation of the agreement range (...) is vital" (emphasis added). It is against the background of such clinical interpretations in the paper, and of the above explanations, that we derived our conclusions in, as we trust, an appropriate way. Yours respectfully,
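For readers who wish to verify such estimates, the variance-ratio ρ = σr²/(σr² + σε²) has a simple sample analogue. The sketch below is our own illustration, not the estimation code used in the paper: on balanced data, the absolute-agreement ICC from a one-way random-effects design can be computed from the ANOVA mean squares (the function name and the data are invented for demonstration):

```python
def icc_oneway(y):
    """Absolute-agreement ICC from a one-way random-effects design.

    y: list of n subjects (eyes), each a list of k repeated measurements.
    The ANOVA moment estimator
        ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW)
    is the sample analogue of rho = sigma_r^2 / (sigma_r^2 + sigma_e^2).
    """
    n, k = len(y), len(y[0])
    grand = sum(sum(row) for row in y) / (n * k)
    row_means = [sum(row) / k for row in y]
    # Between-subject mean square (variability of eye means around the grand mean)
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    # Within-subject mean square (variability of repeats around each eye's mean)
    msw = sum((v - m) ** 2 for row, m in zip(y, row_means) for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

With maximum likelihood the variance components are fitted directly, but on balanced data the moment estimator above is a close analogue of the model-based reliability.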
1. Huelle JO, Katz T, Druchkiv V, et al. First clinical results on the feasibility, quality and reproducibility of aberrometry-based intraoperative refraction during cataract surgery. Br J Ophthalmol 2014.
2. Rabe-Hesketh S, Skrondal A. Multilevel and Longitudinal Modeling Using Stata, Second Edition. Stata Press, 2008.
3. McAlinden C, Khadka J, Pesudovs K. Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology. Ophthalmic Physiol Opt 2011;31(4):330-8.
Conflict of Interest:
An additional explanation for azithromycin's efficacy in treating meibomian gland dysfunction
We would like to congratulate Dr. Kashkouli et al for their recently published paper "Oral azithromycin versus doxycycline in meibomian gland dysfunction: a randomized double masked open label clinical trial". The authors found that azithromycin induced a significantly better overall clinical response than doxycycline, and attributed this effect to the antibacterial and anti-inflammatory actions of azithromycin. However, we would like to suggest an additional explanation for their results. We have discovered that azithromycin can directly increase lipid accumulation and promote terminal differentiation of human meibomian gland epithelial cells in vitro.1 This effect may be due to azithromycin's cationic amphiphilic structure and an associated phospholipidosis.2 We have also discovered that this stimulatory action of azithromycin on human meibomian gland epithelial cells is unique, and cannot be duplicated by exposure to doxycycline, minocycline or tetracycline.2,3 In effect, this lipid-promoting activity of azithromycin may improve the quality of meibomian gland secretions, alleviate evaporative dry eye, and attenuate such additional signs as conjunctival redness and ocular surface staining. Overall, this ability of azithromycin to promote human meibomian gland epithelial cell function may account for its greater efficacy, as compared with doxycycline, in alleviating the signs and symptoms of human meibomian gland dysfunction.
1. Liu Y, Kam WR, Ding J, et al. Effect of azithromycin on lipid accumulation in immortalized human meibomian gland epithelial cells. JAMA Ophthalmol 2014;132(2):226-8.
2. Liu Y, Kam WR, Ding J, et al. One man's poison is another man's meat: using azithromycin-induced phospholipidosis to promote ocular surface health. Toxicology 2014;320:1-5.
3. Liu Y, Kam WR, Ding J, et al. The effect of macrolide and tetracycline antibiotics on lipid expression in human meibomian gland epithelial cells. ARVO abstract 2014.
Conflict of Interest:
A provisional patent has been filed around the technology mentioned in our paper. The intellectual property for the application is owned by the Schepens Eye Research Institute/Massachusetts Eye and Ear.
Reporting of harms in clinical trials: why do we continue to fail?
O'Day and colleagues describe in their recent paper the inadequate reporting of harms in randomized controlled trials (RCTs) of intravitreal therapies for diabetic macular oedema (O'Day et al., 2014). At first glance, the results are alarming: on average, only six of the recommendations in the 2004 CONSORT guidelines extension covering harms were met. Ophthalmologists are not alone in their inadequate reporting, however. Several other studies have found similar, and often worse, examples of heterogeneous and selective reporting of harms in RCTs in psychological medicine (Jonsson et al., 2014), asthma (Ntala et al., 2013) and cancer (Sivendran et al., 2014). Why, then, are we falling so far short of these internationally agreed standards, and who is to blame?
Some might argue that there are too many recommendations. The CONSORT guidelines have 25 points, one of which concerns harm reporting (with ten recommendations) (Schulz et al., 2010). Whilst most published RCTs do not report them all, many do report on half or more (interquartile range 5-7 for the ophthalmic studies cited) (O'Day et al., 2014). In any case, they remain "recommendations", not "requirements". To address this, the recommendations might be adapted to suggest that full reporting be made publicly available elsewhere, limiting the length of published reports.
Perhaps the authors of the published RCTs are to blame? Could it be that they simply do not have the data needed to fulfill the ten recommendations? Whilst this is possible, and in some cases may highlight the need for investment in training, most authors could likely fulfill more of the ten recommendations than they currently do: for example, the number of patients withdrawn due to an adverse event (reported in only 36% of ophthalmic trials) (O'Day et al., 2014) is essential data that most investigators surely can trace (Ioannidis and Lau, 2001).
Finally, it might be argued that journal editors are to blame for failing to enforce these standards. Yet this may only be a realistic option for editors of the minority of the most sought-after journals: raising the bar too high may simply lead authors to take their work elsewhere, where they know it will be accepted.
How, then, to move forward? Like many "culture change" challenges faced in modern medicine, the adequate adoption of such standards requires a concerted effort by all parties involved in the research-publication pathway. John Kotter's 8-step change model provides some insights as to how, including establishing a sense of urgency by highlighting the harms associated with poor reporting, and forming powerful coalitions(Kotter, 1996). Most importantly, however, we as researchers must champion the improvement of reporting standards in our own work and demand this of our peers. Only then can we hope for change.
Ioannidis, J.A., and Lau, J. (2001). Completeness of safety reporting in randomized trials: An evaluation of 7 medical areas. JAMA 285, 437-443.
Jonsson, U., Alaie, I., Parling, T., and Arnberg, F.K. (2014). Reporting of harms in randomized controlled trials of psychological interventions for mental and behavioral disorders: a review of current practice. Contemp. Clin. Trials 38, 1-8.
Kotter, J.P. (1996). Leading Change (Harvard Business Press).
Ntala, C., Birmpili, P., Worth, A., Anderson, N.H., and Sheikh, A. (2013). The quality of reporting of randomised controlled trials in asthma: a systematic review. Prim. Care Respir. J. 22, 417-424.
O'Day, R., Walton, R., Blennerhassett, R., Gillies, M.C., and Barthelmes, D. (2014). Reporting of harms by randomised controlled trials in ophthalmology. Br. J. Ophthalmol. 98, 1003-1008.
Schulz, K.F., Altman, D.G., and Moher, D. (2010). CONSORT 2010 Statement: Updated Guidelines for Reporting Parallel Group Randomized Trials. Ann. Intern. Med. 152, 726-732.
Sivendran, S., Latif, A., McBride, R.B., Stensland, K.D., Wisnivesky, J., Haines, L., Oh, W.K., and Galsky, M.D. (2014). Adverse event reporting in cancer clinical trial publications. J. Clin. Oncol. 32, 83-89.
Conflict of Interest:
Perceptual learning in visual acuity and contrast sensitivity: continuous improvement or discovery effect?
In their paper "Repetitive tests of visual function improved visual acuity in young subjects", Otto and Michelson [1] assessed effects of practice on visual acuity, using the Freiburg Visual Acuity Test "FrACT" developed by one of us [2,3]. At first glance they seem to confirm our findings [4], which showed a marked increase of visual acuity after visual training, of more than 0.1 logMAR. On closer inspection, discrepancies emerge: during the first 7 sessions, Otto and Michelson found only random variability. Then, suddenly, acuity improved in most subjects, and inter-subject variability decreased markedly. In contrast, we found a continuous improvement starting already during the first session (one session comprised 14 acuity test runs) [4]. When we provided feedback by displaying the correct orientation after the response, we found a marked additional step increase in performance already between the first and second session. (Otto and Michelson do not mention whether or not they employed feedback.) The shape of the time course has theoretical implications: a sudden increase of visual acuity like that reported by Otto and Michelson would suggest a "discovery effect" rather than a "fluency effect" [5].
Otto and Michelson's data on contrast sensitivity are also challenging to interpret, not least because the authors use the terms contrast threshold and contrast sensitivity interchangeably (the one is the reciprocal of the other). Looking at figure 1B and supplementary figure 2B, we suspect that the stated effect size of 45% (derived only from the last data point in the graph) can be attributed to random fluctuations, given the non-monotonic change of average contrast sensitivity over sessions. Otto and Michelson seem to share our reservation, since they write in the Discussion that "the progress was not consistent enough to show a significant percentage development in one direction". Hopefully, the 45% figure will not stick with readers, who could miss the fact that the p values were not derived from the same comparisons as the effect size.
The reason why the learning curves of Otto and Michelson's subjects are different from those of Heinrich et al's subjects remains speculative. The methods appear to be similar. It is, however, unclear whether Otto and Michelson used feedback and whether or not they presented the optotypes separately or in rows. Furthermore, the participants underwent a practice scheme that included several different visual tasks, so it is difficult to attribute improvement of performance to any single task or combination of tasks. Clearly, future careful studies in this exciting field promise further insights and clinical applications.
1 Otto J, Michelson G. Repetitive tests of visual function improved visual acuity in young subjects. Br J Ophthalmol 2014;98:383-6. doi:10.1136/bjophthalmol-2013-304262
2 Bach M. The Freiburg Visual Acuity Test - Automatic measurement of visual acuity. Optom Vis Sci 1996;73:49-53.
3 Bach M. Homepage of the Freiburg Visual Acuity & Contrast Test ('FrACT'). 2009. http://michaelbach.de/fract.html
4 Heinrich SP, Krueger K, Bach M. The dynamics of practice effects in an optotype acuity task. Graefes Arch Clin Exp Ophthalmol 2011;249:1319-26. doi:10.1007/s00417-011-1675-z
5 Kellman PJ, Garrigan P. Perceptual learning and human expertise. Phys Life Rev 2009;6:53-84. doi:10.1016/j.plrev.2008.12.001
Conflict of Interest:
Reproducibility of aberrometry-based intraoperative refraction during cataract surgery: statistical issues
We were interested to read the paper by Huelle and colleagues published in the May 2014 issue of the BJO. The authors aimed to provide the first clinical data on the feasibility, quality and precision of intraoperative wavefront aberrometry (IWA)-based refraction in patients with cataract. Precision (reproducibility) and measurement quality were evaluated by the 'limits of agreement' approach, regression analysis, correlation analysis, analysis of variance (ANOVA) and odds ratios for predicting measurement failure. Wavefront map (WFM) quality was objectivised and compared with the Pentacam Nuclear Staging analysis.1 The authors reported high consistency across repeated measures for mean spherical equivalent (SE) differences, -0.01 D in aphakia and -0.01 D in pseudophakia, but the ranges were wide (limits of agreement +0.69 D to -0.72 D and +1.53 D to -1.54 D, respectively). With increasing WFM quality, higher precision in measurements was observed.1 Why did the authors not use the intraclass correlation coefficient (ICC), in its agreement rather than consistency form, to assess precision (reproducibility)?2-4 Regarding inter-observer reliability or agreement, it is good to know that statistics cannot provide a simple substitute for clinical judgment. The authors also reported that IWA refraction in aphakia, for instance, appears to be reliable once stable and pressurised anterior chamber conditions are achieved. Such a conclusion can be misleading owing to inappropriate use and interpretation of statistical tests for evaluating reliability.2-4
Siamak Sabour, MD, PhD1 Fariba Ghassemi, MD2 1 Shahid Beheshti University of Medical Sciences, Tehran, Iran 2 Farabi Hospital, Eye Research Centre, Tehran University of Medical Sciences, Tehran, Iran
1. Huelle JO, Katz T, Druchkiv V, et al. First clinical results on the feasibility, quality and reproducibility of aberrometry-based intraoperative refraction during cataract surgery. Br J Ophthalmol 2014 May 30. pii: bjophthalmol-2013-304786. doi: 10.1136/bjophthalmol-2013-304786. [Epub ahead of print]
2. Jekel JF. Epidemiology, Biostatistics and Preventive Medicine. 1st edition, 2008.
3. Rothman K. Modern Epidemiology. 3rd edition, 2010.
4. Szklo M, Nieto FJ. Epidemiology: Beyond the Basics. 2nd edition, 2007.
Conflict of Interest:
Fractured Ozurdex implant during the procedure
In the recent article published in the British Journal of Ophthalmology, Agrawal et al1 reported two cases of desegmentation of an Ozurdex implant in the vitreous cavity. In this report, the authors comment that Allergan confirmed that fractured implants in the applicator have not been found to date during the quality control process. We recently reported2 two cases of implant fragmentation in a response to Roy et al.3 Those authors reported a broken implant at the end of the injection and postulated that the reason for breakage could be friction at the tip of the needle or a drug loading problem. Our first case showed a similar fundus image, with dexamethasone implant fragments within the vitreous cavity one month after injection. In the second case, the implant broke after ejection during an instructional wet-lab, outside surgical conditions. This event was recorded in a video that is available with our report. The video therefore proves that a break during implantation is entirely feasible and, taking into account the comment from Allergan, reinforces the view that friction at the tip of the needle during ejection must be the reason for breakage. We agree with the authors that patients with fragmented implants should be followed up carefully to monitor for unexpected complications, all the more so in patients with zonular dehiscence.
REFERENCES:
1. Agrawal R, Fernandez-Sanz G, Bala S, et al. Desegmentation of Ozurdex implant in vitreous cavity: report of two cases. Br J Ophthalmol 2014;98:961-3.
2. Cabrerizo J, Garay-Aramburu G. Re: intravitreal dexamethasone implant fragmentation. Can J Ophthalmol 2013;48:343.
3. Roy R, Hegde S. Split Ozurdex implant: a caution. Can J Ophthalmol 2013;48:e15-6.
Conflict of Interest:
Is the term dry eye a misnomer?
As an ophthalmologist of many years' standing, I continue to find the diagnosis of dry eye difficult unless it is severe, and colleagues I have spoken to report the same experience. As detailed in this paper, many processes are at work in the pathophysiology, of which dryness may be only one. May I suggest that it might be helpful, for our understanding of and approach to sore eyes, to be less dogmatic about attributing the discomfort in these eyes to dryness. The discomfort may be caused by a host of factors, which may or may not include dryness.
Conflict of Interest:
Significant design flaws bias the results in favour of aflibercept
Dr Cho and colleagues present data on a very small cohort of patients with wet AMD who were switched from either bevacizumab or ranibizumab to aflibercept. Of note, this subgroup comprised approximately 8% of the total number of patients switched to aflibercept.
Any retrospective review is likely to be heavily biased by the anticipated 'treatment benefit' of a new therapy, particularly if, as in this case, the readers of retinal optical coherence tomography (OCT) scans have the ability to manually correct and alter data that were originally generated by semi-automated methods. In this study, the magnitude of change observed in central foveal thickness was of marginal clinical relevance (a 7.8% reduction from baseline) after one injection, and was further attenuated by 6 months; these results suggest that the retinal OCT scan reader was an important source of bias. This view is further supported by the observation that visual acuity, which may be less liable to investigator-related bias, remained unchanged throughout.
Retrospective reviews are of scientific value when conducted in a rigorous and independent manner. Selective reporting of data from this study inevitably undermines any clinical conclusions regarding the relevance of switching patients from anti-VEGF therapies to aflibercept.
Conflict of Interest:
I have consulted for a number of pharmaceutical companies including Novartis, the MAH of ranibizumab in the EU.
The long-term psychosocial impact of correction surgery for adults with strabismus
We read with interest Jackson and Morris's response to our letter.
The authors indicated that it was not possible to conduct a repeated-measures ANOVA using SPSS. However, SPSS provides several ways to analyze a repeated-measures ANOVA through the general linear model command, and there are several excellent texts that illustrate how to conduct an ANOVA with a repeated-measures design in the SPSS environment.
Second, they posed a question about whether it was reasonable to assume that the data collected 18 months post-surgery were specifically related to the data collected previously. To answer: yes, any time several measurements are collected over time on the same subjects, the data points within each subject are related, and statistical procedures that account for this clustering must therefore be used. The fact that the study was exploratory in nature does not preclude the application of basic statistical principles. On the other hand, the authors correctly noted that they had also analyzed the data using a 2x3 design. This approach is reasonable. Unfortunately, the actual p values were not provided for the readers in the original article or in their response to our editorial.
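To illustrate the clustering point (a schematic sketch of the general technique with invented data, not the authors' SPSS analysis or their study data), a one-way repeated-measures ANOVA partitions between-subject variability out of the error term before forming the F ratio, so the test respects the within-subject correlation:

```python
def rm_anova_f(y):
    """F statistic for a one-way repeated-measures ANOVA.

    y: list of n subjects, each a list of k within-subject measurements
    (e.g. the same score at k time points). The subject sum of squares
    is removed from the total, so the error term excludes the stable
    between-subject differences that make repeated observations related.
    """
    n, k = len(y), len(y[0])
    grand = sum(sum(row) for row in y) / (n * k)
    time_means = [sum(row[j] for row in y) / n for j in range(k)]
    subj_means = [sum(row) / k for row in y]
    ss_time = n * sum((m - grand) ** 2 for m in time_means)   # effect of time point
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)   # clustering within subjects
    ss_total = sum((v - grand) ** 2 for row in y for v in row)
    ss_error = ss_total - ss_time - ss_subj                   # residual after both
    ms_time = ss_time / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return ms_time / ms_error
```

Ignoring the subject term (as an ordinary between-groups ANOVA would) folds the between-subject variance back into the error, which is exactly the misspecification the letter cautions against.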
Conflict of Interest: