
ARVO 2021 Blog Post Pt 3

Introduction

This post is the third in our series of notes from ARVO.  It follows an initial introductory post about deep learning-based segmentation and a follow-up post covering AI prediction models.  This post turns back to the more traditional image processing and computer vision methods that underpin the automated analyses available today.  Again, it is by no means comprehensive, but hopefully of interest to those who could not attend the meeting, and indeed to our Orion user community.  Deep learning contributions dominated the meeting, so this post is, unfortunately, brief.

More Traditional Analyses

Reading centers and pharma companies alike rely on proven biomarkers that are clinically available for many trials.  In ophthalmology the standard of care is OCT, and it dominates clinical trials as well.  Together this means that layer thicknesses, as measured using OCT, are the most used biomarkers in ophthalmology and in clinical trials.
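
To make the biomarker concrete: given segmented upper and lower boundaries for a layer, thickness is simply the boundary separation scaled by the device’s axial resolution.  Here is a minimal sketch in Python; the boundary arrays and the axial spacing value are illustrative assumptions, not taken from any particular device:

```python
import numpy as np

# Hypothetical per-A-scan boundary positions (in pixels) from a segmentation,
# e.g. the inner and outer boundaries of one retinal layer on a single B-scan.
upper_boundary_px = np.array([120.0, 121.0, 119.5, 120.5])  # illustrative values
lower_boundary_px = np.array([150.0, 152.5, 149.0, 151.0])

AXIAL_SPACING_UM = 3.9  # assumed axial pixel spacing; device-dependent in practice

# Layer thickness per A-scan, converted from pixels to micrometers.
thickness_um = (lower_boundary_px - upper_boundary_px) * AXIAL_SPACING_UM
print(f"Mean layer thickness: {thickness_um.mean():.1f} um")
```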

All existing software packages for these endpoints use traditional image processing methods, with a smattering of machine learning under the hood (note that this could be as simple as learning mean tissue intensities as priors, so do not assume AI!).  Among these are device-specific algorithms from the manufacturers.  But if you plan to use these, then consider the following:

  1. They are not necessarily best in class.
  2. If there is an error, the reading center has to deal with their editing software.
  3. If the trial involves more than one device, you have to deal with 1) and 2) above in more than one software application (one per device).

In an ideal world, you’d want best-in-class automated segmentation, excellent editing tools to correct any possible errors and validate the reported result, and all of this within a single software application.  I am, of course, describing Orion, but as with any other center, you should first evaluate the software for such applicability.  In the poster “Assessing the validity of a cross-platform retinal image segmentation tool in normal and diseased retina”, a group from UCSD covering two ophthalmic centers, Shiley and Jacobs, does just that.  In this case they compare the Heidelberg Spectralis algorithms (Heyex version 2.0) to Orion.  Their findings were:

  • Orion was significantly better than Heidelberg at segmenting the NFL and INL layers in normal eyes (p < 0.05).
  • Orion was significantly better than Heidelberg at segmenting the GC_IPL, INL, and OPL layers (p < 0.05) in DME eyes when compared to the ‘gold standard’ of manual segmentation.
  • Overall, the Orion cross-platform software segmented normal and diseased retina more accurately (a sketch of how such a paired comparison can be run follows this list).
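
The poster does not state which statistical test produced the p-values above.  As a purely illustrative sketch, here is how a paired comparison of per-eye segmentation errors against manual grading might be run with a Wilcoxon signed-rank test; the arrays and values are invented for the example and are not the poster’s data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-eye mean absolute boundary errors (um) versus manual grading,
# one entry per eye, for the same eyes segmented by each algorithm.
errors_algorithm_a = np.array([2.1, 1.8, 2.4, 1.9, 2.2, 2.0, 1.7, 2.3])
errors_algorithm_b = np.array([2.9, 2.5, 3.1, 2.6, 2.8, 3.0, 2.4, 2.7])

# Paired, non-parametric test of whether the two error distributions differ.
stat, p_value = wilcoxon(errors_algorithm_a, errors_algorithm_b)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p_value:.4f}")
```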

They conclude with the following summary: “The findings in the present study suggest that the cross-platform retinal layer segmentation software can be used reliably to study the retinal layers and that it compares well against manual segmentation and the commonly used proprietary software for retinal segmentation of normal eyes and in particular in diseased eyes.”

All centers evaluate software before incorporating it into a trial or using it for their clinical research, and this is a nice example because they have chosen to publish it.  The bottom line is that, if you have to report any layer thickness for any device, please consider doing the same.

In neurodegenerative diseases, where the retinal architecture is not disrupted but instead thins progressively, the Spectralis software can work well.  This was evidenced in a poster showing how, in asymptomatic relatives of Alzheimer’s disease patients, subtle changes in retinal thickness can be detected and possibly used as an early biomarker of the disease.  Another interesting poster on Alzheimer’s examined changes in choroidal thickness, measured manually, and showed how this too could be an early biomarker for the diagnosis and follow-up of the disease.

Automated choroidal thickness measurement using deep learning for swept-source OCT (SS-OCT) was reported by Vupparaboina et al.  They used a U-Net-type network and showed that, relative to ground truth as assessed by the Dice coefficient, they achieved a score of 0.98 on their test set.  Using data from the same Zeiss PlexElite device, de Sisternes et al. reported how the manufacturers are doing with their own data.  The method used was not reported (presumably proprietary), and while the correlation of automated to manual grading was good, correlation does not speak to accuracy; in the single example result given, there were significant differences in average thickness in each extended ETDRS sector (up to 159 µm!).  The Dice coefficient is a fairly standard way to report such algorithms, as we did when we reported on 3D choroidal segmentation, so we encourage the manufacturers to do the same.
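
For readers unfamiliar with it, the Dice coefficient between a predicted mask A and a manual mask B is 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (perfect agreement).  A minimal sketch, with illustrative mask names:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, manual: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    manual = manual.astype(bool)
    intersection = np.logical_and(pred, manual).sum()
    total = pred.sum() + manual.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / total if total > 0 else 1.0
```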

Next Up: “Do not fret! We can support your DICOM standard today and tomorrow.”

Images and their formats are an issue in ophthalmology.  We, like others, wish to promote the use of standard formats such as DICOM.  Based on a special interest group meeting at ARVO, “Lost in translation: why we need image standards in ophthalmology and how we can get there”, the hope is that we will get there.  In the next post we comment on that meeting; the community wants such standardization, and this is most definitely a position we support.
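
As a small taste of what a standard format buys you, here is a minimal sketch of reading an ophthalmic DICOM file with the open-source pydicom library.  The file path is hypothetical, and not every vendor export populates these fields or carries pixel data this cleanly:

```python
import pydicom

# Hypothetical path to a vendor-exported ophthalmic DICOM file.
ds = pydicom.dcmread("oct_volume.dcm")

print(ds.Modality)       # e.g. "OPT" for ophthalmic tomography
print(ds.Manufacturer)   # device vendor, if populated

# B-scan stack as a numpy array, when pixel data is present in the file.
pixels = ds.pixel_array
print(pixels.shape)
```

The appeal is that the same few lines work regardless of which device produced the file, which is precisely what standardization promises.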