Trending topics

Image interpretation: The next hurdle


Once upon a time, our ability to image the eye and its structures was not very good. Fundus images were low resolution, optical coherence tomography (OCT) was limited in depth and dimensions, and fluorescein angiography was still considered an effective way to image the vascular system despite its laborious and qualitative nature.

Then came the 21st century. Fundus images saw dramatic improvements in quality and clinical value. OCT advanced into the fourth dimension, revealing structures multiple millimeters below the surface of the retina. Quantitative data now replace qualitative interpretations.

With an increasingly impressive suite of imaging tools, the challenges facing the vision research and clinical communities have shifted. Instead of having too little imaging information to make confident clinical decisions, we now have too much to interpret in a timely fashion.

Today’s technology-driven problem of too much data is being answered by a technology-driven field of research — artificial intelligence (AI). In the areas of diabetic retinopathy (DR), glaucoma and age-related macular degeneration (AMD), ARVO members are developing a new suite of image analysis tools that quickly separate the images needing further human attention from those that are a lower priority.

Bringing fundus imaging to the front lines
Improved imaging tools only benefit those who have access to them. Ideally, those staffing the “front lines” of medicine who see the highest volumes of patients, such as primary care staff, should be equipped to take and interpret fundus images. But current imaging devices still require expert interpretation of the images they produce, a resource not found in a primary care setting.

“If we could incorporate AI diagnostics into retinal imaging platforms, that would enable the imaging and interpretation of images at the front lines of care,” says Michael D. Abràmoff, MD, PhD, the Robert C. Watzke Professor of Ophthalmology and Visual Sciences at the University of Iowa. Abràmoff is also the founder of IDx, a company developing AI-driven software to rapidly analyze retinal images and identify patients showing signs of disease.

By using “foundational data,” which is the data used in clinical trials that informed current preferred practice patterns, IDx’s AI software can analyze fundus images to determine if a patient shows signs of diabetic retinopathy. The company’s first product, IDx-DR, is available in Europe and currently under expedited review by the U.S. Food and Drug Administration (FDA).

“We believe that our software can bring transformative improvements to healthcare by increasing its quality, affordability and accessibility,” says Abràmoff.

Simplifying glaucoma diagnosis
Glaucoma is a complex disease. And so is its diagnosis. Advances in OCT have enabled clinicians to measure a patient’s retinal nerve fiber layer (RNFL), with thinning of the layer acting as a primary indicator of glaucoma. Yet glaucoma also induces complex changes in the connective tissue surrounding the optic nerve head (ONH). Observing those changes currently requires “manual segmentation” of the individual tissue layers by expert (human) image readers, which is not feasible clinically. “To date, there exist no clinical parameters or diagnostic tests that reflect the changes in both neural and connective tissues simultaneously,” says Michaël J.A. Girard, PhD, assistant professor of biomedical engineering at the National University of Singapore and co-head of the Bioengineering & Devices Research Group at the Singapore Eye Research Institute.

In a recent IOVS paper, Girard, Devalla, Thiery et al. developed an AI algorithm that simultaneously highlighted six neural and connective tissues in OCT images of the ONH. The automated analysis requires only a few seconds, comparing favorably to the several minutes it takes to manually segment one image.
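To make the task concrete, the sketch below shows in broad strokes what pixel-wise “digital staining” of an OCT B-scan looks like in code. It is an illustrative toy, not the network from the IOVS paper: the PyTorch framework, the tiny encoder-decoder architecture, the class count of seven (six tissues plus background) and the input size are all assumptions made for the example.

```python
# Illustrative only: a minimal fully convolutional network for pixel-wise
# "digital staining" of an OCT B-scan into tissue classes. This is NOT the
# architecture from the IOVS paper; it merely sketches the kind of task
# described (assigning each pixel to one of several tissue classes).
import torch
import torch.nn as nn

NUM_CLASSES = 7  # assumption: 6 neural/connective tissues + background


class TinySegmenter(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),  # per-pixel class scores
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    model = TinySegmenter()
    bscan = torch.randn(1, 1, 256, 256)   # one grayscale OCT B-scan (synthetic)
    logits = model(bscan)                 # shape: (1, NUM_CLASSES, 256, 256)
    labels = logits.argmax(dim=1)         # per-pixel tissue label map
    print(labels.shape)
```

A trained model of this kind labels every pixel in a single forward pass, which is why automated analysis can take seconds where manual tracing takes minutes.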

Providing an accurate, automatic segmentation of both ONH neural and connective tissues has the potential to simplify and improve the diagnosis of glaucoma. But Girard readily points out that his lab’s algorithm is not yet ready for the clinic. “We have trained and tested our algorithm with images from only one device (Spectralis), thus its performance when tested with images from other devices remains unknown. While we have reached an accuracy of 94% (with respect to manual segmentation), in some instances we also observed false predictions in certain tissues. This could be fixed with a more advanced AI network that is currently under development in our labs. Nevertheless, we offer a proof of principle of ONH digital staining that could offer a framework to automatically extract key structural information about the ONH. In the near future, we believe that this information could be used to improve the diagnosis and management of glaucoma,” says Girard.

Predicting AMD progression
When it comes to AMD, AI may be able to do something humans cannot — predict the future.

All patients with dry AMD have drusen deposits under their retina. Patients with dry AMD who progress to severe AMD always experience drusen regression before the severe form develops. But not all drusen regression leads to severe AMD.

“Clinicians can make predictions, but they are subjective,” says Hrvoje Bogunovic, PhD, research scholar in the Department of Ophthalmology at the Medical University of Vienna. “Algorithms can sift through large quantities of data quickly, building internal knowledge from a number of OCT images that exceeds what any single clinician has ever seen.”
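As a rough illustration of the kind of risk modeling Bogunovic describes, the toy sketch below fits a logistic regression to entirely synthetic drusen features. The feature names, coefficients and the two-year horizon are invented for the example; real progression models are trained on large sets of longitudinal, OCT-derived measurements.

```python
# Illustrative only: a toy risk model for progression to severe AMD, trained on
# hypothetical, synthetic features (drusen volume, a drusen-regression flag and
# age). Not the actual approach used by Bogunovic and colleagues.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
drusen_volume = rng.gamma(shape=2.0, scale=0.1, size=n)  # mm^3, synthetic
regression_flag = rng.integers(0, 2, size=n)             # drusen regression observed?
age = rng.normal(75, 6, size=n)

# Synthetic ground truth: higher volume, observed regression and older age
# raise the (made-up) odds of progression within two years.
logit = -6 + 8 * drusen_volume + 1.5 * regression_flag + 0.05 * age
progressed = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([drusen_volume, regression_flag, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, progressed, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]  # per-patient probability of progression
print("example two-year progression risks:", np.round(risk[:5], 2))
```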

AI-facilitated prediction of AMD progression — as with DR and glaucoma — would add significant value for patients, clinicians and researchers. Patients could receive guidance on when they may develop severe AMD, reducing the psychological stress of not knowing the future condition of their vision. Clinicians would be better able to optimize scheduling by reducing how often low-risk patients visit the clinic. And researchers developing drugs aimed at slowing progression to severe AMD could enroll much smaller clinical trials consisting only of patients at high risk of progression, lowering costs.

A matter of trust
The idea of taking medical decision-making out of the hands of a trained practitioner and placing it in the control of computer software is sometimes uncomfortable. In response to his work on AI, Abràmoff was given the nickname “the Retinator,” invoking the threat posed by the hostile AI portrayed in “The Terminator” movies. “While amusing, it shows a deep unease about whether clinicians and their patients can trust these AI-based systems,” says Abràmoff.

Part of that unease stems from the “black box” problem, where algorithms perform well, but the rationale behind their performance is inexplicable. “If used judiciously, AI can be used to create solutions that are explainable,” says Abràmoff.

Bogunovic agrees that it would help human observers understand the algorithm’s decision-making and build trust “if an algorithm can refer to its retrospective database and provide evidence in the form of showing similar patients and their outcomes on which its decision is based.”

“As with any technical advancement, [the field of AI] needs clinicians who are early adopters and who will generate success stories,” explains Bogunovic.

With a growing patient population, eye care providers need all the tools they can get to efficiently deliver care to those who need it. Having developed the tools to generate clinically useful images, vision researchers are now creating new, automated tools to quickly analyze the wealth of imaging data available. While automated image analysis may sound fanciful now, at some point in the not-too-distant future a world without AI-assisted diagnosis tools may seem like a fairy tale. MW