Alfredo Dubra: Good morning. Today I will briefly describe technological advances from our collaboration between Stanford University, the University of Pennsylvania, University College London, the Medical College of Wisconsin, the New York Eye and Ear Infirmary, and the University of Wisconsin-Madison. Please see here my commercial disclosures, none of which created a conflict of interest for this project. The work described next has been supported by the National Eye Institute through the Audacious Goals Initiative and the R01 grants listed here, as well as by Research to Prevent Blindness. Conventional ophthalmoscopes provide views of large portions of the living retina by imaging through a small portion of the pupil of the eye. The lack of detail in the magnified retinal images shown here illustrates the poor transverse resolution of this approach, which is on the order of 10 to 20 microns. Adaptive optics ophthalmoscopy, on the other hand, captures images of much smaller retinal areas, but using a larger portion of the pupil. In this way, after wavefront correction, this approach provides almost a full order of magnitude higher resolution, as illustrated by the detail in the images on the right side of the slide. By detecting minute retinal changes, adaptive optics ophthalmoscopy can contribute to improving eye care by facilitating the development and evaluation of novel therapies, by improving disease management through personalized treatment, and by mitigating vision loss through earlier diagnosis of eye disease. There are technical barriers to the translation of adaptive optics ophthalmoscopy into a clinical tool, including involuntary eye movement, optical system performance, limited transverse resolution, structural image contrast, and functional image contrast, which Dr. Morgan is going to describe in the following presentation.
So let me show you for a moment a diagram of an adaptive optics ophthalmoscope, which in many regards is similar to a confocal microscope. Let us now focus on the component highlighted on the bottom right, namely the pupil tracker, which does not interfere with the optical setup. The first step towards the effective correction of involuntary eye motion is the precise measurement of eye movement with minimal latency. In the current pupil tracking paradigm, the capture of pupil images, their transfer to a computer, and the processing to find the pupil location are all performed in sequence, as the top diagram shows. In our paradigm, a specialized computing technology called a field-programmable gate array, or FPGA, processes the pupil images as they are being downloaded from the camera, resulting in lower latency. Our pupil tracker, described in a manuscript in preparation, has been implemented with an optical setup for normal involuntary eye movement that achieves high precision, on the order of a micron. An alternative optical setup was also tested for subjects with nystagmus, which allows higher frame rates, on the order of 900 to 3,000 frames per second. These required novel line-by-line, or one-dimensional, image processing algorithms, rather than the traditional two-dimensional algorithms, plus a robust method for detecting outliers, shown here as red dots on the left images. With this new processing paradigm and algorithms, we were able to deliver latencies between 0.7 and 2.5 milliseconds, dominated by the time required to download the images from the camera to the computing device. When viewing the retina at very high magnification, it is challenging to accurately record the exact location of the retinal images. This represents a practical barrier to the longitudinal imaging that is essential for monitoring the success of regenerative therapies.
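[Editor's note: the line-by-line pupil localization with outlier rejection described above can be sketched as follows. This is an illustrative reconstruction under simple assumptions (dark pupil on a bright background, per-row edge detection, a median-absolute-deviation outlier test), not the authors' FPGA implementation; all names and thresholds are made up for the example.]

```python
import numpy as np

def row_edges(row, threshold):
    """Return (left, right) indices of the dark pupil in one image row, or None."""
    dark = np.where(row < threshold)[0]
    if dark.size < 2:
        return None
    return dark[0], dark[-1]

def pupil_center(image, threshold=50.0):
    """Estimate the pupil center from per-row chord midpoints, rejecting
    outlier rows (e.g., eyelashes) with a median-absolute-deviation test."""
    mids, rows = [], []
    for y, row in enumerate(image):           # rows can be processed as they stream in
        edges = row_edges(row, threshold)
        if edges is not None:
            mids.append(0.5 * (edges[0] + edges[1]))  # horizontal chord midpoint
            rows.append(y)
    mids = np.asarray(mids)
    rows = np.asarray(rows)
    med = np.median(mids)
    mad = np.median(np.abs(mids - med)) + 1e-9
    keep = np.abs(mids - med) < 3.0 * 1.4826 * mad    # robust outlier rejection
    return float(np.mean(mids[keep])), float(np.mean(rows[keep]))

# Synthetic frame: a dark circular "pupil" centered at (x=80, y=60)
yy, xx = np.mgrid[0:128, 0:160]
frame = np.full((128, 160), 200.0)
frame[(yy - 60) ** 2 + (xx - 80) ** 2 < 30 ** 2] = 10.0
cx, cy = pupil_center(frame)
```

Because each row is processed independently, this kind of algorithm maps naturally onto hardware that consumes pixels as they arrive from the camera, which is the source of the latency advantage described above.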
To address this, we developed a fundus camera with a long working distance to facilitate integration with any AO ophthalmoscope. This camera uses 914-nanometer light to avoid interference with AO psychophysical experiments and functional imaging. It allows light from the AO ophthalmoscope to leak into the fundus image, creating an accurate record of the retinal location being imaged, as the small bright rectangles on the left panels show. After nearly two decades of empirically designing reflective ophthalmoscopes, we used nodal aberration theory to rigorously describe the performance of the afocal telescopes that are their fundamental building blocks. The insight from this theory allowed us to develop better ophthalmoscopes, such as the one we used to capture the images of mouse photoreceptors shown here. It also allowed us to better understand the limitations of this optical relay, which led us to propose and demonstrate a four-element pupil relay with almost double the spectacle prescription range. This relay, which can be used in both scanning and non-scanning ophthalmoscopes, greatly increases the fraction of the population that can be successfully imaged. It also allows for the use of larger fields of view and the beam steering needed for eye movement compensation. The foveal cone photoreceptor images shown here on the left side illustrate the near diffraction-limited performance of this instrument as the field of view is steered across a four-degree window. Let us now look at the retina scanner for a second, where we have identified a source of image blur that is not measured by current wavefront sensing techniques. This blur is caused by the distortion of the surface of this mirror. It can degrade resolution by as much as a factor of two in any imaging modality, and in confocal imaging channels in particular, it can lead to a reduction of signal as high as 80 to 90%.
We propose mitigating this distortion by using custom mirrors made of materials only recently introduced to the optics industry. By reaching out to the broad AOSLO community, we were able to collectively order these mirrors, which we could otherwise not afford separately. Even with these new mirrors, residual distortion will remain, and to correct for it, we will modify the optical setup to introduce equal and opposite static, but field-varying, aberrations. These aberrations will cancel the dynamic aberrations introduced by the mirror deformation. Just as each eye has unique monochromatic wavefront aberrations that introduce image blur when using a single wavelength of light, each eye has unique longitudinal chromatic aberration and cylindro-chromatism. In order to correct for these ocular chromatic aberrations, we developed a novel type of tunable lens, which we refer to as the chromatic Alvarez lens. This lens consists of two optical elements that, when transversely moved as depicted in the left-column diagrams, introduce wavelength-varying wavefronts, shown here in the three adjacent columns. By correcting these chromatic aberrations, we will be able to use multiple wavelengths of light simultaneously to probe the health of individual retinal cells. As we grow older, the maximum size of our pupils, even when using dilating drops, decreases. Therefore, for adaptive optics ophthalmoscopes that use pupils larger than six millimeters, image resolution degrades with age. To overcome this, we performed a systematic study showing that confocal imaging using an effective detector size smaller than the Airy disk provides up to a 20% increase in resolution, as shown by the images on the left side of the slide. This simple modification was rarely used in AOSLO due to the incorrect belief that high, and thus less safe, light levels would be required. Since our study was published, we and others have been using this detection scheme either regularly or exclusively.
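[Editor's note: the classical, monochromatic Alvarez-lens principle that the chromatic Alvarez lens described above builds on can be verified numerically. Two complementary cubic surfaces, shifted laterally in opposite directions, combine into a purely quadratic (defocus) phase whose power scales linearly with the shift. The coefficient and grid below are illustrative, not values from the talk.]

```python
import numpy as np

A = 1.0e-3                                       # cubic surface coefficient (arbitrary units)
t = lambda x, y: A * (x**3 / 3.0 + x * y**2)     # sag of one Alvarez element

y, x = np.mgrid[-5:5:201j, -5:5:201j]            # pupil coordinates (arbitrary units)
for d in (0.5, 1.0, 2.0):                        # lateral shift applied to each element
    # Element 1 shifted by +d, the complementary (sign-inverted) element by -d:
    combined = t(x + d, y) - t(x - d, y)
    # Expanding the cubics gives 2*A*d*(x^2 + y^2) + (2/3)*A*d^3:
    # pure defocus plus a constant piston term, with power proportional to d.
    predicted = 2.0 * A * d * (x**2 + y**2) + (2.0 / 3.0) * A * d**3
    assert np.allclose(combined, predicted)
```

The chromatic variant described in the talk adds wavelength dependence to this mechanism, so the induced defocus can differ per wavelength and cancel the eye's longitudinal chromatic aberration.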
As mentioned earlier, one of the remaining technical challenges is to increase the contrast of cellular images in AO ophthalmoscopy. To this end, we further investigated split-detection AOSLO imaging, a technique we developed a few years ago. We did so by first comparing confocal and non-confocal split detection, which revealed two different image contrast mechanisms. We identified the non-confocal annulus that provides the highest cone photoreceptor contrast and also proposed a combination of split-detection images that creates direction-independent contrast. We hope that this novel image type will facilitate the automated interpretation and quantification of retinal structures. A critical component of successful AO ophthalmoscopy is wavefront sensing. This technology, although easy to implement and operate, is not yet mature to the point that non-experts can obtain high-quality images consistently. Moreover, it is not even clear that expert operators can deliver the full potential of this technology consistently. During the past six years, we have made the following contributions to improve ophthalmic wavefront sensing, specifically for AO retinal imaging. First, we identified the source of error in partially illuminated Shack-Hartmann lenslets and corrected it, improving wavefront correction and making AO control loops more stable. This will also benefit other applications, such as ground-based astronomical telescopes. Second, we quantified the wavefront error introduced by the Stiles-Crawford effect and illustrated its mitigation through the use of smaller lenslets. Then we proposed a centroiding approach that mitigates wavefront errors due to backscattering from multiple retinal layers and/or pathology. We then derived analytical formulae for more accurate wavefront reconstruction, and finally we formalized the definition of Shack-Hartmann dynamic range, which can be used to design instruments with almost an order of magnitude higher sensitivity.
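[Editor's note: for readers unfamiliar with Shack-Hartmann wavefront sensing, the basic centroiding step underlying the contributions above can be sketched as follows. The local wavefront slope behind each lenslet is the spot's center-of-mass displacement from its reference position, divided by the lenslet focal length. This is a textbook sketch with illustrative values, not the authors' centroiding algorithm.]

```python
import numpy as np

def centroid(sub):
    """Center of mass of one lenslet subimage, in pixels (row, col)."""
    total = sub.sum()
    yy, xx = np.mgrid[0:sub.shape[0], 0:sub.shape[1]]
    return (yy * sub).sum() / total, (xx * sub).sum() / total

def slopes(subimage, ref_y, ref_x, focal_mm, pix_mm):
    """Local wavefront slopes (radians) from one subaperture:
    spot displacement from the reference position, scaled by pixel size
    and divided by the lenslet focal length."""
    cy, cx = centroid(subimage)
    return (cy - ref_y) * pix_mm / focal_mm, (cx - ref_x) * pix_mm / focal_mm

# A Gaussian spot displaced by (+1, +2) pixels from the (7.5, 7.5) reference
yy, xx = np.mgrid[0:16, 0:16]
spot = np.exp(-((yy - 8.5) ** 2 + (xx - 9.5) ** 2) / 4.0)
sy, sx = slopes(spot, ref_y=7.5, ref_x=7.5, focal_mm=5.0, pix_mm=0.005)
```

Errors in this step, whether from partially illuminated lenslets, the Stiles-Crawford effect, or multilayer backscattering, propagate directly into the reconstructed wavefront, which is why the refinements listed above matter.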
The advances developed during this project are already being used for natural history studies as well as clinical trials of gene therapies and neuroprotective agents. To date, most of our dissemination efforts have been within our AGI collaborators, as shown by the green cells in this table. Over the next two years, we will devote ourselves to sharing, and COVID permitting, even deploying instrumentation and training at various institutions, as shown by the orange hatched cells in this table. So the work towards the AGI goals has not ended in our lab. In fact, it has taken over our lab completely. In addition to working on dissemination, we will focus on developing distortion correction and calibration methods to improve rigor and reproducibility across all labs, and I am proud to report that over 20 external labs and companies have already agreed to participate in this initiative. We will also refine our low-latency pupil tracking, evaluate lower-cost technologies to explore potential commercialization of this product, and demonstrate real-time eye movement correction. Now that we have optics capable of imaging a very large fraction of the population, we also need to address the next major limitation of AO ophthalmoscopy, which is increasing its field of view. This is essential for using this technology to systematically screen subjects for early diagnosis. Finally, we have created a website, inspired by the success of Webvision, to share theory, context, online calculators, and code with users and developers of AO ophthalmoscopy. We see this resource developing over the coming years as part of a community effort. Thank you for your attention. Jessica Morgan: Good afternoon. It's my pleasure to be here today to talk to you about our project on platform technologies for microscopic retinal imaging assessing retinal function. My name is Jessica Morgan and I'm at the University of Pennsylvania.
I'd like to thank the NIH for organizing this symposium and for supporting our work through the Audacious Goals Initiative. Before I begin, I'd like to acknowledge the people on my team who make this work happen, in particular my collaborators David Brainard, Robert Cooper, William Tuten, and Alfredo Dubra. Already today you've seen how adaptive optics technology enables microscopic imaging of the living retina by first measuring and then compensating for the optical aberrations of the eye. Shown here is an image of the living human parafoveal cone mosaic, where each bright spot in the image is a single cone photoreceptor. Indeed, investigators have been using adaptive optics technology to study single cells in the retina, both in health and disease, for a number of years. But I'd like to impress upon you the idea that just because we observe that a cell is present within our images, it does not mean that that cell is functional. So our role in the Audacious Goals Initiative was to develop a high-throughput, cellular-resolution method for assessing retinal function. We did this in a number of ways, but the technique I'm going to talk to you about today is called optoretinography. Optoretinography is a new and emerging field, and it is a technique for measuring an optical signal that arises from the retina in response to a visible stimulus. Its corollary is electroretinography, which has been around for a number of years and which measures an electrical signal that arises from the retina in response to a stimulus. In order to begin this endeavor, my team and others made the observation that near-infrared photoreceptor reflectance varies in response to a stimulus. Shown here is an image acquired with adaptive optics scanning laser ophthalmoscopy. Each cone has its own reflectance trace throughout this video; we image that reflectance with near-infrared light, but then we stimulate the photoreceptors across the full imaging field, in this case using a red light.
The stimulus is demarcated here by the red bar. If we then look at each photoreceptor's reflectance before, during, and following the stimulus, we can observe how the reflectance changes in response to that light. For instance, consider the cone outlined here in orange and enlarged on the left, with its own reflectance trace plotted below it. Here you see the cone starts with an initial reflectance; the visible stimulus turns on, and the cone's reflectance increases and then decreases in response to that visible stimulus. If we look at a second cone, for instance the one outlined by the magenta square, we see that this cone's reflectance also starts at an initial value; the stimulus turns on, and the cone's reflectance decreases in response to that stimulus. Indeed, if we looked at each cone throughout this image, we would find that each cone has its own unique reflectance trace and that the signal is highly heterogeneous across cones. So we asked ourselves: to what extent is this change in near-infrared reflectance, the optoretinogram, consistent with known properties of cone function? In order to answer this, we first needed to find a way to quantify our heterogeneous signal. We settled on a method that standardizes each cone's reflectance in the pre-stimulus condition and looks at how the cone's reflectance diverges from that initial state. Shown here is a video where the cones' reflectance begins in that standardized state, and when the visible stimulus turns on, demarcated by the red bar, we observe how the standardized reflectance diverges from its initial state. We can summarize this behavior by taking the standard deviation across all cones within the population to arrive at our optoretinogram response, which has a low value in the pre-stimulus case; the stimulus turns on, demarcated by that white dashed line, and we see the optoretinogram response to that stimulus. We then asked: does this signal follow known properties of the visual system?
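[Editor's note: the population-level quantification described above, standardizing each cone's reflectance by its own pre-stimulus statistics and summarizing the divergence as the standard deviation across cones at each frame, can be sketched as follows. Array shapes, the toy data, and all parameter values are illustrative assumptions, not the authors' pipeline.]

```python
import numpy as np

def population_org(reflectance, pre_frames):
    """Population optoretinogram response.

    reflectance: (n_cones, n_frames) array of per-cone reflectance traces.
    pre_frames:  number of frames before the stimulus onset.
    Returns one value per frame: the spread of standardized reflectance
    across the cone population."""
    pre = reflectance[:, :pre_frames]
    mu = pre.mean(axis=1, keepdims=True)
    sd = pre.std(axis=1, keepdims=True) + 1e-9
    z = (reflectance - mu) / sd            # each cone standardized to its pre-stimulus state
    return z.std(axis=0)                   # divergence across cones at each frame

# Toy data: 200 cones, 100 frames, stimulus at frame 50; each cone responds with
# its own random sign and amplitude, mimicking the heterogeneous signal described.
rng = np.random.default_rng(0)
traces = rng.normal(100.0, 1.0, size=(200, 100))
traces[:, 50:] += rng.normal(0.0, 5.0, size=(200, 1))   # heterogeneous per-cone change
org = population_org(traces, pre_frames=50)
```

Note how the standard deviation across cones turns a heterogeneous signal, with some cones brightening and others dimming, into a single positive response trace.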
For instance, if we increase the stimulus irradiance, does the signal also increase? Indeed, we found that it does: testing four different stimulus irradiances spanning multiple log units showed that the optoretinogram signal did indeed increase across those increasing irradiances. We then looked at varying the stimulus wavelength and found that the action spectrum of the optoretinogram response mirrors the human photopic luminosity function, or the CIE function, shown by the white dashed line. And so we are very confident now that the optoretinogram response we are measuring is indeed related to cone function. All of the measurements I have told you about so far have been over a population of cones, and so we then wanted to know whether we could measure an optoretinogram in individual cones. I have already shown you that the change in near-infrared reflectance, such as that highlighted by the four cones outlined here, is heterogeneous across cones. Well, it turns out that the change in near-infrared reflectance following a stimulus is also heterogeneous across trials. So we can now pool the signal across multiple trials of the same cone, such as the multiple trials shown here for the red cone, rather than pooling across multiple cones within the population. In this way, we can determine an individual cone's optoretinogram and assign each cone in the mosaic its own functional response based upon the cone's change in near-infrared reflectance following the visible stimulus. Shown here is an image, again of the parafoveal cone mosaic, where each cone has now been assigned its optoretinogram amplitude and color-coded accordingly. This has high potential for translational studies where we aim to understand retinal disease and its treatment; in particular, as regenerative medicine aims to restore function to cones that are structurally present, we can now measure that function.
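[Editor's note: the single-cone variant described above swaps the pooling axis: instead of measuring divergence across cones, it measures divergence across repeated trials of the same cone. The sketch below illustrates that idea only; the shapes, amplitude definition, and toy data are assumptions, not the published method.]

```python
import numpy as np

def cone_org_amplitude(trials, pre_frames):
    """Single-cone optoretinogram amplitude.

    trials: (n_trials, n_frames) reflectance of ONE cone across repeated trials.
    Each trial is standardized by its own pre-stimulus mean and standard
    deviation; the amplitude is the mean post-stimulus spread across trials."""
    pre = trials[:, :pre_frames]
    z = (trials - pre.mean(axis=1, keepdims=True)) / (pre.std(axis=1, keepdims=True) + 1e-9)
    spread = z.std(axis=0)                 # divergence across trials at each frame
    return float(spread[pre_frames:].mean())

# Toy data: 20 trials, 100 frames, stimulus at frame 50
rng = np.random.default_rng(1)
stim = 50
quiet = rng.normal(100.0, 1.0, size=(20, 100))            # non-responding cone
active = quiet.copy()
active[:, stim:] += rng.normal(0.0, 5.0, size=(20, 1))    # trial-varying response
```

Computing this amplitude for every cone in the mosaic yields the per-cone functional map described above, with responding cones scoring well above non-responding ones.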
With the aim of translating to clinical studies, we have also looked at the optoretinogram in disease states. Here I'm showing you an example in choroideremia, where we compared images from four choroideremia subjects to images from five controls, and found that the amplitude of the optoretinogram was greatly reduced in our choroideremia subjects in comparison to controls. So in summary, the cone optoretinogram is measurable across a population of cones and at the individual cellular level. Optoretinography shows high potential as an objective biomarker for assessing cone function. Thank you for your time. Please feel free to reach out to me at jwmorgan@pennmedicine.upenn.edu. Thank you.