Austin Roorda: Okay, I first want to point out that the optoretinogram, which is in the title of my presentation, is a term that we did not adopt in our original proposal, but one that we adopted after the grant was awarded and one that is now being used by many groups. Just to define what it is: the optoretinogram encompasses any all-optical, noninvasive measurement of the human retina that informs us about its function. In humans, the current state of the art for objectively measuring retinal function is the electroretinogram, or ERG. This technique, shown here on the left, is limited because it involves contact electrodes and because its resolution is poor. In fact, a single electrode measurement is an aggregate response from, at the very best, tens of thousands of cones or neurons, and more practically hundreds of thousands of neurons in one measurement. It's also limited because the association of the electrical signal with specific retinal locations or cell types is very crude.

So, in an effort to measure neural function, why can't we employ the growing arsenal of tools that neuroscientists use, as shown on the right? Electrode recordings, for example, have been used for decades and are the bread and butter of the neuroscientist, and more recently fluorescent indicators of neural activity are being used, as will be shown by David Williams. These tools are great, but they're invasive and not approved for human use, and they may never be approved for diagnostic purposes in humans. So, as good as these tools are, they can only be used in explanted tissue and animal models.

What we need is live human retinal measurements with the same specificity and resolution as is currently achieved in animals and explanted tissue. Our audacious proposal, then, was to measure structure and function on a cellular scale in human eyes using all-optical methods. Essentially, the methods we're using are based on interferometry: measurements that leverage the wave-like properties of light and the principle of superposition that you may have learned in your high school physics class. These are measurements that are capable of detecting changes in objects on the scale of nanometers. Our basic premise, then, was that neurons, when they undergo changes in their activation state, will also have nanometer-scale physical changes in their shape, and these may be caused by osmotic changes or membrane tension changes, as we'll discuss.

Fortunately, most of the building blocks for deploying this type of technology in the eye are already available. The left shows adaptive optics, and we've been using adaptive optics for almost 25 years to resolve single cells in the human eye. OCT, or optical coherence tomography, which is a form of interferometry, is already hugely successful in ophthalmology. And our team member Hyle Park from UC Riverside has spent most of his academic career leveraging the interferometric capabilities of OCT, as illustrated in this example on the right, where he used phase-resolved OCT to measure properties of the mouse brain. These, along with continued improvements in lasers, computers, and imaging technology, gave us confidence that we would succeed. However, what we did not know was what physical or optical path-length changes are associated with neural activity.
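To give a sense of why interferometric phase is sensitive on this scale, here is the standard relation between the axial displacement of a reflector and the phase of the light it returns in a double-pass (reflection) geometry; this is a back-of-the-envelope illustration, not a description of any particular instrument discussed in the talk:

```latex
\Delta\phi = \frac{4\pi\, n\, \Delta z}{\lambda}
\qquad\Longrightarrow\qquad
\Delta z = \frac{\lambda\, \Delta\phi}{4\pi n}
```

For example, at a near-infrared wavelength of 850 nm with n of roughly 1, a displacement of 1 nm corresponds to a phase change of about 0.015 radians, which is within reach of phase-resolved OCT systems operating at high signal-to-noise ratio.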
And as useful as electrode and fluorescent functional indicators have been to the neuroscientist, they offer no insight into the physical changes that we're aiming to measure. So, in effect, we were in unknown territory, proposing to build exquisitely sensitive devices to look for optical signals that might or might not exist. It was clear that if we wanted to succeed in measuring signals in living eyes, we needed to invent new ways to quantify the physical changes in neurons, and we wanted to do this in an ex vivo environment where we had the cells under control. This is where Daniel Palanker and his team at Stanford come in.

His first aim was to develop interferometric methods to look at explanted and cultured neurons in a microscope and to characterize how they move when they fire action potentials. This illustration shows the rig that he developed. It's a full-field imaging system, and the lower part shows what we call a common-path interferometer, which uses interferometry to measure small changes in the image. Importantly, as shown by the inset, the sample in the microscope is placed on a multi-electrode array, a conventional electrical recording system that allows more traditional recordings from the same cells that are being measured optically. By synchronizing in time the electrical events measured by the multi-electrode array with the optical events measured by the interferometer, a technique called spike-triggered averaging, he can obtain high signal-to-noise templates of the physical changes that neurons undergo during action potentials.

When he applied this to a set of cultured neurons, which are more comparable to retinal ganglion cells, he could record the movie of the neurons shown on the left. Here we see a spike-triggered movie from a collection of connected neurons. When he quantified the magnitude of the change, he found that the events were extremely small and brief, as shown in the plot on the right: the time course is one millisecond or less, and the magnitude is one nanometer or less. But despite these small and brief events, he demonstrated in this paper that if you have a good template, like what's shown here, then you can still extract individual action potentials by correlating the template with the noisy optical signal, a technique called template matching (sketched below).

Another clear advantage of his image-based approach became apparent, because far more information was revealed than just the action potential. Through imaging, he was able to reveal activity within and between cells on an unprecedented scale. He could use this imaging not only to measure the physical changes, but to explain why they occur. In the same cells that he measured optically, he also determined the 3D structure with confocal microscopy. Then he applied a biomechanical model to explain how voltage differentials across the cell membrane might affect the shape of the cells. What Daniel learned was that depolarization changes the lateral repulsion between ions along the cell membrane, which in turn causes the cell to become more spherical. When that happens, some parts of the cell get thinner, as indicated by the blue colors, and other parts of the cell get thicker, as indicated by the red colors.
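The two analysis steps mentioned above, spike-triggered averaging against electrically recorded spike times and template matching against the noisy optical trace, can be illustrated with a small sketch. Everything below is hypothetical and synthetic (the function names, sampling rate, noise level, and event shape are assumptions); it is meant only to show the logic of the approach, not the actual analysis code used in this work.

```python
import numpy as np

def spike_triggered_average(optical_trace, spike_indices, window):
    """Average fixed-length snippets of the optical trace centered on each
    electrically detected spike time (spike-triggered averaging)."""
    half = window // 2
    snippets = [optical_trace[i - half:i + half]
                for i in spike_indices
                if half <= i <= len(optical_trace) - half]
    return np.mean(snippets, axis=0)

def template_match(optical_trace, template, n_sigma=4.0):
    """Correlate a zero-mean template with the trace and flag samples where
    the correlation exceeds n_sigma standard deviations (template matching)."""
    corr = np.correlate(optical_trace, template - template.mean(), mode="same")
    return np.where(corr > corr.mean() + n_sigma * corr.std())[0]

# Synthetic demo: a ~1 ms, ~1 nm deflection buried in noise (made-up numbers).
rng = np.random.default_rng(0)
fs = 20_000                                  # sampling rate (Hz), illustrative
t = np.arange(0, 0.001, 1 / fs)              # 1 ms event window
event = 1.0 * np.exp(-((t - 0.0005) / 0.0002) ** 2)   # ~1 nm peak deflection
trace = rng.normal(0.0, 0.5, 200_000)        # noisy optical trace (nm)
spike_times = np.sort(rng.choice(np.arange(100, 199_000), 200, replace=False))
for s in spike_times:                        # embed the event at each spike
    trace[s:s + len(event)] += event

template = spike_triggered_average(trace, spike_times + len(event) // 2,
                                   window=2 * len(event))
detected = template_match(trace, template)   # indices of candidate spikes
```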
Daniel's model, which is shown in the lower right, matched the measured events on the left side very well, both in timing and in magnitude, and I'll discuss another specific application of this biomechanical modeling later with regard to photoreceptors.

Now, over the course of this grant we were fortunate to learn of some excellent work taking place in Don Miller's lab at Indiana, where he used his adaptive optics OCT system to record volumes at high speed; around the same time, Gereon Hüttmann at the University of Lübeck developed a full-field OCT system. In both cases, the labs succeeded in detecting physical changes in cones in response to light stimulation, and this gave us confidence that our two unique approaches would also work. Ramkumar Sabesan built a line-scanning OCT system, and he chose this approach because it struck an optimal balance between the high resolution of Don Miller's system and the parallelism and speed advantages of Hüttmann's full-field system. At Berkeley, we aimed to build the highest-resolution and fastest ORG system we could envision, which involved using an AOSLO platform we already had in the lab to image and track the retina, and using that eye motion to steer a phase-resolved OCT probe. Put simply, our aim was to build a noninvasive, optical version of a single electrode.

In Ram's system, he starts with a basic spectral-domain OCT platform. He then adds optics to project a line on the retina instead of a point, a scanner to scan the line across the retina, adaptive optics to sharpen the image and increase the signal, and a line-scan camera to record a line-scan image for additional imaging information. Finally, and importantly, he adds a system for retinal stimulation to elicit the changes that he wants to measure in the optoretinogram. These volume images from Ram's lab show that he can achieve cellular resolution in three dimensions with this system.

Now I'll describe how he applies the system for ORG measures. At this time he's focused on photoreceptor signals, and in the images that he acquires with the system, he is focused on measuring changes in the optical path length of the cone outer segments in response to light stimulation. In this first example, he delivers a three-bar illumination pattern to the retina, as indicated in the left image. As you can see from the results on the right, only the outer segments of the stimulated cones lengthen in response to that stimulation. It's important to note that these results were accomplished without using adaptive optics, and this matters for clinical translation, which I'll discuss later.

When he uses adaptive optics, he's able to measure ORG responses of individual cones, and as you can see from this plot, where the cone responses are color-coded by the magnitude of the response, not all cones respond the same. This is expected, because cones come in three types, sensitive to long, middle, and short wavelengths of the visible spectrum, the so-called L, M, and S cones. When he stimulates the cones with red light, as shown on the upper left, the L cones respond most robustly, followed by the M cones plotted in green, followed by the S cones plotted in blue. These individual cone signals can be used effectively to classify and map the cone mosaic, as shown here in the figure and sketched below. Note on the plot on the left that the time course of the changes he's looking at is on the scale of seconds.
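As a rough illustration of the classification idea, and only that, here is a toy sketch that clusters cones into three groups from their response amplitudes to two stimulus wavelengths. The data, the library choice, and the labeling rule are all assumptions made for the sketch; the published classification is based on the real per-cone ORG responses and a more careful statistical procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy classification of cones from ORG response amplitudes (synthetic data).
# Each row is one cone; the columns are the peak outer-segment OPL change (nm)
# in response to a "red" and a "green" stimulus. All numbers are made up.
rng = np.random.default_rng(0)
l_cones = rng.normal([400, 250], 40, size=(150, 2))  # respond more to red
m_cones = rng.normal([250, 400], 40, size=(120, 2))  # respond more to green
s_cones = rng.normal([30, 40], 15, size=(30, 2))     # respond weakly to both
responses = np.vstack([l_cones, m_cones, s_cones])

# Cluster into three groups, then name them: the cluster with the smallest
# overall response -> S; of the other two, the more red-preferring -> L.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(responses)
totals = [responses[labels == k].sum(axis=1).mean() for k in range(3)]
s_k = int(np.argmin(totals))
rest = [k for k in range(3) if k != s_k]
red_pref = [responses[labels == k, 0].mean() - responses[labels == k, 1].mean()
            for k in rest]
l_k = rest[int(np.argmax(red_pref))]
m_k = rest[1 - int(np.argmax(red_pref))]
names = {l_k: "L", m_k: "M", s_k: "S"}
cone_types = [names[k] for k in labels]
print({t: cone_types.count(t) for t in "LMS"})  # roughly 150 / 120 / 30
```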
We believe that these slower, seconds-scale components of the ORG reflect osmotic changes in the cell in response to light stimulation. One of the main reasons Ram built the line-scanning system was so that he could record at high frame rates, and in this example he is imaging at 120 volumes per second. In this case, not only did he detect the path-length increases with light stimulation shown on the plot, but he also saw an early response consisting of a very robust and brief shrinkage of the cones, as shown in the inset. This change matches very well with what is known as the early receptor potential, a brief, low-latency electrical event reported as early as 1964. And like the increase in outer segment length, this early change was found to be proportional to the level of excitation of the cones, so that when Ram stimulated with 528-nanometer light, the L and M cones responded robustly, as indicated by the orange traces, but the S cones, which are not sensitive to 528-nanometer light, did not.

To explain these physical changes in cones, we applied Daniel Palanker's model again; he worked with Ram to explain the short- and long-term physical changes that the cones undergo in response to light stimulation. Their model predicted that the disc membranes, when they're hyperpolarized, flatten and extend. It's a tiny amount, about one-hundredth of a nanometer per disc, but when you add it up over the thousand or so discs that comprise a cone outer segment, on the order of ten nanometers in total, it becomes measurable with the ORG. As you can see from the plot on the right, which also includes the longer-time-course increase caused by osmotic swelling, the model and the data match very well. So not only did we succeed in measuring nanometer-scale changes, but we were able to explain why they occur, and this provides important information that will advise us on what to look for in all retinal neurons.

The final system is the active-tracking, point-scanning system that we're building at Berkeley. This shows a picture of the system in the lab. Hyle Park's team from Riverside built the engine for the OCT. In this system, we use an AOSLO to image and track the retina, and the eye-motion signals from the AOSLO are sent to the OCT system, which actively steers its scanning mirror to keep the beam on target. Both the AOSLO and the OCT use the same adaptive optics, so they both benefit from cellular-level resolution.

These two videos illustrate how the system works. The video on the left shows a raw AOSLO video. In real time, we measure and correct for the eye motion, and the software-stabilized video confirms this. The same eye-motion information that allows us to render that stabilized video is sent to the scanning mirrors of the OCT to keep it on target. On the right, we see the stabilized movie, but we also see a yellow line indicating the location of the OCT B-scan, and as you can see, the OCT B-scan records from the same location over the time course of the video. The real-time tracking makes it possible for us to record from the same location over and over across multiple sequences; the five averaged B-scans shown on the left, from five separate videos, all show the same structure. An analysis of the changes in outer segment length in response to red-light stimulation, just as Ram had done, can reveal the cone types, as shown by the optical path-length changes in the plot on the right.
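For readers curious how an outer-segment trace like this is extracted from phase-resolved OCT data, here is a minimal sketch. It assumes a complex-valued A-scan time series at a single cone and known depth indices for the two reflections bounding the outer segment; the wavelength, indices, and data are illustrative, and the real pipelines in these systems include additional steps such as registration, reference subtraction, and averaging across cones.

```python
import numpy as np

# Minimal sketch: convert phase-resolved OCT data into an outer-segment
# optical path length (OPL) trace for one cone. Assumes we already have a
# complex A-scan time series at the cone's location and know the depth
# indices of the two bright reflections bounding the outer segment
# (inner/outer segment junction and cone outer segment tip).
LAMBDA_0 = 840e-9          # center wavelength in meters (illustrative)

def outer_segment_opl_change(ascan_series, isos_idx, cost_idx, lambda_0=LAMBDA_0):
    """ascan_series: complex array of shape (n_frames, n_depths).
    Returns the change in outer-segment OPL (meters) relative to frame 0."""
    phase_isos = np.unwrap(np.angle(ascan_series[:, isos_idx]))
    phase_cost = np.unwrap(np.angle(ascan_series[:, cost_idx]))
    # The phase difference between the two ends of the outer segment cancels
    # motion common to both layers; lambda/(4*pi) accounts for double pass.
    dphi = (phase_cost - phase_isos) - (phase_cost[0] - phase_isos[0])
    return lambda_0 * dphi / (4 * np.pi)

# Synthetic demo: a 10 nm outer-segment elongation over 200 frames.
n_frames, n_depths, isos_idx, cost_idx = 200, 64, 20, 45
true_opl = np.linspace(0, 10e-9, n_frames)              # meters
phase = 4 * np.pi * true_opl / LAMBDA_0                 # expected phase change
ascans = np.ones((n_frames, n_depths), dtype=complex)
ascans[:, cost_idx] = np.exp(1j * phase)                # moving reflection
opl = outer_segment_opl_change(ascans, isos_idx, cost_idx)
print(opl[-1] * 1e9, "nm")                              # ~10 nm recovered
```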
The system is designed for maximum resolution, and it is designed so that the AOSLO can stay focused on the photoreceptors to enable continuous tracking while the OCT beam can be independently focused on any other layer. Here we show OCT images of structure in the inner retina from the system. These two panels are en face sections at different depths in the inner retina. The upper section shows nerve fiber bundles, and we are beginning to see the mosaic of retinal ganglion cells, as shown in the lower image. The reason we're excited about seeing the ganglion cells is that it sets us up for our ultimate aim, which will be to record neural activity in individual cells, effectively building an all-optical replacement for the electrode.

This is a cartoon showing the basic steps. First, we lock our phase-resolved OCT probe at a fixed location, focused on the layer of interest. Then we measure over time, monitoring the intensity and phase signals from that location. While recording, we can use the AOSLO beam to deliver flashes of light to targeted retinal locations and record the reflectance and phase changes in response to that stimulation. The axial resolution will allow us to resolve these changes through depth, and the temporal resolution will allow us to measure the cascade of reactions from the photoreceptors up to the nerve fibers.

So, our aim was to develop technology specifically for the human eye, which we've shown and which we continue to work on. Now I want to discuss the prospects of translating this technology to the clinic. Over my career, I've learned that systems with adaptive optics, scanning, eye tracking, and so on are large, expensive, and cumbersome, and translating systems like that into routine clinical use is a real challenge. But one of the most encouraging outcomes of this AGI project was when Ram Sabesan demonstrated that he could measure ORGs with a non-adaptive-optics line-scanning system. That technology, by comparison to most of the technology we developed, is relatively simple, compact, cheap, and robust. Importantly, he and Daniel Palanker have filed a patent for the technique, and you'll be pleased to see that crucial support from the NIH is acknowledged in that patent application. It's also important to point out that patents are an important first step toward commercialization and dissemination of the technology.

So here's an example of Ram's early investigations of the use of the ORG for eye disease. This is a patient with retinitis pigmentosa: the large image is a fundus camera image, and the smaller image is a microscopic overlay taken with his adaptive optics scanning laser ophthalmoscope. Zooming in on the left, we see a typical profile for retinitis pigmentosa, which comprises a central zone with intact photoreceptors surrounded by a transition zone with diseased photoreceptors, outside of which there is complete photoreceptor loss. Ram measured the ORG in the central and transition-zone regions using his non-AO line-scanning system; the ORGs he measured are shown in the lower-right plot. Each line on that plot is an average of four traces, where the optical path length is measured in the outer segments of the photoreceptors. The ORG in the central region of the RP patient looks very similar to normal, the blue line matching the green line quite well, whereas the ORG in the transition zone shows virtually no response.
This result represents the first ORG measurements ever made in a clinical population without using adaptive optics, and we're really encouraged to move forward on this. In fact, our team is quite convinced that ORGs will become the gold standard for objective measures of retinal function in the not-so-distant future. Finally, I'd like to thank all the talented postdocs, students, and staff at Berkeley, Stanford, UC Riverside, and Washington; without you, we would not have gotten anywhere. I'd also very much like to thank the NEI for taking the risk on this audacious project. Thank you.