How eye imaging technology could help robots and cars see better

Researchers are applying lessons learned from decades of perfecting eye-imaging technology to the sensors of tomorrow's autonomous systems
March 29, 2022
Duke researchers have shown that a new approach to LiDAR can be sensitive enough to capture millimeter-scale features such as those on a human face. Image credit: Ruobing Qian, Duke University

Even though robots don’t have eyes with retinas, the key to helping them see and interact with the world more naturally and safely may rest in optical coherence tomography (OCT) machines commonly found in the offices of ophthalmologists.

One of the imaging technologies that many robotics companies are integrating into their sensor packages is Light Detection and Ranging, or LiDAR for short. Currently commanding great attention and investment from self-driving car developers, the approach essentially works like radar, but instead of sending out broad radio waves and looking for reflections, it uses short pulses of light from lasers.
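The pulse-based ranging idea described above can be sketched in a few lines. This is an illustrative example, not code from the article: a pulse travels to the target and back, so the range is the round-trip delay times the speed of light, divided by two.

```python
# Illustrative sketch of time-of-flight ranging (not from the article).
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_time_s: float) -> float:
    """Range implied by a measured round-trip pulse delay.

    The pulse covers the distance twice (out and back),
    hence the division by two.
    """
    return C * round_trip_time_s / 2.0

# A 1-microsecond round trip corresponds to roughly 150 m of range.
print(tof_range_m(1e-6))
```

The weakness the article points out follows directly from this scheme: the detector must pick out a faint echo of its own pulse, so any other light source emitting at the same wavelength can drown it out.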

Traditional time-of-flight LiDAR, however, has drawbacks that limit its use in many 3D vision applications. Because it requires detection of very weak reflected light signals, other LiDAR systems or even ambient sunlight can easily overwhelm the detector. It also has limited depth resolution and can take a dangerously long time to densely scan a large area such as a highway or factory floor. To tackle these challenges, researchers are turning to a form of LiDAR called frequency-modulated continuous wave (FMCW) LiDAR.
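For contrast, a minimal sketch of the FMCW idea (again illustrative, not the Duke system): instead of timing a pulse, the laser's frequency is swept linearly over a bandwidth B during a sweep time T. Mixing the returning light with the outgoing sweep produces a beat frequency proportional to range, f_beat = 2·B·d / (c·T), which is far easier to isolate from interfering light than a weak pulse echo.

```python
# Illustrative sketch of FMCW ranging (assumed textbook relation,
# not the specific Duke implementation).
C = 299_792_458.0  # speed of light in vacuum, m/s

def fmcw_range_m(f_beat_hz: float, bandwidth_hz: float, sweep_time_s: float) -> float:
    """Range recovered from the measured beat frequency.

    Inverts f_beat = 2 * B * d / (c * T) for the range d.
    """
    return f_beat_hz * C * sweep_time_s / (2.0 * bandwidth_hz)

def fmcw_depth_resolution_m(bandwidth_hz: float) -> float:
    """Smallest resolvable depth difference, c / (2 * B):
    wider sweeps give finer depth resolution."""
    return C / (2.0 * bandwidth_hz)
```

Because range is encoded in a frequency rather than a faint echo's arrival time, the receiver can filter out sunlight and other LiDARs, and the depth resolution is set by the sweep bandwidth rather than by pulse timing electronics.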

In a paper appearing March 29 in the journal Nature Communications, a Duke team demonstrates how a few tricks learned from their OCT research can improve the data throughput of previous FMCW LiDAR systems by 25 times while still achieving submillimeter depth accuracy.