Jumping spiders inspired researchers to create a depth sensor

Washington D.C. [USA]: Taking inspiration from jumping spiders, researchers have developed a compact depth sensor that could be carried onboard microrobots and small wearable devices.

The research is published in the Proceedings of the National Academy of Sciences (PNAS).

Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have made a compact and efficient depth sensor that could be used onboard microrobots, in small wearable devices, or in lightweight virtual and augmented reality headsets.

“Evolution has produced a wide variety of optical configurations and vision systems that are tailored to different purposes,” said Zhujun Shi, a PhD candidate in the Department of Physics and co-first author of the paper.

“Optical design and nanotechnology are finally allowing us to explore artificial depth sensors and other vision systems that are similarly diverse and effective,” added Shi.

Many of today’s depth sensors, such as those in phones, cars, and video game consoles, use integrated light sources and multiple cameras to measure distance.

Face ID on a smartphone, for example, uses thousands of laser dots to map the contours of the face.

This works for large devices with room for batteries and fast computers, but what about small devices with limited power and computation, like smartwatches or microrobots?

Humans measure depth using stereo vision: when we look at an object, each of our two eyes collects a slightly different image.

“That matching calculation, where you take two images and perform a search for the parts that correspond, is computationally burdensome,” said Todd Zickler, the William and Ami Kuan Danoff Professor of Electrical Engineering and Computer Science at SEAS and co-senior author of the study.

“Humans have a nice, big brain for those computations but spiders don’t,” added Zickler.
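The matching calculation Zickler describes can be illustrated with a toy one-dimensional search. The sketch below (a NumPy illustration; the function names and parameters are mine, not from the paper) slides a patch from one eye's image across a row of the other eye's image and picks the shift with the lowest mismatch cost; repeating this search for every pixel of every row is what makes stereo matching burdensome.

```python
import numpy as np

def disparity_search(left_patch, right_row, max_disp):
    """Brute-force stereo matching for one patch along one image row.

    Tries every candidate shift (disparity) and returns the one whose
    sum-of-squared-differences cost against the patch is smallest.
    """
    w = len(left_patch)
    costs = [np.sum((left_patch - right_row[d:d + w]) ** 2)
             for d in range(max_disp)]
    return int(np.argmin(costs))

# Toy example: a patch copied from offset 10 of the row should be
# rediscovered at disparity 10 by the exhaustive search.
rng = np.random.default_rng(0)
row = rng.random(64)
patch = row[10:18]
print(disparity_search(patch, row, 30))  # 10
```

Even this toy version costs one full scan per patch; a real stereo system repeats it across the whole image, which is exactly the workload a spider-sized brain, or a microrobot, cannot afford.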

Jumping spiders have evolved a more efficient system to measure depth. Each principal eye has a few semi-transparent retinae arranged in layers, and these retinae measure multiple images with different amounts of blur.

For example, if a jumping spider looks at a fruit fly with one of its principal eyes, the fly will appear sharper in one retina’s image and blurrier in another. This change in blur encodes information about the distance to the fly.

In computer vision, this type of distance calculation is known as depth from defocus.
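The core idea of depth from defocus can be sketched in a few lines: given two images of the same scene focused at different depths, the ratio of local sharpness tells you which focal plane each point is closer to. The NumPy sketch below is a hedged toy (a box blur stands in for optical defocus, and the helper names are illustrative, not from the paper).

```python
import numpy as np

def local_sharpness(img):
    # Magnitude of a discrete Laplacian: large where the image is in
    # focus (rich in high frequencies), small where it is blurred.
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.abs(lap)

def box_blur(img):
    # Crude stand-in for optical defocus: average each pixel with its
    # four neighbours (wrap-around edges, fine for a toy demo).
    return (img
            + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
            + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)) / 5.0

def depth_cue(img_a, img_b, eps=1e-6):
    # Per-pixel sharpness ratio between two differently focused images;
    # values above 1 suggest the point lies nearer img_a's focal plane.
    return (local_sharpness(img_a) + eps) / (local_sharpness(img_b) + eps)

# Demo: blur strips high frequencies, so measured sharpness drops
# in the defocused copy of a textured "scene".
rng = np.random.default_rng(1)
scene = rng.random((32, 32))
defocused = box_blur(scene)
print(local_sharpness(scene).mean() > local_sharpness(defocused).mean())  # True
```

A real depth-from-defocus pipeline maps that sharpness ratio through a calibrated blur model to metric distance, but the comparison above is the whole trick: no second camera and no correspondence search, just two differently focused measurements of the same view.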

But so far, replicating nature has required large cameras with motorized internal components that capture differently focused images over time.

Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and co-senior author of the paper, and his lab have already demonstrated metalenses that can simultaneously produce several images containing different information.

Building off that research, the team designed a metalens that can simultaneously produce two images with different blur.

“Instead of using layered retina to capture multiple simultaneous images, as jumping spiders do, the metalens splits the light and forms two differently-defocused images side-by-side on a photosensor,” said Shi, who is part of Capasso’s lab.