Washington: Scientists have developed a new algorithm that could turn off-the-shelf digital cameras and smartphones into high-quality 3D scanners.
“One of the things my lab has been focusing on is getting 3D image capture from relatively low-cost components,” said Gabriel Taubin, a professor at Brown University in the US.
“The 3D scanners on the market today are either very expensive, or are unable to do high-resolution image capture, so they can’t be used for applications where details are important,” said Taubin.
Most high-quality 3D scanners capture images using a technique known as structured light. A projector casts a series of light patterns on an object, while a camera captures images of the object, the researchers said.
The ways in which those patterns deform over and around an object can be used to render a 3D image. However, for the technique to work, the pattern projector and the camera have to be precisely synchronised, which requires specialised and expensive hardware.
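To illustrate the structured-light idea, here is a minimal sketch (not the researchers' actual code) of how a series of projected binary stripe patterns lets each camera pixel identify which projector column illuminated it; that pixel-to-column correspondence is what triangulation then turns into depth. The function name, the plain-binary encoding, and the toy 8-column scene are all illustrative assumptions.

```python
import numpy as np

def decode_binary_patterns(captures, thresh=0.5):
    """Decode binary structured-light captures into projector column indices.

    captures: list of 2D arrays, one per projected bit pattern (most
    significant bit first), with values in [0, 1]. Each camera pixel's
    sequence of bright/dark readings spells out the index of the projector
    column that lit it; triangulating that correspondence against the
    camera ray yields a 3D point.
    """
    bits = [(img > thresh).astype(np.uint32) for img in captures]
    code = np.zeros_like(bits[0])
    for b in bits:                      # accumulate the binary code, MSB first
        code = (code << 1) | b
    return code

# Illustrative scene: a projector stripes 4-bit column codes across
# 8 columns; we simulate the four captured bit-plane images and recover
# the column index at every pixel.
cols = np.arange(8, dtype=np.uint32)
captures = [((cols >> shift) & 1).astype(float)[None, :].repeat(2, axis=0)
            for shift in (3, 2, 1, 0)]  # MSB-first bit planes
decoded = decode_binary_patterns(captures)
```

Real systems typically use Gray codes or phase-shifted sinusoids rather than plain binary stripes, but the decoding principle is the same.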
The algorithm developed by the researchers enables the structured light technique to be done without synchronisation between projector and camera, which means an off-the-shelf camera can be used with an untethered structured light flash.
The camera just needs to have the ability to capture uncompressed images in burst mode (several successive frames per second), which many DSLR (digital single-lens reflex) cameras and smartphones can do.
The problem in trying to capture 3D images without synchronisation is that the projector could switch from one pattern to the next while the image is in the process of being exposed. As a result, the captured images are mixtures of two or more patterns.
Another problem is that most modern digital cameras use a rolling shutter mechanism. Rather than capturing the whole image in one snapshot, cameras scan the field either vertically or horizontally, sending the image to the camera’s memory one pixel row at a time.
As a result, parts of the image are captured at slightly different times, which can also lead to mixed patterns.
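The rolling-shutter problem described above can be sketched in a few lines: if the projector switches patterns partway through a frame's row-by-row readout, the captured frame contains some rows from one pattern and some from the next. The simulation below, with its all-dark and all-lit toy patterns and a single switch row, is an illustrative assumption, not the researchers' model.

```python
import numpy as np

def rolling_shutter_capture(patterns, switch_row):
    """Simulate a rolling-shutter exposure spanning a pattern switch.

    Rows above `switch_row` are read out while patterns[0] is projected;
    rows at or below it see patterns[1]. The single captured frame is
    therefore a mixture of two projected patterns -- the core difficulty
    the new algorithm has to undo.
    """
    frame = patterns[0].copy()
    frame[switch_row:] = patterns[1][switch_row:]
    return frame

p0 = np.zeros((4, 4))   # pattern 0: all dark
p1 = np.ones((4, 4))    # pattern 1: all lit
mixed = rolling_shutter_capture([p0, p1], switch_row=2)
```

A mixed frame like this cannot be fed directly to a structured-light decoder, which is exactly the point Moreno makes below.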
“That’s the main problem we’re dealing with,” said Daniel Moreno, a graduate student at Brown University.
“We can’t use an image that has a mixture of patterns. So with the algorithm, we can synthesise images – one for every pattern projected – as if we had a system in which the pattern and image capture were synchronised,” Moreno said.
After the camera captures a burst of images, the algorithm calibrates the timing of the image sequence using binary information embedded in the projected patterns.
Then it goes through the images, pixel by pixel, to assemble a new sequence of images that captures each pattern in its entirety.
Once the complete pattern images are assembled, a standard structured light 3D reconstruction algorithm can be used to create a single 3D image of the object or space.
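The reassembly step above can be sketched as follows. This is a simplified stand-in for the actual method: here the pattern seen by each pixel row of each frame is simply given as input, whereas the real algorithm has to infer that timing from the binary information embedded in the projected patterns. The function name and the toy frames are illustrative assumptions.

```python
import numpy as np

def synthesize_pattern_images(frames, row_pattern_ids, n_patterns):
    """Reassemble one complete image per projected pattern.

    frames: list of captured 2D frames, each possibly a mixture of
    patterns. row_pattern_ids: for each frame, a per-row list saying
    which pattern that row saw during readout (assumed known here).
    Copying each row into the image for its pattern yields the
    synchronized-looking pattern images a standard structured-light
    reconstruction expects.
    """
    h, w = frames[0].shape
    out = [np.zeros((h, w)) for _ in range(n_patterns)]
    for frame, ids in zip(frames, row_pattern_ids):
        for r in range(h):
            out[ids[r]][r] = frame[r]
    return out

# Two toy patterns, each split across two mixed captures: frame0 holds the
# top of pattern A and the bottom of B; frame1 holds the complement.
pA = np.full((4, 3), 5.0)
pB = np.full((4, 3), 7.0)
frame0 = np.vstack([pA[:2], pB[2:]])
frame1 = np.vstack([pB[:2], pA[2:]])
recovered = synthesize_pattern_images(
    [frame0, frame1], [[0, 0, 1, 1], [1, 1, 0, 0]], n_patterns=2)
```

Once every pattern image is complete, any off-the-shelf structured-light reconstruction can take over, as the article notes.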