Structured light 3D scanning of projector pixels (stage 1: calibration)
I’ve been working on this method for a while now.
The concept is:
- Make something like a litescape/wiremap/lumarca, but instead of ordered thin (1px) vertical elements, use any material in any arrangement
- Use a scattering field of ‘stuff’ to project onto (e.g. lots of ribbon)
- Use a projector to shine loads of pixels into the stuff
- Scan where all the pixels land in 3D
- Reimagine the pixels as 3D pixels, since they now have a 3D location in space
- Project 3D content constructed from these 3D pixels
Since then I’ve thought of a few other decent uses for having scannable projection fields.
Early prototypes were in VVVV; I then moved to openFrameworks for its speed with pixelwise operations and its framewise accuracy. I started writing the scanning program on the bus between Bergamo airport and studio dotdotdot in Milan (just over a year ago). After lots of procrastinating and working on other projects, I’m finally making some progress with this.
Also along the way I realised that a lot of people were doing similar things. When I started projecting out the patterns, I realised I was doing something similar to Johnny Chung Lee’s projector calibration work, which is where I found out about ‘structured light’. There’s also Kyle McDonald’s work on democratising 3D scanning (particularly with super-fast three-phase projection methods). Then more recently some things hit closer to home, such as Brett Jones’ interactive projection system.
So the first stage is to calibrate the cameras:
Here we have 2 cameras at one end of the rails, with the monitor on a ‘train’ which can move forwards and backwards. Each point on the screen then has a known 3D position (2D on the screen, 1D along the rail). Using Gray-code XY structured light, we scan in where each screen pixel lands within each camera image.
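The Gray-code part works because binary-reflected Gray codes change only one bit between adjacent values, so decoding errors at stripe boundaries are limited to ±1 pixel. A minimal sketch in Python (the project itself is in openFrameworks/C++; all names here are illustrative):

```python
def gray_code(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Recover the original integer from its Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def patterns(width):
    """One on/off stripe pattern per bit plane, MSB first.

    Projecting each bit plane as a frame (plus its inverse, in
    practice, for robust thresholding) lets a camera recover every
    screen column index independently.
    """
    bits = max(1, (width - 1).bit_length())
    return [[(gray_code(x) >> b) & 1 for x in range(width)]
            for b in reversed(range(bits))]
```

The same scheme is run twice, once for X and once for Y, giving the 2D screen coordinate seen at each camera pixel.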
Then if we run a correlation on this, we can build a relationship between the 4D position (2×2D) on the cameras and the 3D position in the real world. This gives us a stereo camera, specifically built to scan in the location of projector pixels. Here’s what the correlation looks like with a 4th-order power-series polynomial (triangular basis).
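The correlation step amounts to a least-squares polynomial fit from the 4D camera observation (two pixel coordinates per camera) to each world coordinate. A self-contained sketch under assumed names — not the project’s actual code, which uses a different basis ordering and order:

```python
from itertools import combinations_with_replacement

def monomials(point, order):
    """All monomials of the input variables up to `order`, constant first."""
    terms = [1.0]
    for o in range(1, order + 1):
        for combo in combinations_with_replacement(point, o):
            prod = 1.0
            for v in combo:
                prod *= v
            terms.append(prod)
    return terms

def lstsq_fit(inputs, targets, order):
    """Fit one output coordinate: solve the normal equations
    A^T A c = A^T y by Gaussian elimination with partial pivoting."""
    A = [monomials(p, order) for p in inputs]
    n, m = len(A), len(A[0])
    M = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(m)]
         for i in range(m)]
    b = [sum(A[r][i] * targets[r] for r in range(n)) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for j in range(col, m):
                M[r][j] -= f * M[col][j]
            b[r] -= f * b[col]
    c = [0.0] * m
    for i in reversed(range(m)):
        c[i] = (b[i] - sum(M[i][j] * c[j] for j in range(i + 1, m))) / M[i][i]
    return c

def evaluate(coeffs, point, order):
    return sum(c * t for c, t in zip(coeffs, monomials(point, order)))
```

Fitting three such polynomials (one per world axis) against the rail/screen ground truth gives the 4D→3D lookup that turns the camera pair into a pixel-scanning stereo rig.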
The next steps are to:
- Implement a Padé (rational) approximant for better accuracy at low orders
- Scan in a 3D scene
- Test different arrangements of scattering fields for aesthetic quality and ‘projectability’
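On the Padé point above: a rational function can squeeze more accuracy out of the same number of coefficients than a plain power series. A textbook illustration (the [2/2] Padé approximant of exp, not anything from this project) — it uses five coefficients, the same budget as a 4th-order Taylor polynomial, but arranged as a ratio:

```python
import math

def taylor_exp(x, order):
    """Truncated Taylor series of exp(x)."""
    return sum(x**k / math.factorial(k) for k in range(order + 1))

def pade22_exp(x):
    """[2/2] Pade approximant of exp(x): five coefficients,
    same budget as a 4th-order Taylor polynomial."""
    num = 1 + x / 2 + x**2 / 12
    den = 1 - x / 2 + x**2 / 12
    return num / den

x = 1.0
print(abs(taylor_exp(x, 4) - math.exp(x)))  # ~9.9e-3
print(abs(pade22_exp(x) - math.exp(x)))     # ~4.0e-3
```

The same idea applied to the camera correlation would mean fitting a ratio of two low-order polynomials instead of one high-order one.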
The code for all this is available on our Google Code:
Please get in touch if you’re planning to use this for your projects! The code there is released under a modified version of the MIT license; see the Google Code page for details (opinions on that license are also very welcome).
General thanks to Dan Tang.