Structured light 3D scanning of projector pixels (stage 1: calibration)

I’ve been working on this method for some time now…

The concept is:

  1. Make something like a Litescape/Wiremap/Lumarca, but instead of ordered thin (1px) vertical elements, use any material in any arrangement
  2. Use a scattering field of ‘stuff’ to project onto (e.g. lots of ribbon)
  3. Use a projector to shine loads of pixels into the stuff
  4. Scan where all the pixels land in 3D
  5. Reimagine the pixels as 3D pixels, since they now have a 3D location in space
  6. Project 3D content constructed from these 3D pixels (a sketch of this follows below)
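
To make steps 4–6 concrete: once scanned, the natural data structure is a list mapping each projector pixel to the 3D point where it lands. Here’s a minimal sketch in C++ (the project’s language via openFrameworks) of driving the projector from that mapping; all names here are hypothetical rather than from our actual codebase.

```cpp
#include <vector>

// One scanned projector pixel: its 2D coordinate in the projector image,
// plus the 3D point where it was observed to land in the scattering field.
struct Pixel3D {
    int px, py;      // projector pixel coordinates
    float x, y, z;   // scanned world position
};

// Light every pixel whose scanned position falls inside a sphere,
// so the field of 'stuff' acts as a crude volumetric display.
void renderSphere(const std::vector<Pixel3D>& pixels,
                  float cx, float cy, float cz, float r,
                  unsigned char* frame, int projWidth)
{
    for (const Pixel3D& p : pixels) {
        float dx = p.x - cx, dy = p.y - cy, dz = p.z - cz;
        bool inside = (dx * dx + dy * dy + dz * dz) < r * r;
        frame[p.py * projWidth + p.px] = inside ? 255 : 0;
    }
}
```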

Since then I’ve thought of a few other decent uses for having scannable projection fields.

Early prototypes were built in VVVV, then moved to openFrameworks for its speed with pixelwise operations and its framewise accuracy. I started writing the scanning program on the bus between Bergamo airport and studio dotdotdot in Milan (just over a year ago). After lots of procrastinating and working on other projects, I’m finally making some progress with this.

Also along the way I realised that a lot of people were doing similar things. When I started to project the patterns out, I realised I was doing something similar to Johnny Chung Lee’s projector calibration work, which is how I found out about ‘Structured Light’. There’s also Kyle McDonald’s work on democratising 3D scanning (particularly with super-fast 3-phase projection methods). More recently, some things hit closer to home, such as Brett Jones’ interactive projection system.

So the first stage is to calibrate the cameras:

[Embedded video]

Here we have 2 cameras at one end of the rails, and a monitor on a ‘train’ which can move forwards and backwards along them. Each point on the screen therefore has a known 3D position (2D on the screen, 1D along the rail). Using Gray code XY structured light, we scan in where each screen pixel lands within each camera image.
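
As an aside on why Gray code rather than plain binary: adjacent Gray codes differ by only one bit, so a camera pixel sitting on a stripe boundary decodes to a neighbouring screen position rather than one wildly far away. A minimal sketch of the encode/decode (this is the standard trick, not necessarily our exact implementation):

```cpp
#include <cstdint>

// Binary -> Gray: adjacent values differ in exactly one bit.
uint32_t toGray(uint32_t n) { return n ^ (n >> 1); }

// Gray -> binary: XOR-fold the higher bits back down.
uint32_t fromGray(uint32_t g)
{
    for (uint32_t shift = 1; shift < 32; shift <<= 1)
        g ^= g >> shift;
    return g;
}

// Per camera pixel, over the projected frame sequence (hypothetical usage):
//   uint32_t code = 0;
//   for (int bit = numBits - 1; bit >= 0; --bit)
//       code = (code << 1) | (sampleFrame(bit) > threshold ? 1 : 0);
//   uint32_t screenX = fromGray(code);  // likewise for Y with the Y patterns
```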

Then if we run a correlation on this, we can build a relationship between the 4D position (2×2D) on the cameras and the 3D position in the real world. This gives us a stereo camera, specifically built to scan in the location of projector pixels. Here’s what the correlation looks like with a 4th-order power series polynomial with triangular bases.
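
For the curious, the ‘triangular’ basis means keeping every monomial of the 4D camera coordinate u = (cam1.x, cam1.y, cam2.x, cam2.y) whose total degree is at most N; the fit is then a separate linear least-squares solve for each world axis x, y, z. A hypothetical sketch of building that basis (naming is mine, not from the project code):

```cpp
#include <array>
#include <vector>

// All monomials u0^a * u1^b * u2^c * u3^d with a+b+c+d <= order,
// i.e. the 'triangular' (simplex) set of exponents.
std::vector<double> basis(const std::array<double, 4>& u, int order)
{
    std::vector<double> terms;
    for (int a = 0; a <= order; ++a)
    for (int b = 0; a + b <= order; ++b)
    for (int c = 0; a + b + c <= order; ++c)
    for (int d = 0; a + b + c + d <= order; ++d) {
        double t = 1.0;
        for (int i = 0; i < a; ++i) t *= u[0];
        for (int i = 0; i < b; ++i) t *= u[1];
        for (int i = 0; i < c; ++i) t *= u[2];
        for (int i = 0; i < d; ++i) t *= u[3];
        terms.push_back(t);
    }
    return terms; // one fitted coefficient per term, per output axis
}
```

With M calibration points, the basis rows stack into an M×K matrix A (K = 70 at 4th order in 4 variables), and each world axis gets its own least-squares solve of A·c = x (likewise y, z).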

[Embedded video]

The next steps are to:

  1. Implement a Padé approximant (rational polynomial) for accuracy at low orders (see the sketch after this list)
  2. Scan in a 3D scene
  3. Test different arrangements of scattering fields for aesthetic quality and ‘projectability’
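
On point 1, the motivation for Padé: an ideal pinhole projection is itself a ratio of linear terms, so a rational fit P(u)/Q(u) can capture the perspective division at a much lower order than a plain power series can approximate it. A hypothetical evaluation sketch, reusing the triangular basis() idea above (names are mine, and the actual fit may differ):

```cpp
#include <cstddef>
#include <vector>

// Evaluate one world coordinate as a ratio of two polynomials sharing the
// same triangular basis terms. By convention the fit fixes denCoeffs[0] = 1
// so the ratio is well defined.
double evalRational(const std::vector<double>& numCoeffs,
                    const std::vector<double>& denCoeffs,
                    const std::vector<double>& terms)  // output of basis(u, N)
{
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < terms.size(); ++i) {
        num += numCoeffs[i] * terms[i];
        den += denCoeffs[i] * terms[i];
    }
    return num / den;
}
```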

The code for all this is available on our Google Code:

http://code.kimchiandchips.com

Please get in touch if you’re planning to use this for your projects! The code there is released under a modified version of the MIT license; see the Google Code page for details (opinions on that license are also very welcome).

General thanks to Dan Tang.


3 Responses to “Structured light 3D scanning of projector pixels (stage 1: calibration)”

  1. Chris Says:

    Hey

    Have you seen the projected dots from the Kinect camera too?

    http://www.youtube.com/watch?v=nvvQJxgykcU

    Chris

  2. elliot Says:

    Thanks for the link.
    So I guess that given any section of approximately flat surface, you’ve got a dot pattern projected which can be rectified to produce a unique position (in projection space) for that section. But I presume there’s something more clever than that going on.
    I mean, rectifying an image is difficult enough. But barcodes obviously work even when distorted, so maybe it’s something like that.
    Or my thinking is too pixel-grid based, and it’s something more about the frequency and phase of dots in any region…

  3. Kimchi and Chips' blog » Blog Archive » Structured light 3D scanning of projector pixels (stage 2: proof of concept) Says:

    […] been working with Structured Light to create a 3D scan of where every pixel of a projector lands on a field of ‘stuff’. We […]
