Archive for November, 2010

openFrameworks <3’s VVVV

Monday, November 29th, 2010
[Embedded YouTube video]

Sharing graphics assets between openFrameworks and VVVV using shared memory.

It should also be possible to share DirectShow video assets from VVVV back to openFrameworks using the same method.

Proof-of-concept code is available as ofxSharedMemory
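
ofxSharedMemory itself is an openFrameworks (C++/Windows) addon; purely as an illustration of the underlying idea — two processes mapping the same named shared-memory block, one writing a pixel buffer and the other reading it — here is a minimal Python sketch. The block name and frame dimensions are hypothetical:

```python
# Illustrative sketch of the shared-memory idea behind ofxSharedMemory:
# a "sender" process writes a pixel buffer into a named shared-memory
# block, and a "receiver" attaches to the same name and reads it back.
# (Block name and frame size are made up for this example.)
from multiprocessing import shared_memory

WIDTH, HEIGHT, CHANNELS = 320, 240, 4          # RGBA frame, sizes illustrative
FRAME_BYTES = WIDTH * HEIGHT * CHANNELS

# "Sender" side (e.g. openFrameworks): create the block and write a frame.
sender = shared_memory.SharedMemory(name="of_vvvv_frame", create=True,
                                    size=FRAME_BYTES)
frame = bytes(i % 256 for i in range(FRAME_BYTES))  # dummy pixel data
sender.buf[:FRAME_BYTES] = frame

# "Receiver" side (e.g. VVVV): attach to the same name and read the bytes.
receiver = shared_memory.SharedMemory(name="of_vvvv_frame")
received = bytes(receiver.buf[:FRAME_BYTES])

receiver.close()
sender.close()
sender.unlink()  # free the block once both sides are done
```

In the real addon both sides would keep the mapping open and re-read every frame; the sketch only shows a single write/read round trip.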

What makes your day?

Tuesday, November 9th, 2010

What Makes Your Day? is a short film on happiness by Kingston University animation student Napatsawan Chirayukool.


Sunday, November 7th, 2010

We are making an interactive projection mapping installation for Design Korea 2010. Here's the first mockup with cardboard boxes.
The first mockup of the installation we are working on for Design Korea 2010.

Found the error!

Saturday, November 6th, 2010

Turns out that the 0.35m dataset was broken, and we get a pretty much perfect fit without it at low orders.
[Embedded YouTube video]

Here’s the dirty scan:

It should look something like this:

It seems to have missed a data frame. This should have come up in the error checking…

Hmm. Anyway…

Structured light 3D scanning of projector pixels (stage 1: calibration)

Saturday, November 6th, 2010

I’ve been working on this method for a while now…

The concept is:

  1. Make something like a litescape/wiremap/lumarca, but instead of ordered thin (1px) vertical elements, use any material in any arrangement
  2. Use a scattering field of ‘stuff’ to project onto (e.g. lots of ribbon)
  3. Use a projector to shine loads of pixels into the stuff
  4. Scan where all the pixels land in 3D
  5. Reimagine the pixels as 3D pixels, since they now have a 3D location in space
  6. Project 3D content constructed from these 3D pixels
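
The last three steps above can be sketched in a few lines; the scan data and the content function here are fabricated for illustration:

```python
# Illustrative sketch of steps 4-6: once each projector pixel has a scanned
# 3D landing position, 3D content can be rendered by evaluating it at those
# positions. The scan result below is made up for this example.

# Pretend scan result: projector pixel (u, v) -> 3D landing point (x, y, z).
scanned = {
    (0, 0): (0.0, 0.0, 0.10),
    (1, 0): (0.1, 0.0, 0.35),
    (0, 1): (0.0, 0.1, 0.60),
    (1, 1): (0.1, 0.1, 0.90),
}

def content(x, y, z):
    """3D content: light up everything in front of a plane at z = 0.5."""
    return 255 if z < 0.5 else 0

# Each projector pixel takes the brightness of the 3D point it lands on.
framebuffer = {uv: content(*xyz) for uv, xyz in scanned.items()}
```

Sending `framebuffer` back out through the projector then draws the 3D content in the physical scattering field.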

Since then I’ve thought of a few other decent uses for having scannable projection fields.

Early prototypes were in VVVV, then moved to openFrameworks for its speed with pixelwise operations and its accuracy with framewise ones. I started writing the scanning program on the bus between Bergamo airport and studio dotdotdot in Milan (just over a year ago). After lots of procrastinating and working on other projects, I’m finally making some progress with this.

Also along the way I realised that a lot of people were doing similar things. When I started projecting the patterns out, I realised I was doing something similar to Johnny Chung Lee with his projection calibration work, which is how I found out about ‘Structured Light’. There’s also Kyle McDonald’s work on democratising 3D scanning (particularly with super-fast 3-phase projection methods). More recently some things have hit closer to home, such as Brett Jones’ interactive projection system.

So the first stage is to calibrate the cameras:

[Embedded YouTube video]

Here we have 2 cameras at one end of the rails, and the monitor is on a ‘train’ which can move forwards and backwards. Each point on the screen then has a 3D position (2D on the screen, 1D on the rail). Using Gray-code XY structured light, we scan in where each screen pixel appears within each camera image.
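
A minimal sketch of the Gray-code idea (sizes and names here are illustrative, not the actual scanner code): each projected pattern shows one bit of every pixel's Gray-coded coordinate, so a camera watching a point across all the patterns can recover which screen pixel it is looking at.

```python
# Illustrative Gray-code XY structured-light decoding. Each black/white
# pattern carries one bit of each screen pixel's Gray-coded coordinate;
# observing the bit sequence at a camera pixel identifies the screen pixel.

def gray_encode(n):
    """Binary-reflected Gray code of n: adjacent values differ by one bit."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code by cascading XORs down the bits."""
    n = g
    while g:
        g >>= 1
        n ^= g
    return n

W = 8            # screen width in pixels (illustrative)
PATTERNS = 3     # ceil(log2(W)) patterns needed for the X axis

# Simulate what one camera pixel sees: for each screen column x, the
# sequence of on/off values across the patterns, then decode it back to x.
decoded = []
for x in range(W):
    observed = [(gray_encode(x) >> k) & 1 for k in range(PATTERNS)]
    g = sum(bit << k for k, bit in enumerate(observed))
    decoded.append(gray_decode(g))
```

Gray codes are preferred over plain binary here because adjacent pixels differ in only one pattern, which makes the decoding much more robust to blur and thresholding errors at stripe boundaries.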

Then if we run a correlation on this, we can try to find a relationship between the 4D position (2x2D) on the cameras and the 3D position in the real world. This gives us a stereo camera, specifically built to scan in the location of projector pixels. Here’s what the correlation looks like at 4th order, using a power-series polynomial with triangular bases.
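
To illustrate what a "4th-order power series with triangular bases" means here: the basis is every monomial in the four camera coordinates whose total degree is at most the order, and the fit is a least-squares solve from those terms to each world coordinate. The sketch below (function names are my own, not from the project code) just enumerates the basis terms:

```python
# Illustrative enumeration of the "triangular" power-series basis: all
# monomials in the four camera coordinates (x1, y1, x2, y2) whose total
# degree is <= the fit order. The least-squares solve itself is omitted.
from itertools import product

def triangular_basis(n_vars, order):
    """Exponent tuples (e1..en) with e1 + ... + en <= order."""
    return [exps for exps in product(range(order + 1), repeat=n_vars)
            if sum(exps) <= order]

# 4 camera coordinates, 4th-order fit:
terms = triangular_basis(4, 4)
n_coeffs = len(terms)  # C(4 + 4, 4) = 70 coefficients per world axis
```

The triangular cut (total degree, rather than each exponent independently) keeps the coefficient count manageable: 70 unknowns per world axis at 4th order instead of 5^4 = 625 for the full tensor-product basis.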

[Embedded YouTube video]

The next steps are to:

  1. Implement a Padé approximant for accuracy at low orders
  2. Scan in a 3D scene
  3. Test different arrangements of scattering fields for aesthetic quality and ‘projectability’

The code for all this is available on our Google Code:

Please get in touch if you’re planning to use this for your projects! The code there is released under a modified version of the MIT license; see the Google Code page for details (opinions on that license are also very welcome).

General thanks to Dan Tang.