Posts Tagged ‘MapTools’

ofxPolyfit

Thursday, May 5th, 2011

Just a quick note to introduce ofxPolyfit. It’s an openFrameworks extension which lets you correlate any 2 numerical datasets of the same or different dimensionality. e.g. a stereo camera pair gives you 2x2D data and you want 3D, so you could use ofxPolyfit to create your 4D->3D correlation.
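To give a feel for what fitting a correlation between datasets means in practice, here is a minimal sketch of the underlying idea (this is not ofxPolyfit’s actual API, just a plain 1D least-squares polynomial fit; the same approach extends to e.g. 4D->3D by using multivariate basis terms and fitting one set of coefficients per output dimension):

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Fit y = c0 + c1*x + ... + cN*x^N by least squares via the normal equations.
std::vector<double> polyfit(const std::vector<double>& x,
                            const std::vector<double>& y,
                            int order) {
    int n = order + 1;
    std::vector<std::vector<double>> A(n, std::vector<double>(n, 0.0));
    std::vector<double> b(n, 0.0);
    for (size_t k = 0; k < x.size(); ++k) {
        for (int i = 0; i < n; ++i) {
            b[i] += y[k] * std::pow(x[k], i);
            for (int j = 0; j < n; ++j)
                A[i][j] += std::pow(x[k], i + j);
        }
    }
    // Naive Gaussian elimination; fine for the small orders used here.
    for (int i = 0; i < n; ++i) {
        double pivot = A[i][i];
        for (int j = i; j < n; ++j) A[i][j] /= pivot;
        b[i] /= pivot;
        for (int r = 0; r < n; ++r) {
            if (r == i) continue;
            double f = A[r][i];
            for (int j = i; j < n; ++j) A[r][j] -= f * A[i][j];
            b[r] -= f * b[i];
        }
    }
    return b; // coefficients c0..cN
}

int main() {
    // Samples of y = 1 + 2x + 3x^2
    std::vector<double> x = {0, 1, 2, 3, 4};
    std::vector<double> y = {1, 6, 17, 34, 57};
    for (double c : polyfit(x, y, 2)) std::cout << c << " "; // ~1 2 3
    std::cout << "\n";
}
```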

ofxPolyfit also encompasses Padé calibration for projection mapping, but this isn’t easily accessible as yet. I’m likely going to be working on this over the next couple of days.

Latest news is that I’ve added a simple RANSAC implementation which lets you pick out the good data from a poor dataset. This is to be used with our tree projection experiments to filter out bad voxels.
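In case RANSAC is unfamiliar, here is a rough sketch of the idea using the classic line-fitting example (an illustration of the principle only, not ofxPolyfit’s implementation, which works on polynomial fits): repeatedly fit a model to a tiny random sample, count how many points agree with it, and keep the largest consensus set.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Point { double x, y; };

// Keep the largest set of points lying within 'threshold' of a line through
// two randomly sampled points. Assumes pts is non-empty.
std::vector<Point> ransacInliers(const std::vector<Point>& pts,
                                 int iterations, double threshold) {
    std::mt19937 rng(42);
    std::uniform_int_distribution<size_t> pick(0, pts.size() - 1);
    std::vector<Point> best;
    for (int it = 0; it < iterations; ++it) {
        const Point& a = pts[pick(rng)];
        const Point& b = pts[pick(rng)];
        double dx = b.x - a.x, dy = b.y - a.y;
        double len = std::hypot(dx, dy);
        if (len < 1e-9) continue; // degenerate sample, try again
        std::vector<Point> inliers;
        for (const Point& p : pts) {
            // Perpendicular distance from p to the line through a and b.
            double dist = std::abs(dy * (p.x - a.x) - dx * (p.y - a.y)) / len;
            if (dist < threshold) inliers.push_back(p);
        }
        if (inliers.size() > best.size()) best = inliers;
    }
    return best; // refit the model using only these inliers
}
```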

Also in the future I’ll be looking into making the code much more ‘chunky’ by identifying and classifying data sets, coefficient results, basis shapes, etc.

ofxPolyfit is available at http://code.kimchiandchips.com

Elliot

Structured light 3D scanning of projector pixels (stage 2: proof of concept)

Sunday, April 17th, 2011

We’ve been working with Structured Light to create a 3D scan of where every pixel of a projector lands on a field of ‘stuff’. We are now trying this method by projecting onto trees to create a new type of Projection Mapping / 3D Media Technology.

By determining the 3D location of where every pixel from the projector lands on a tree, each usable pixel becomes a ‘voxel’, i.e. a pixel with a 3D position (note: unlike many voxel systems, we do not expect our voxel arrangement to be homogeneous / regular).

We therefore create a set of voxels to display 3D images in the real world, i.e. a volumetric 3D display system.

Using our structured light scanning system MapTools-SL, built in openFrameworks (discussed here), and our low-cost scanning setup involving a pair of webcams and a hacked tripod from Argos:

We create the following scan:

[Video]

We then feed these known 3D points into a shader patch we wrote in VVVV, which outputs relevant colour data to each voxel, thereby creating graphical shapes within the voxels.
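Roughly, the per-voxel logic looks like the following (the real version is a VVVV shader; this is an equivalent CPU sketch with illustrative names, using a sphere as example content):

```cpp
#include <vector>

// One scanned projector pixel with its measured 3D position on the tree.
struct Voxel {
    int projectorX, projectorY; // which projector pixel this voxel belongs to
    float x, y, z;              // its scanned 3D position
};

// Write white into a single-channel projector frame wherever a voxel falls
// inside a sphere centred at (cx, cy, cz).
void renderSphere(const std::vector<Voxel>& voxels,
                  float cx, float cy, float cz, float radius,
                  std::vector<unsigned char>& frame, int projectorWidth) {
    for (const Voxel& v : voxels) {
        float dx = v.x - cx, dy = v.y - cy, dz = v.z - cz;
        bool inside = (dx * dx + dy * dy + dz * dz) < radius * radius;
        frame[v.projectorY * projectorWidth + v.projectorX] = inside ? 255 : 0;
    }
}
```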

In this video, we can see a sphere which travels through the tree. The sphere can move parallel to the beam of the projector, which indicates that the system is correctly resolving depth.

[Video]

This second video demonstrates this effect from different angles, and also a preview of what video data is being sent to the projector:

[Video]

This is a proof of concept that the mechanism actually works.

However, there is much more yet to do. The video documentation is not clear, relevant content needs to be imagined/developed/tested, there is a lot of noise in the scan set, and only a small percentage of voxels are working.

If you would like to see this project in person, then you will be able to visit an ‘in progress’ prototype at FutureEverything 11th-14th May 2011 in Manchester.

 

Kinect + Projector experiments

Wednesday, January 12th, 2011

Using Padé projection mapping to calibrate Kinect’s 3D world with a projector.

  1. Using the Kinect camera, we can scan a 3D scene in realtime.
  2. Using a video projector, we can project onto a 3D scene in realtime.

Combining these, we re-project images onto geometry to create a new technique for augmented reality.

Previous videos (showing the process):

[Video] [Video] [Video]

The pipeline is:

  1. Capture Depth at CameraXY (OpenNI)
  2. Convert to image of WorldXYZ
  3. Padé transformation to create WorldXYZ map in ProjectorXY
  4. Calculate NormalXYZ map in ProjectorXY (see the sketch after this list)
  5. Gaussian Blur X of NormalXYZ in ProjectorXY
  6. Gaussian Blur Y of NormalXYZ in ProjectorXY
  7. Light calculations on NormalXYZ, WorldXYZ maps in ProjectorXY
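As a rough illustration of step 4, here is one way to estimate the NormalXYZ map from the WorldXYZ map by crossing the differences to neighbouring positions (a sketch with an assumed data layout, not the actual implementation):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalise(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0 ? Vec3{v.x / len, v.y / len, v.z / len} : Vec3{0, 0, 0};
}

// world: width*height positions indexed by projector pixel; normals: same size.
// The normal direction may need flipping depending on coordinate handedness.
void computeNormals(const Vec3* world, Vec3* normals, int width, int height) {
    for (int y = 0; y < height - 1; ++y) {
        for (int x = 0; x < width - 1; ++x) {
            int i = y * width + x;
            Vec3 dx = sub(world[i + 1], world[i]);     // neighbour to the right
            Vec3 dy = sub(world[i + width], world[i]); // neighbour below
            normals[i] = normalise(cross(dx, dy));
        }
    }
}
```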

Link at Design Korea 2010

Tuesday, December 14th, 2010

Here’s our most recent project, which closed 2 days ago at Design Korea 2010, Seoul.

Structured light 3D scanning of projector pixels (stage 1: calibration)

Saturday, November 6th, 2010

I’ve been working on this method for a while now…

The concept is:

  1. Make something like a litescape/wiremap/lumarca, but instead of ordered thin (1px) vertical elements, use any material in any arrangement
  2. Use a scattering field of ‘stuff’ to project onto (e.g. lots of ribbon)
  3. Use a projector to shine loads of pixels into the stuff
  4. Scan where all the pixels land in 3D
  5. Reimagine the pixels as 3D pixels, since they now have a 3D location in space
  6. Project 3D content constructed from these 3D pixels

Since then I’ve thought of a few other decent uses for having scannable projection fields.

Early prototypes were in VVVV, then I moved to openFrameworks for its speed with pixelwise operations and its framewise accuracy. I started writing the scanning program on the bus between Bergamo airport and studio dotdotdot in Milan (which was just over a year ago). After lots of procrastinating and working on other projects, I’m finally getting some progress with this.

Also along the way I realised that a lot of people were doing similar things. When I started to project out the patterns I realised I was doing something similar to Johnny Chung Lee with his projection calibration work, which is where I found out about ‘Structured Light’. There’s also Kyle McDonald’s work with democratising 3D scanning (particularly with super-fast 3-phase projection methods). Then more recently some things hit closer to home, such as Brett Jones’ interactive projection system.

So the first stage is to calibrate the cameras:

[Video]

Here we have 2 cameras at one end of the rails. The monitor sits on a ‘train’ which can move forwards and backwards, so each point on the screen has a 3D position (2D on the screen, 1D on the rail). Using greycode XY structured light, we scan in where each screen pixel appears within each camera image.
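For anyone unfamiliar with greycode structured light: each screen coordinate is broadcast one bit per projected frame, so a camera pixel can decode which screen pixel it is looking at from the on/off sequence it observes. A minimal sketch of the encode/decode step (an illustration, not MapTools-SL’s actual code):

```cpp
#include <cstdint>

// Encode a coordinate as Gray code: neighbouring coordinates differ by one
// bit, which makes the decoded scan robust at the edges between stripes.
uint32_t toGray(uint32_t value) { return value ^ (value >> 1); }

// Decode the bit sequence a camera pixel observed back into a screen coordinate.
uint32_t fromGray(uint32_t gray) {
    uint32_t value = 0;
    for (; gray; gray >>= 1) value ^= gray;
    return value;
}

// For projected frame 'bit' of the X pass: is screen column x lit?
bool isLit(uint32_t x, int bit) { return (toGray(x) >> bit) & 1; }
```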

Then if we run a correlation on this, we can try to make a relationship between the 4D position (2x2D) on the cameras and the 3D position in the real world. This gives us a stereo camera, specifically built to scan in the location of projector pixels. Here’s what the correlation looks like with a 4th-order power series polynomial using triangular bases.

[Video]
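Here ‘triangular bases’ means keeping only the polynomial terms whose total degree is at most the chosen order (e.g. for 2 inputs at 2nd order: 1, x, y, x², xy, y²). A small sketch of enumerating such a basis (illustrative, not the library’s code):

```cpp
#include <iostream>
#include <vector>

// Recursively enumerate exponent tuples for 'dimensions' inputs whose total
// degree is at most 'order'.
void triangularBasis(int dimensions, int order,
                     std::vector<int>& current,
                     std::vector<std::vector<int>>& basis) {
    if ((int)current.size() == dimensions) {
        basis.push_back(current);
        return;
    }
    int used = 0;
    for (int e : current) used += e;
    for (int e = 0; used + e <= order; ++e) {
        current.push_back(e);
        triangularBasis(dimensions, order, current, basis);
        current.pop_back();
    }
}

int main() {
    std::vector<std::vector<int>> basis;
    std::vector<int> current;
    triangularBasis(2, 2, current, basis); // 2 inputs, 2nd order: 6 terms
    for (const auto& term : basis) {
        for (int e : term) std::cout << e << " ";
        std::cout << "\n";
    }
}
```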

The next steps are to:

  1. Implement a Padé polynomial for accuracy at low orders
  2. Scan in a 3D scene
  3. Test different arrangements of scattering fields for aesthetic quality and ‘projectability’

The code for all this is available on our google code:

http://code.kimchiandchips.com

Please get in touch if you’re planning to use this for your projects! The code there is released under a modified version of the MIT license. See the google code page for details (opinions on that license are also very welcome).

General thanks to Dan Tang.

Padé approximant projection mapping

Tuesday, October 5th, 2010

This is a quick and scruffy video demonstrating a new method for calibrating projection mapping that I’ve been working on.

Basic overview:

  1. Mesh of scene inside VVVV and on iPad app
  2. Create correspondences between points in world space (XYZ) and points in projector space (XY)
  3. Use the iPad to select the world space points, and to control a cursor to input the projector space points
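For a sense of why a Padé (rational) form is a good fit here: a pinhole projection is already a ratio of linear terms, so a low-order rational map can capture it where a plain polynomial would need a high order. A sketch of evaluating such a map for one projector axis (the coefficient layout is illustrative, not the actual implementation):

```cpp
// Evaluate one projector-space coordinate as a ratio of linear terms in the
// world-space position. Coefficient names are illustrative only.
struct PadeMapX {
    float nx, ny, nz, n1; // numerator:   nx*X + ny*Y + nz*Z + n1
    float dx, dy, dz;     // denominator: dx*X + dy*Y + dz*Z + 1

    float evaluate(float X, float Y, float Z) const {
        float numerator   = nx * X + ny * Y + nz * Z + n1;
        float denominator = dx * X + dy * Y + dz * Z + 1.0f;
        return numerator / denominator; // projector-space X for this world point
    }
};
```

The correspondences gathered in steps 2 and 3 are what such a mapping gets fitted to.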

We’re going to put together a clearer video about this as soon as we can (involving Mimi’s communication skills!). So hold tight if you can’t quite figure it out from this phone camera clip. We’ll also be going into more detail about all this at our workshop at the Node 10 festival, presented by myself and Chris Plant in mid November. Code and documentation will also be made available around that time. I’m not entirely certain how to release the iPad/iPhone app yet.

The end aim of all this is that you can very accurately calibrate a projector for projection mapping within 5 or 10 minutes.

This method can also be extended for use with structured light with light sensors (either embedded or external). More on that shortly!