Structured light 3D scanning of projector pixels (stage 2: proof of concept)

April 17th, 2011

We’ve been working with Structured Light to create a 3D scan of where every pixel of a projector lands on a field of ‘stuff’. We are now trying this method by projecting onto trees to create a new type of Projection Mapping / 3D Media Technology.

By determining the 3D location of where every pixel from the projector lands on a tree, each usable pixel becomes a ‘voxel’, i.e. a pixel with a 3D position (note: unlike many voxel systems, we do not expect our voxel arrangement to be homogeneous / regular).

We therefore create a set of voxels to display 3D images in the real world, i.e. a volumetric 3D display system.
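
For illustration, a minimal sketch (hypothetical names, not the actual MapTools-SL data structures) of how each usable projector pixel might be stored as a voxel:

// One record per usable projector pixel: its 2D coordinate in the projector
// image plus the 3D world position where the scan found that pixel to land.
#include <vector>

struct Voxel {
    int   projectorX, projectorY;   // pixel coordinate in the projector image
    float worldX, worldY, worldZ;   // scanned 3D position where this pixel lands
    bool  found;                    // false where the scan could not resolve this pixel
};

// one entry per projector pixel, e.g. for a 1024x768 projector:
std::vector<Voxel> voxels(1024 * 768);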

Using our structured light scanning system MapTools-SL, built in openFrameworks (discussed here), together with our low cost scanning setup involving a pair of webcams and a hacked tripod from Argos, we create the following scan:

[Embedded video]

We then feed these known 3D points into a shader patch we wrote in VVVV, which outputs the relevant colour data to each voxel, thereby creating graphical shapes within the voxels.

In this video, we can see a sphere which travels through the tree. The sphere can move parallel to the beam of the projector, which indicates that the system is correctly resolving depth.
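
As an illustration of the idea (a CPU-side sketch only; the real patch is a GPU shader in VVVV, and the Voxel layout here is an assumption), lighting each voxel white when its scanned position falls inside a moving sphere gives exactly this effect:

#include <vector>

struct Voxel {
    int   projectorX, projectorY;
    float worldX, worldY, worldZ;
    bool  found;
};

// Writes white into the projector image at every voxel whose scanned 3D
// position lies inside a sphere of the given centre and radius, black elsewhere.
// 'pixels' is an RGB image the size of the projector (width * height * 3 bytes).
void renderSphere(const std::vector<Voxel>& voxels, unsigned char* pixels,
                  int projectorWidth, float cx, float cy, float cz, float radius) {
    for (const Voxel& v : voxels) {
        if (!v.found) continue;
        float dx = v.worldX - cx, dy = v.worldY - cy, dz = v.worldZ - cz;
        bool inside = dx * dx + dy * dy + dz * dz < radius * radius;
        unsigned char c = inside ? 255 : 0;
        int i = 3 * (v.projectorY * projectorWidth + v.projectorX);
        pixels[i] = pixels[i + 1] = pixels[i + 2] = c;
    }
}

Animating the sphere centre along the projector's beam axis over time produces the motion seen in the video.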

[Embedded video]

This second video demonstrates the effect from different angles, and also shows a preview of the video data being sent to the projector:

[Embedded video]

This is a proof of concept that the mechanism actually works.

However, there is much more to do: the video documentation is not clear, relevant content needs to be imagined/developed/tested, there is a lot of noise in the scan set, and only a small percentage of voxels are working.

If you would like to see this project in person, you will be able to visit an ‘in progress’ prototype at FutureEverything, 11th-14th May 2011, in Manchester.

 

森の木琴 (Forest Xylophone)

April 4th, 2011
[Embedded video]

Art is judged for its authenticity from many angles, but it exerts its strongest power to touch people when it sits within nature without any sense of incongruity. This is a commercial film produced to promote Docomo's new mobile phone. I am jealous of the people who came up with the idea and made it happen, but even more jealous that there was a client willing to accept and understand such an idea…

designed by http://www.drill-inc.jp/

Evaluation of Logitech C910 webcam for Computer Vision use

April 1st, 2011

I’ve recently been using a pair of PlayStation 3 Eyes for reading structured light patterns projected onto objects. These cameras have had a lot of attention from hackers due to their value/performance ratio.

The PS3Eye is a camera built for machine vision: it can provide ‘lossless’ 640×480 RGB frames at 60 frames per second with low latency, and is therefore particularly relevant for realtime tracking applications (e.g. multi-touch, 3-phase scanning). For OS X there is the Macam driver, and for Windows there is the fully featured CL-Eye driver from AlexP, which supports programmatic control of multiple cameras with full support for all camera features, gives each camera an identity (through a GUID), and is free for 1 or 2 cameras per system.

But for a recent project, it became apparent that I needed resolution rather than framerate. My first instinct was to move to DSLRs, and I began developing a libgphoto2 extension for openFrameworks called ofxDSLR. This route had the following issues:

  • Relatively expensive (compact cameras do not support remote capture, meaning I would have to use DSLRs, with the cheapest compatible option at around £350 with lens: a Canon 1000D + 18-55mm lens)
  • Requires external power supply / recharging
  • Heavier than machine vision cameras
  • Flaky libraries (libgphoto2 isn’t really built with CV in mind, I found it was taking a lot of time to get results, and there is a lack of solutions for Windows)
  • Slow capture (several seconds between send capture command and receive full result)

A DSLR offers:

  • Fantastic resolution
  • Great optics
  • Programmatic control of ISO, Focus, Shutter, Aperture
  • More than 8 bits per colour channel

Due to the above issues, I decided to explore other options. This led me to the Logitech C910, which supports continuous capture at roughly 20x as many pixels as the PS3Eye, but at 1/120th of the frame rate.

Without further ado, here’s the video documentation (I recommend you choose either 720p or 1080p for viewing).

[Embedded video]

Notes:

Capture

Driver

  • UVC device (capture supported on all major desktop OSs)
  • As of 1st April 2011, there is no way to programmatically control the C910 from OS X, but this is likely to come soon (see here and here)
  • Programmatic control from Windows through DirectShow. I recommend Theo Watson’s videoInput class for C++, which is included with openFrameworks or available standalone (a minimal capture sketch follows this list).
  • I haven’t yet seen a way to uniquely identify a camera (each PS3Eye can report its GUID identity, which is useful for recognising individual cameras in a multi-camera system)
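
For example, a minimal capture sketch using a recent openFrameworks ofVideoGrabber (which wraps videoInput on Windows); the device index and frame rate here are assumptions for a typical setup:

#include "ofMain.h"

class CaptureApp : public ofBaseApp {
public:
    ofVideoGrabber grabber;

    void setup() {
        grabber.setDeviceID(0);            // assumed: the C910 is device 0 on this system
        grabber.setDesiredFrameRate(10);   // full-resolution capture runs at low frame rates
        grabber.initGrabber(2592, 1944);   // the C910's full sensor resolution
    }

    void update() {
        grabber.update();                  // fetch a new frame if one is available
    }

    void draw() {
        grabber.draw(0, 0);                // draw the latest frame
    }
};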

Compression options

  • YUY2 (YUV 4:2:2) = lossless luminance, half resolution colour
  • MJPG = lossy, but supports higher frame rates than YUY2 since it requires less bandwidth

Programmatic control of

  • Motorised focus
  • Shutter speed (aka exposure)
  • Gain (aka brightness)
  • ‘Hacky’ Region of Interest (ROI) through digital Zoom, Pan, Tilt (see the sketch after this list)
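
A minimal sketch of driving these settings from C++ on Windows, assuming the videoInput library’s setVideoSettingCamera / setVideoSettingFilter calls and its prop* members; the device index and values are illustrative only:

#include "videoInput.h"

int main() {
    videoInput vi;
    int device = 0;                       // assumed: the C910 is device 0
    vi.setupDevice(device, 2592, 1944);

    // Camera-control properties: focus, exposure (shutter), digital zoom/pan/tilt
    vi.setVideoSettingCamera(device, vi.propFocus,    153);  // ~8 cm, from the focus table below
    vi.setVideoSettingCamera(device, vi.propExposure, -5);   // illustrative shutter value
    vi.setVideoSettingCamera(device, vi.propZoom,     150);  // 'hacky' ROI via digital zoom

    // Filter properties: gain / brightness
    vi.setVideoSettingFilter(device, vi.propGain, 64);

    vi.stopDevice(device);
    return 0;
}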

Focus

  • ~12 discrete focus steps (i.e. focus control is NOT continuous)
  • Furthest focus point is ~70cm; beyond this everything is classed as ‘infinity’
  • With sharpening turned off (i.e. getting more of the ‘raw’ image), we see a general lack of focus on surfaces other than at discrete steps
  • Closest macro focus at 3.5cm

Focus table [control value 0-255 / distance (cm)]; a small lookup helper follows the table

  • 255 / 3.5
  • 238 / 3.8
  • 221 / 4
  • 204 / 4.3
  • 187 / 5.3
  • 170 / 6.4
  • 153 / 8
  • 136 / 10.5
  • 119 / 15
  • 102 / 25
  • 85 / 40
  • 68 / 51
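
As an example of using this table, a small helper (hypothetical, with the values copied from above) that picks the control value whose calibrated distance is nearest to a requested focus distance:

#include <cmath>
#include <cstdio>

struct FocusStep { int control; float distanceCm; };

// Calibration pairs from the table above: control value -> focus distance (cm)
const FocusStep focusTable[] = {
    {255, 3.5f}, {238, 3.8f},  {221, 4.0f},  {204, 4.3f},  {187, 5.3f}, {170, 6.4f},
    {153, 8.0f}, {136, 10.5f}, {119, 15.0f}, {102, 25.0f}, {85, 40.0f}, {68, 51.0f}
};

// Returns the focus control value whose calibrated distance is closest to the request
int focusControlForDistance(float distanceCm) {
    int best = focusTable[0].control;
    float bestError = std::fabs(focusTable[0].distanceCm - distanceCm);
    for (const FocusStep& step : focusTable) {
        float error = std::fabs(step.distanceCm - distanceCm);
        if (error < bestError) { bestError = error; best = step.control; }
    }
    return best;
}

int main() {
    std::printf("%d\n", focusControlForDistance(12.0f)); // prints 136 (the 10.5 cm step)
}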


Kinect + Projector experiments

January 12th, 2011

Using Padé projection mapping to calibrate Kinect’s 3D world with a projector.

  1. Using the Kinect camera, we can scan a 3D scene in realtime.
  2. Using a video projector, we can project onto a 3D scene in realtime.

Combining these, we re-project images onto geometry to create a new technique for augmented reality.

Previous videos (for process)

[Embedded videos]

The pipeline is:

  1. Capture Depth at CameraXY (OpenNI)
  2. Convert to image of WorldXYZ
  3. Padé transformation to create WorldXYZ map in ProjectorXY
  4. Calculate NormalXYZ map in ProjectorXY (a CPU sketch of this step follows the list)
  5. Gaussian Blur X of NormalXYZ in ProjectorXY
  6. Gaussian Blur Y of NormalXYZ in ProjectorXY
  7. Light calculations on NormalXYZ, WorldXYZ maps in ProjectorXY
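
As an illustration of step 4 (a CPU sketch only; in the real pipeline this is a shader, and the data layout here is an assumption), a normal can be estimated at each projector pixel by crossing the finite differences of the WorldXYZ map:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                            a.z * b.x - a.x * b.z,
                                            a.x * b.y - a.y * b.x}; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0 ? Vec3{v.x / len, v.y / len, v.z / len} : Vec3{0, 0, 0};
}

// world: WorldXYZ map in ProjectorXY, row-major, size width*height
std::vector<Vec3> normalMap(const std::vector<Vec3>& world, int width, int height) {
    std::vector<Vec3> normals(world.size(), Vec3{0, 0, 0});
    for (int y = 0; y + 1 < height; y++) {
        for (int x = 0; x + 1 < width; x++) {
            Vec3 p  = world[y * width + x];
            Vec3 dx = sub(world[y * width + (x + 1)], p);   // difference to right neighbour
            Vec3 dy = sub(world[(y + 1) * width + x], p);   // difference to lower neighbour
            normals[y * width + x] = normalize(cross(dx, dy));
        }
    }
    return normals;
}

The two Gaussian blur passes (steps 5 and 6) then smooth this normal map before the lighting calculations.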

Nice new UI ideas

December 16th, 2010
[Embedded video]

via http://procrastineering.blogspot.com/

Ishihara

December 16th, 2010


A story about being colourblind. Beautiful.

Link at Design Korea 2010

December 14th, 2010

Here’s our most recent project, which closed two days ago at Design Korea 2010, Seoul.

openFrameworks <3’s VVVV

November 29th, 2010
[Embedded video]

Sharing graphics assets between openFrameworks and VVVV using shared memory.

It should also be possible to share DirectShow video assets from VVVV back to openFrameworks using the same method.

Proof of concept code available at http://code.kimchiandchips.com as ofxSharedMemory
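
For reference, a minimal sketch of the underlying mechanism on Windows (not the ofxSharedMemory code itself; the mapping name and frame size are assumptions): one process creates a named file mapping and writes pixels into it, and the other opens the same name and reads.

#include <windows.h>
#include <cstring>
#include <vector>

const char* kName  = "SharedFrame";            // hypothetical mapping name, shared by both apps
const int   kWidth = 640, kHeight = 480;
const DWORD kBytes = kWidth * kHeight * 4;     // RGBA frame size

int main() {
    // Create (or open, if the other process created it first) the named region
    HANDLE mapping = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                        PAGE_READWRITE, 0, kBytes, kName);
    if (!mapping) return 1;

    unsigned char* shared = (unsigned char*)MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS,
                                                          0, 0, kBytes);
    if (!shared) { CloseHandle(mapping); return 1; }

    // Sender side: copy each new frame into the shared region
    std::vector<unsigned char> frame(kBytes, 0); // in practice, pixels from an ofFbo / ofPixels
    std::memcpy(shared, frame.data(), kBytes);

    UnmapViewOfFile(shared);
    CloseHandle(mapping);
    return 0;
}

A real implementation also needs some form of synchronisation so the reader never sees a half-written frame.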

What makes your day?

November 9th, 2010

What Makes Your Day? is a short film on happiness by Kingston University animation student Napatsawan Chirayukool.

Mockup

November 7th, 2010

We are making an interactive projection mapping installation for Design Korea 2010. Here’s the first mockup, made with cardboard boxes.