BAM Maps – Introduction

August 25th, 2012

A general problem with projecting onto objects (i.e. non-planar scenes) is the correct assignment of pixel brightness to every surface section on the object.

Here’s a short presentation on a technical idea we’re experimenting with to manage this scenario automatically (we suggest pausing the video on longer slides):

[Embedded video]

The BAM render pass finds the total brightness available at each surface element, allowing us to normalise how much brightness each element actually receives by modulating what each projector sends to it.
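As a rough illustration of that normalisation, here’s a minimal C++ sketch (hypothetical names throughout, not the actual BAM implementation): each projector contributes in proportion to its available brightness at the element, scaled so the contributions sum to the target.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical sketch of the BAM normalisation idea: given each
// projector's available brightness at one surface element, scale what
// each projector sends so the contributions sum to the target.
float bamWeight(const std::vector<float>& available, int projector,
                float targetBrightness) {
    float total = 0.0f;
    for (float a : available)
        total += a; // total brightness available from all projectors

    if (total <= 0.0f)
        return 0.0f; // element not covered by any projector

    // Contribute in proportion to availability, normalised to target.
    return targetBrightness * (available[projector] / total);
}

int main() {
    // Two projectors overlap on this element; projector 0 hits it more
    // directly, so more of its light is available there.
    std::vector<float> available = {0.8f, 0.4f};
    for (int p = 0; p < 2; ++p)
        std::printf("projector %d sends %.2f\n",
                    p, bamWeight(available, p, 1.0f));
    return 0;
}
```

In overlap regions this produces the edge blend automatically; in shadowed regions one projector’s availability drops to zero and the others take over.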

Features include:

  • Automatic edge blending (handover between 2 or more projectors)
  • Tearing reduction (kill projector areas that are tearing, fill in with projectors that are not tearing)
  • Shadow filling (where projectors are blocked from covering a region, other projectors fill in the shadow)
  • Lighting model (rationally amplify light sent to obtuse normals and far away surfaces)

BAM stands for ‘Brightness Assignment Map’, though currently it’s more of a ‘Brightness Availability Map’. The next development step (following more testing and refining of the existing model) is to perform projector assignments for each pixel within the BAM render pass (for example, using an output bit mask). This would allow for more active projector prioritisation (e.g. more actively assigning brightness between projectors when one is markedly better than the other for a given surface element).
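Purely as speculation on what such an output bit mask might look like (none of these names come from the actual project):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Speculative sketch of per-pixel projector assignment: bit p of the
// returned mask is set if projector p should light this surface
// element. Only projectors within a tolerance of the best candidate
// are assigned, so markedly worse projectors are dropped while
// near-equal ones still blend.
uint32_t assignProjectors(const std::vector<float>& quality,
                          float tolerance) {
    float best = 0.0f;
    for (float q : quality)
        if (q > best)
            best = q;

    uint32_t mask = 0;
    for (std::size_t p = 0; p < quality.size(); ++p)
        if (quality[p] >= best - tolerance)
            mask |= 1u << p; // assign projector p to this element
    return mask;
}
```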

Elliot

Point and Line to Plane

July 24th, 2012

Adopting a theory for your art isn’t inevitable.
However, it sometimes lets you give an answer to the “why” question.
(It also gives you an opportunity for superficial pretence, though…)
This book is known as the best introduction to abstract art and composition.
But personally, I find it more interesting to see the artist’s personal, emotional and non-scientific way of observing as a route to understanding the basic rules of visual form.

by Wassily Kandinsky.

 http://bit.ly/K9ydCj

ScreenLab 0x01

June 25th, 2012

Today we’re finally releasing the full results of ScreenLab 0x01, which I was involved in curating at The University of Salford’s MediaCityUK campus. For full details, check the article on The Creators Project.

VVVV.Tutorials.Mapping.3D

May 22nd, 2012

Take a quick 3D scan of some stuff with ReconstructMe, then projection map onto it with CalibrateProjector.

[Embedded video]

Requires:

Quiet but busy

May 22nd, 2012

At the kimchips’ HQ we’ve been working on a long-term intensive project, quietly resulting in a backlog of useful code projects that need documenting. Here’s a list of some of what’s been released since we last talked:

  • VVVV.Nodes.Image [src] [alpha release] – nodes for threaded image processing in VVVV with support for OpenCV, OpenNI, FlyCapture (Point Grey), CLEye (PlayStation 3 Eye), FreeImage and DirectShow (via Theo Watson’s videoInput class), along with preliminary support (to be completed in future) for MS Kinect SDK, RGBDToolkit and GStreamer.
  • ofxCvGui2 [src] – a rewrite of ofxCvGui that is more object-oriented internally, more procedural in usage, and altogether cleaner (visually, in usage and in internal code) for CV-like apps written in openFrameworks (more to come).
  • ofxGraycode [src] – a very easy to use implementation of graycode structured light in openFrameworks (no other addons / libs required; see the sketch after this list).
  • ofxTSP [src] – solve the Travelling Salesman Problem and other route-finding tasks (currently using slow brute force methods, useful for <10 nodes, but could be extended with more optimised methods).
  • VVVV.Externals.StartupControl [src] [release] – automate unattended startups in Windows (designed specifically for interactive installations).
  • VVVV.Nodes.TableBuffer [src] – spreadsheet-like functionality in VVVV for viewing, editing (user + patch), saving/loading/autosaving spread data.
  • VVVV.Nodes.GL [src] – test ground for OpenGL functionality within VVVV (renderer, some primitives and textures implemented).
  • ofxRay [src] – openFrameworks addon for ray mathematics, hit testing, triangulation, projector simulation.
  • ofxInteractiveNode [src] – ‘Unity-like’ editing of ofNode objects in openFrameworks during runtime (to be used in conjunction with ofxGrabCam).
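As an aside on the graycode technique behind ofxGraycode: projector pixel coordinates are encoded as binary-reflected Gray code patterns, so adjacent columns differ by exactly one bit and decoding errors stay local. Here’s a minimal self-contained sketch of the principle (not ofxGraycode’s actual API):

```cpp
#include <cstdint>
#include <cstdio>

// Binary-reflected Gray code: adjacent values differ by one bit.
uint32_t grayEncode(uint32_t n) {
    return n ^ (n >> 1);
}

// Decode by cascading XORs (prefix XOR from the top bit down).
uint32_t grayDecode(uint32_t g) {
    g ^= g >> 16;
    g ^= g >> 8;
    g ^= g >> 4;
    g ^= g >> 2;
    g ^= g >> 1;
    return g;
}

int main() {
    // One pattern is projected per bit; thresholding the camera image
    // recovers, per camera pixel, the projector column that lit it.
    for (uint32_t column = 0; column < 8; ++column) {
        uint32_t g = grayEncode(column);
        std::printf("column %u -> gray %u -> decoded %u\n",
                    column, g, grayDecode(g));
    }
    return 0;
}
```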

ofxGrabCam

November 10th, 2011

Updated video:

[Embedded video]

A simple camera for openFrameworks I threw together in transit. Name suggestions welcome!

ofxGrabCam is a camera for browsing your 3D scene. It ‘picks’ the xyz position under the cursor (using the depth buffer to get z).
Rotations and zoom (left and right mouse drag) are then performed with respect to that xyz position.
Inspired by Google SketchUp.
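The picking step can be sketched roughly like this (a simplified approximation in openFrameworks, not ofxGrabCam’s exact code):

```cpp
#include "ofMain.h"

// Read the depth buffer under the mouse, then unproject the screen
// position back into world space through the camera.
ofVec3f pickUnderCursor(ofCamera& camera) {
    float depth = 1.0f;
    // The depth buffer is sampled in GL coordinates (origin bottom-left).
    glReadPixels(ofGetMouseX(), ofGetHeight() - 1 - ofGetMouseY(),
                 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    // depth == 1.0 means the cursor hit the far plane (empty space);
    // a real implementation needs a fallback here (see the update below).

    // screenToWorld expects z in normalised device coordinates, so
    // remap the 0..1 depth buffer value to -1..1.
    ofVec3f screen(ofGetMouseX(), ofGetMouseY(), depth * 2.0f - 1.0f);
    return camera.screenToWorld(screen);
}
```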

P.S. this is probably not much use for point clouds / other sparse datasets where there’s nothing to ‘grab’ onto.

Update: if it doesn’t find anything under the cursor, it automatically spirals outwards until it does, so sparse objects / inaccurate clicks are now more workable.

Available on github at http://github.com/elliotwoods/ofxGrabCam

You should also check out https://github.com/Flightphase/ofxGameCamera by the ever obvious jim.

ZOTAC DisplayPort to dual HDMI for multi-projector media installations

November 9th, 2011

I’ve had this sitting in my suitcase for a while; finally, here are the results of trying it out (I hope to reword some of this later, as I’ve just got off a 12-hour flight).

This adapter belongs in the same cupboard as a Matrox DualHead2Go (part of their GXM product line), but is made by Zotac, who are new to this type of device. The idea: take one video output socket on your computer, plug in one of these, and get two outputs (in our case, for two projectors).

When you have everything connected, the 2 outputs appear to the computer as 1 large output (e.g. if you have 2 XGA projectors attached to the Zotac, then the computer will see a 2048*768 video head attached to its output). This way, you can send separate signals to the 2 projectors (the left side goes to projector 1, the right side to projector 2).
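On the software side nothing special is required: you render the two projectors’ images side by side into the one spanned head. A minimal openFrameworks-style sketch, assuming two XGA projectors (the per-projector drawing function is hypothetical):

```cpp
#include "ofMain.h"

// The Zotac presents both projectors to the OS as one 2048*768 head,
// so each projector's image is rendered into its own half.
class SpannedApp : public ofBaseApp {
public:
    void draw() override {
        ofViewport(0, 0, 1024, 768);    // left half -> projector 1
        drawProjector(0);

        ofViewport(1024, 0, 1024, 768); // right half -> projector 2
        drawProjector(1);
    }

    void drawProjector(int index) {
        // Hypothetical per-projector scene rendering.
        ofDrawBitmapString("projector " + ofToString(index + 1), 20, 20);
    }
};

int main() {
    ofSetupOpenGL(2048, 768, OF_FULLSCREEN); // the spanned output
    ofRunApp(new SpannedApp());
}
```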

At the moment I prefer to use HDMI because:

  • Sharp, consistent signal / ‘lossless’ (we can use the terms HDMI and DVI interchangeably when discussing signal; they’re generally the same thing with different ends on the cable, though HDMI can support higher frequencies whilst DVI can support dual-link)
  • Competitive market of HDMI products (repeaters, cables, adapters. Long DVI cables are expensive whilst generally offering the same performance)
  • Decent physical connector (I find the screw-in D-Sub style connectors used for VGA/DVI clunky and easily damaged)

Test setup

Here’s an image of the test setup:

  • ZOTAC mini DisplayPort to dual HDMI (ZT-MDP2HD)
  • Shuttle X58 XPC
  • XFX ATI 6770 Eyefinity 5 Mini-DP single slot video card (5 outs on one card for under £100!)
  • 2 x Optoma EX539 projectors (native XGA, support 120Hz)
  • 2 x 15meter HDMI straight cable

Initialisation

When you plug everything together, nothing happens: it’s only when you switch the projectors on that the computer starts to recognise that a display is attached. In fact, if you turn on 1 projector you get an XGA output at the computer; only when both projectors are turned on does the 2*XGA output appear in the PC settings.

This is in contrast to the Matrox which offers you the relevant resolutions directly on connection of the Matrox to the computer (the connection state of the projectors isn’t generally reported to the computer’s graphics card). This is advantageous for reliability as the state of the system from the PC’s point of view remains constant.

The behaviour of the Zotac would generally require you to turn on the projectors before turning on the PC when running an installation which starts on boot.

Supported modes

The specification quotes 2*HD (3840*1080) as supported; however, it was not offered to me even though the projectors support it (I’ve tested and used 1920*1080 on these projectors before over a 15meter signal length). Only the projectors’ native resolution (XGA) was offered to the PC for the dual modes. For single head modes, more resolutions were offered.

The Zotac should be able to support 120Hz XGA (XGA@120Hz needs slightly less pixel bandwidth than HD@60Hz: 1024*768*120 ≈ 94Mpixels/s versus 1920*1080*60 ≈ 124Mpixels/s), but this was not supported / offered.

Selecting XGA@120Hz resulted in the signal being passed through to 1 projector only; this gives the same behaviour as an ordinary Mini-DP to DVI adapter when used with this projector.

In fact, the adapter works perfectly well as a single HDMI signal adapter. This somewhat explains the strange initialisation (it seems to switch personalities between a single and dual head adapter). Since it’s only a few pounds more than an active Mini-DP > DVI > HDMI adapter chain, this becomes very attractive.

Conclusion

I like it!

Advantages over Matrox:

  • Cost
    • The Zotac is £40 vs the Matrox at £100 / £150 / £250 (VGA in VGA out / VGA in – DVI or VGA out / DVI in – DVI out)
    • You save on the cost of adapters (Apple computers and ATI EyeFinity graphics cards commonly have DisplayPort sockets, projectors commonly don’t, you need adapters. For EyeFinity, DVI/HDMI adapters generally must be of the more expensive ‘active’ type).
  • Elegant
    • One small tidy device splits the signal into 2 HDMI feeds
    • Doesn’t require USB bus power
  • Signal strength
  • I generally found that the Matrox units (tested on TH2G Digital) can’t push a DVI signal over a 15meter cable; active Mini-DP to DVI adapters generally can, and this Zotac dual adapter can (in my non-noisy environment).

Disadvantages:

  • HDMI only
    • Matrox offers VGA, DVI and DisplayPort outputs (depending on model)
  • Strange initialisation
    • Could be a problem with long term installations that need to be started up every day by different people
  • Less mature
    • The Matrox devices have lots of hours clocked up, a large user base + lots of software updates
  • Only the native dual mode listed
    • It is somewhat of an advantage that it looks up the native mode and offers that, but in some situations it’s vital to send non-native signals
  • No software interface
    • If you want it!

Unknowns:

  • Supported by EyeFinity span modes? (should be equal to Matrox)
  • Can you connect more than 2 of these to 1 EyeFinity card without active/passive adapter issues? (should be fine)
  • Long term performance / reliability
  • Latency / motion artefacts
  • Support for all resolutions (1400*1050, 1680*1050, 1280*800, 2048*1080)

Notes from Art&&Code : Calibrating Projectors and Cameras: Practical Tools

November 3rd, 2011

Photo by Kyle McDonald

Overview

The Kinect device inputs a realtime 3D scan of a world scene.

A projector outputs a realtime 2D projection onto a 3D world scene.

Using OpenCV’s calibrateCamera function, we are able to calculate the intrinsics (focal length, lens offset) and extrinsics (projector position, rotation) of a projector relative to the 3D scan of the Kinect (a minimal sketch follows after the list below).

We project a 3D virtual world scene onto a 3D real world scene by presuming that they are geometrically consistent (thanks to the Kinect) and knowing the intrinsics and extrinsics of the projector.

We can think of this as either:

  • Calibrating a virtual camera inside the computer against the scanned 3D scene, such that the virtual camera exactly aligns with the real projector or
  • Calibrating a real projector in the real world against a real 3D scene using the Kinect to take measurements
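To make the calibrateCamera step concrete, here’s a minimal sketch of the OpenCV call (variable names and the initial intrinsics guess are illustrative, not the workshop code):

```cpp
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/core/core.hpp>
#include <vector>

int main() {
    // Correspondences gathered during calibration: 3D points measured
    // by the Kinect, and where each appears in the projector's image.
    std::vector<std::vector<cv::Point3f>> worldPoints(1);
    std::vector<std::vector<cv::Point2f>> projectorPoints(1);
    // ... fill both from the scan and the projected calibration points ...

    // With non-planar 3D points, calibrateCamera needs an initial
    // guess at the intrinsics to refine.
    cv::Size projectorSize(1024, 768); // assumed XGA projector
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        1500, 0, projectorSize.width / 2.0,
        0, 1500, projectorSize.height / 2.0,
        0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);
    std::vector<cv::Mat> rotations, translations;

    // The projector is treated as an inverse camera: this solves for
    // its focal length and lens offset (intrinsics) plus its position
    // and rotation (extrinsics) in the Kinect's coordinate frame.
    double reprojectionError = cv::calibrateCamera(
        worldPoints, projectorPoints, projectorSize,
        cameraMatrix, distCoeffs, rotations, translations,
        cv::CALIB_USE_INTRINSIC_GUESS);

    (void) reprojectionError;
    return 0;
}
```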

Demo

[Embedded video]

Walkthrough

[Embedded video]

VVVV Patches

Patches and plugins are available at http://www.kimchiandchips.com/files/workshops/artandcode/ArtAndCode-CalibrateProjector.zip

(the old link went to github, but there seem to be some bugs with their download system at the moment https://github.com/elliotwoods/artandcode.Camera-and-projector-calibration/downloads)

Inside is a plugin which wraps EmguCV and OpenNI (you’ll need to have a recent version of OpenNI installed).

There are also 2 patches:

  1. CalibrateCamera
  2. CalibrateProjector (WARNING: the renderer will open fullscreen on the ‘second’ screen to the right of the main screen, e.g. the projector)

Wiki

Workshop notes are available here

Github

openFrameworks code here (we will be adding / amending / breaking / creating in that repo; you might want to check out the artandcode-end tag).

Kimchi and Chips is looking for an assistant.

October 12th, 2011

Period: from now until January 30th, 2012.


Kimchi and Chips is looking for an assistant to join us on an installation project opening next January.

We’re looking for a creative person who will think through and work out the whole process with us from the very first step. To be a little more specific…

  • Someone who can speak English and Korean
  • Or someone whose English is rough but whose good character means they can still communicate well with Elliot
  • Someone skilled with 3D software
  • Someone who loves and enjoys art more than anyone
  • Someone who, with great curiosity, can adapt well to the world of technology
  • Someone who likes going on shopping trips to Cheonggyecheon and Sewoon Sangga
  • Someone who can befriend the shopkeepers there
  • Someone who enjoys hammering things together and taking them apart
  • Someone capable whose luck hasn’t lined up, leaving them with plenty of time right now – full time job
  • Someone who can commute to Bucheon
  • Someone strong enough to set up the installation on site


Register your interest

VVVV + MapTools / projection mapping workshop at MadLab Manchester

September 12th, 2011

Come with us, and learn how to project onto buildings, branches and bullets.

We’ll be presenting and distributing some pre-release bits of MapTools, our open source projection mapping toolkit. Our aim is to revolutionise (rather than simply facilitate) projection mapping as a technique. The full toolset is open source and contains features that are not publicly available today:

  • MapTools-Remote: remote-control mapping for accurate and speedy mapping – roughly 10× quicker and more accurate.
  • MapTools-Pade: true 3D geometry mapping (still distorting 2D shapes onto things? You’re missing out!). We present our own Padé system, which produces the quickest and most accurate 3D projection calibration of all the tools available.
  • MapTools-SL: advanced structured light techniques which we’ve been working on (and publishing) for the last 2 years, supporting true 3D scanning with a range of cameras and operating systems.
The main part of the workshop will cover VVVV essentials, then move on to using MapTools on that platform.

Some details on MapTools are available on Create Digital Motion.

For more workshop details, check here