Using Kinect for real-time mapping – Part I: Introduction

Just to be on the safe side: mapping is the art of confining a projection (let's stick with video projection for now) to a specific shape. Say you project onto an orb. Some part of the (rectangular) projection will show up on the orb, but some will show up on the wall behind it. So you mask the latter parts of your projection so that only the orb is lit. That's what we call mapping.

Mapping is rather easy for a projection on a fixed object; there are plenty of tools for it. Mapping onto a moving object can get rather complicated. Say I have a moving object in an art installation and want to map onto it. The standard approach would be to program the object's motion and derive the mapping from the same data. That can involve quite a lot of math. It also leads to errors if, say, a motor is not 100% accurate and the deviations add up. And it becomes impossible altogether if you do not know exactly how the object will move (say, wind influences your object).

A rather cool approach is to use a Kinect. With its ability to deliver an image with distance information for each pixel (a "depth image"), you can easily capture the silhouette of an object in maaaany settings. This lets you create a mask in real time and use it to map your video.
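To make the idea concrete, here is a minimal sketch (in Python with numpy) of the masking step: threshold the depth image to the distance band the object lives in, and use the resulting silhouette to black out everything else in the video frame. The depth capture itself (via libfreenect, OpenNI, or similar) is not shown, and the distance thresholds are placeholder assumptions, not values from an actual setup.

```python
# Sketch: turn a Kinect depth frame into a projection mask.
# Assumes the depth frame is already available as a 2D numpy array of
# distances in millimetres; the capture itself is not shown here.
import numpy as np

def depth_to_mask(depth_mm, near_mm=500, far_mm=1500):
    """Return a boolean mask: True where a pixel lies inside the
    distance band in which we expect the object to be."""
    return (depth_mm > near_mm) & (depth_mm < far_mm)

def apply_mask(video_frame, mask):
    """Black out every pixel of the video frame outside the object's
    silhouette (video_frame: H x W x 3, mask: H x W)."""
    return video_frame * mask[:, :, np.newaxis]

if __name__ == "__main__":
    # Fake data, just to show the shapes involved: a 480x640 depth
    # image and a matching RGB video frame.
    depth = np.full((480, 640), 3000, dtype=np.uint16)  # background at ~3 m
    depth[100:300, 200:400] = 1000                      # "object" at ~1 m
    frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)

    mask = depth_to_mask(depth)
    masked = apply_mask(frame, mask)  # only the object region stays lit
```

In a real installation this loop would run once per frame, so the mask follows the object as it moves.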

In the following posts I will collect detailed examples of how I did this.