
MAPPING THROUGH LIVE CAMERA INPUT

Posted: Tue Mar 01, 2011 6:57 pm
by lamepantallas
Hello M8ers,

I am in the middle of brainstorming techniques for an upcoming show, and I was wondering whether it is possible to map objects (including moving objects) through a live video camera input, with the camera placed on top of the projector, and then apply, for example, the MASK LAYER module to create the transparency so the video is only visualized on the object.
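
To make the idea concrete, this is roughly what I mean (a Python + OpenCV sketch rather than an actual Modul8 module; the camera index, threshold, and file names are just placeholders):

import cv2

cap = cv2.VideoCapture(0)                      # assumed camera index
ret, frame = cap.read()                        # one frame from the camera on the projector
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # anything brighter than the threshold counts as "object"
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
    clip = cv2.imread("clip_frame.png")        # stand-in for a frame of the video content
    clip = cv2.resize(clip, (mask.shape[1], mask.shape[0]))
    # keep the video only where the mask says the object is
    masked = cv2.bitwise_and(clip, clip, mask=mask)
    cv2.imwrite("masked_output.png", masked)
cap.release()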

Since I don't have a pro camera with FW800 output to my Mac, I can't experiment. Anyone?

Thanks!

gabbo

Posted: Tue Mar 01, 2011 8:04 pm
by deepvisual
Delay.
I just checked mine today: 6 frames.
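
If anyone wants to check their own rig, here is one rough way to measure it (a Python + OpenCV sketch under assumed settings, not how I measured mine): flip the screen from black to white and count camera frames until the change shows up.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)               # assumed camera index
black = np.zeros((480, 640), dtype=np.uint8)
white = np.full((480, 640), 255, dtype=np.uint8)

cv2.imshow("latency test", black)
cv2.waitKey(500)                        # let the camera settle on black
cv2.imshow("latency test", white)
cv2.waitKey(1)                          # push the white frame to the screen

frames = 0
while frames < 300:                     # safety cap so we never loop forever
    ret, frame = cap.read()
    if not ret:
        break
    frames += 1
    if frame.mean() > 100:              # camera finally sees the white screen
        break

print("delay: roughly", frames, "camera frames")
cap.release()
cv2.destroyAllWindows()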

Posted: Tue Mar 01, 2011 9:22 pm
by VjKg
I am working on the same thing, and my problem has been avoiding video feedback. When you project onto an object and then point a camera at that object, the feedback loop makes the image unrecognizable. My path has led me to the Xbox Kinect with Quartz Composer and Syphon. I got it to work, but the Kineme Kinect patch does not give you control over the sensor's depth range. The Tuio kinect app has an option for setting the front and back distances, but that is for controlling something like a mouse and its output does not go into QC.

The 1024_Architecture guys have a more advanced Kinect patch that allows this kind of control, called the "_1024_KinectPrimitivePoint QCplugin". I believe the "Z min" and "Z Max" settings should let you capture only objects within the zone you select. There will still be some feedback, but it will be limited to the moving object rather than the entire background.
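
For anyone who wants to see what that Z min / Z max zone does before installing the plugin, here is the thresholding idea in a few lines of Python + numpy (just the concept, not the actual _1024_KinectPrimitivePoint internals; the distances are made up):

import numpy as np

def depth_mask(depth_mm, z_min=800, z_max=1500):
    # depth_mm: 2-D array of Kinect depth readings in millimetres
    # keep only pixels whose depth falls inside the [z_min, z_max] zone
    return ((depth_mm >= z_min) & (depth_mm <= z_max)).astype(np.uint8) * 255

# fake depth frame for illustration: an "object" at 1 m in front of a far wall
depth = np.full((480, 640), 3000, dtype=np.uint16)   # background at 3 m
depth[200:300, 250:400] = 1000                       # object at 1 m
mask = depth_mask(depth)
print(mask[250, 300], mask[0, 0])                    # 255 inside the zone, 0 outside

Anything outside the zone (like the projection surface behind the object) drops out of the mask, which is why the feedback gets limited to the moving object.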