. i have the kinect for the xbox 360, so it's usb 2.0 and needs its own power supply.
. afaik, it sends two data streams: one is a 640x480 color webcam feed (bayer filter), the other is depth data (often converted to greyscale or color for display) - from what i gather, the depth stream can be 'fine tuned' to be more or less precise.
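. (for anyone curious what that greyscale conversion looks like, here's a minimal sketch using the open-source libfreenect C driver these apps sit on. the api has shifted between versions, so take the exact call names as approximate, and the 11-bit range / >>3 scaling are my assumptions about the raw format:)

```c
/* minimal sketch: map an 11-bit kinect depth frame to 8-bit greyscale.
   assumes the libfreenect C API; header path varies by install
   (sometimes just <libfreenect.h>). depth arrives as uint16_t,
   roughly 0..2047. */
#include <stdint.h>
#include <libfreenect/libfreenect.h>

#define W 640
#define H 480

static uint8_t grey[W * H];

/* called by libfreenect whenever a new depth frame is ready */
static void depth_cb(freenect_device *dev, void *v_depth, uint32_t timestamp)
{
    uint16_t *depth = (uint16_t *)v_depth;
    for (int i = 0; i < W * H; i++) {
        uint16_t d = depth[i] > 2047 ? 2047 : depth[i];
        grey[i] = (uint8_t)(255 - (d >> 3));   /* near = bright, far = dark */
    }
    /* hand `grey` off to whatever draws or publishes the feed */
}

int main(void)
{
    freenect_context *ctx;
    freenect_device *dev;
    if (freenect_init(&ctx, NULL) < 0 || freenect_open_device(ctx, &dev, 0) < 0)
        return 1;
    freenect_set_depth_callback(dev, depth_cb);
    freenect_set_depth_mode(dev, freenect_find_depth_mode(
        FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
    freenect_start_depth(dev);
    while (freenect_process_events(ctx) >= 0)
        ;   /* pump usb events until error / ctrl-c */
    return 0;
}
```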
. here's a screen cap i took from cocoakinect, a very basic viewer program someone uploaded -
http://www.npav.com/firstKinect.jpg - my thought is that these streams could easily be exposed as their own video feed, probably via macam or something like that.
. the tuiokinect app i downloaded (
http://code.google.com/p/tuiokinect/ ) splits the output into 4 panes (depth, b/w camera, 'zone', and the x/y of the zone -- check out the video). for this test, i was capturing that window into my machine running modul8 and moving/zooming each pane into place.
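. tuiokinect sends that zone tracking out as TUIO, which is just OSC over udp (port 3333 is the TUIO default - i haven't verified which port tuiokinect actually uses). a rough sketch of grabbing the x/y from the /tuio/2Dcur 'set' messages with liblo; the 'set' layout (session id, x, y, velocities, acceleration) is from the TUIO 1.1 2Dcur profile, with x/y normalized 0..1:

```c
/* rough sketch: listen for TUIO 2Dcur 'set' messages and print x/y.
   assumes liblo and the default TUIO port 3333. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <lo/lo.h>

/* /tuio/2Dcur carries several sub-messages; only 'set' has coordinates:
   set <session_id> <x> <y> <x_vel> <y_vel> <motion_accel> */
static int cur_handler(const char *path, const char *types, lo_arg **argv,
                       int argc, lo_message msg, void *user)
{
    if (argc >= 4 && types[0] == 's' && strcmp(&argv[0]->s, "set") == 0)
        printf("cursor %d at x=%.3f y=%.3f\n",
               argv[1]->i, argv[2]->f, argv[3]->f);
    return 0;
}

int main(void)
{
    lo_server_thread st = lo_server_thread_new("3333", NULL);
    lo_server_thread_add_method(st, "/tuio/2Dcur", NULL, cur_handler, NULL);
    lo_server_thread_start(st);
    for (;;)
        sleep(1);   /* handler runs on liblo's own thread */
    return 0;
}
```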
. there isn't a huge amount of lag in my experiments - a lot of the sluggishness in the video is me not cropping the input before applying a processor-intensive effect like IscFlame1 (it was 'flaming' the entire frame even though i was zoomed in on one small part), plus i was capturing the output from modul8, which is always a little stutter prone.
. i'm currently trying to get the code to compile (the author of tuiokinect graciously provides an Xcode project of the app), reduce the windows to 320x240 for easier capture, and add MIDI out so the x/y coordinates can control the center of some effects - see the sketch below. after getting more familiar with the TUIO libraries i might try some more interesting things.
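. the midi part should mostly be scaling: tuio gives x/y normalized 0..1, and a midi control change wants a 7-bit value. a sketch of packing a cursor position into two CC messages (cc numbers 20/21 and channel 0 are arbitrary placeholders; actually sending the bytes would go through coremidi or a virtual port):

```c
/* sketch: turn a normalized tuio x/y (0..1) into two MIDI CC messages.
   cc numbers 20/21 and channel 0 are arbitrary choices here. */
#include <stdint.h>

#define CC_X 20
#define CC_Y 21

typedef struct { uint8_t status, cc, value; } midi_cc;

static uint8_t to7bit(float v)            /* clamp 0..1, scale to 0..127 */
{
    if (v < 0.0f) v = 0.0f;
    if (v > 1.0f) v = 1.0f;
    return (uint8_t)(v * 127.0f + 0.5f);
}

/* fill two control-change messages for one cursor position;
   0xB0 is the control-change status nibble, low nibble = channel */
static void xy_to_cc(float x, float y, uint8_t channel, midi_cc out[2])
{
    out[0] = (midi_cc){ 0xB0 | (channel & 0x0F), CC_X, to7bit(x) };
    out[1] = (midi_cc){ 0xB0 | (channel & 0x0F), CC_Y, to7bit(y) };
    /* each message then goes out as 3 raw bytes */
}
```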
. i don't foresee a 'minority report' style vj interface, but i'm drawn to the depth sensing (it's IR-based, so it works in zero-light conditions) for all sorts of neat effects. adding an x/y/(z?) control to a setup could be pretty sweet as well.... but it's been a while since i dug into anything Xcode.
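. (one cheap way to get that z - just a sketch, not how tuiokinect does it - is to scan each depth frame for the nearest valid pixel and treat its position + depth as a normalized x/y/z triple, e.g. tracking an outstretched hand:)

```c
/* sketch: derive a normalized x/y/z from a depth frame by finding
   the nearest point in view. assumes libfreenect-style 11-bit depth
   where the max value (2047) means "no reading". */
#include <stdint.h>

#define W 640
#define H 480
#define NO_DATA 2047

typedef struct { float x, y, z; } point3;

static point3 nearest_point(const uint16_t depth[W * H])
{
    int best = -1;
    uint16_t best_d = NO_DATA;
    for (int i = 0; i < W * H; i++) {
        if (depth[i] < best_d) {        /* smaller raw value = closer */
            best_d = depth[i];
            best = i;
        }
    }
    point3 p = { 0, 0, 1 };             /* default: nothing in view */
    if (best >= 0) {
        p.x = (float)(best % W) / W;    /* 0..1 across the frame */
        p.y = (float)(best / W) / H;    /* 0..1 down the frame */
        p.z = (float)best_d / NO_DATA;  /* 0 = near, ~1 = far */
    }
    return p;
}
```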
-james
(a nomad. )