I’ve tested the Kinect in Touch, and it’s awesome. Especially the skeleton tracking. So easy. I also made an OSC patch that takes all the data and sends it to Max/MSP.
I haven’t managed to do anything with the depth map yet, but it’s fully functional.
Now I’m trying to import a 3D model and animate it with the Kinect. Any advice?
I’m adding support for near mode right now. However, note that near mode with the Kinect 1.0 runtime doesn’t support skeletal tracking, only full body (one position per person).
Looks like the new Kinect for Windows 1.5 runtime, due around the end of May, will allow some types of skeletal tracking in near mode, including a mode for seated use. It’s also meant to be backwards compatible with applications still using 1.0.
I assume you would use the Chroma Key TOP to select the distance range you want to isolate, then feed the result to a blob-detect TOP which is then linked to an Info CHOP or DAT. If memory serves me right, that should give you the number of blobs and the centroid of each?
And a PS question: is there an easy-ish way of generating a point cloud out of images? I’ve converted the Kinect depth to camera-space positions, but haven’t found a way to construct points besides doing copies or scatters…
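For reference, the depth-to-camera-space conversion mentioned above can be sketched in plain Python with NumPy. The intrinsics here (focal lengths, principal point) are placeholder values, not the Kinect’s actual calibration — substitute your own:

```python
import numpy as np

# Hypothetical camera intrinsics -- placeholder values, not real calibration.
FX, FY = 594.2, 591.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point

def depth_to_points(depth):
    """Back-project an (H, W) depth map (metres) to an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth / FX   # pinhole back-projection, per pixel
    y = (v - CY) * depth / FY
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Example: a flat wall 2 m away fills the whole frame.
cloud = depth_to_points(np.full((480, 640), 2.0))
```

The resulting `(H*W, 3)` array is exactly the per-pixel position data you’d want to feed into whatever point-construction step Touch ends up offering.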
Having huge fun. Gotta remember to sleep and get work done, but the Kinect with Touch so far is so easy. So now I’m getting greedy: how would I get to the speech recognition stuff with the Kinect? I assume that this uses the stuff in the OS, not in the Kinect hardware itself? I like the idea of kids yelling as well as leaping around in front of this thing…
yeah, I know, bed time…
[url]https://vimeo.com/42034276[/url]
(might be half an hour before Vimeo puts it up)
It’s actually two-player, but everyone’s sensibly in bed.
In the above .toe, kinect_imagespace.13.PAINT_two_player.12.toe, there are a bunch of operators in /project1/p1 colored blue that I want to ask people about, to see if there’s a better way to do it…
The channels from the Kinect have names like p1/hand_r:tx, p1/hand_r:ty, etc., plus some like p1_hand_l_tracked and p1:ty.
For the component, I want to make each instance the same inside so p2 can be a clone of p1.
So, I need to strip the p1/, p1: or p1_ (or p2/, p2:, p2_) prefix from the channel name. I’m also replacing the last ‘:’ with an underscore for tidiness.
So far, I’m converting to DATs then using a chain of Evaluate DATs to clean things up.
This makes it quite readable, but I’m wondering if there’s a more efficient way to do it all in one hit, say by using an expression in a Rename CHOP?
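As a sketch of the renaming rule described above (plain Python for illustration, not an actual Rename CHOP expression — `clean_channel` is a hypothetical helper), the whole cleanup can be done with two regex substitutions:

```python
import re

def clean_channel(name):
    """Strip a leading p<N>/ , p<N>: or p<N>_ prefix, then turn the
    last ':' into an underscore, e.g. 'p1/hand_r:tx' -> 'hand_r_tx'."""
    name = re.sub(r'^p\d+[/:_]', '', name)     # drop the player prefix
    name = re.sub(r':([^:]*)$', r'_\1', name)  # replace the last ':' only
    return name

# e.g. clean_channel('p1_hand_l_tracked') and clean_channel('p2:ty')
# strip the prefixes the same way.
```

Something equivalent could in principle live in a single expression wherever Touch evaluates channel-name expressions, which would collapse the chain of Evaluate DATs into one node.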
… also, I notice that, periodically, /project1/p1/chopto1 glitches and adds a second row.
Any idea what’s causing that? I’m using select3 to get rid of it, but it might be a bug when converting hundreds of channels? I don’t see anything odd in the CHOP that feeds the CHOP to DAT.
The Xbox Kinect has slightly different hardware and firmware, and won’t work with the K4W SDK and runtime that’s used for Touch’s support of the Kinect.
I bought the new one and it’s really great. I will sell off my old Xbox one while there are Mac and Linux folk still happy to get one secondhand.
If you know students or teachers in the USA, they can get the K4W unit at half price.
However, there are a few ways to get the Xbox one going with Touch.
If you have Touch FTE Commercial or Pro (needed for the shared-memory capability), there is some rough software that will feed the Kinect’s depth image in via shared memory; just search the Touch forums to find it.
To just get the skeleton data via OSC messages, look for something called OSCeleton via Google.
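As a rough illustration of what the receiving end of those OSC skeleton messages might look like — the message shape here (`name, user, x, y, z` args) is an assumption, so check the tool’s actual format before relying on it:

```python
# Minimal sketch of a handler for skeleton-joint OSC messages.
# ASSUMPTION: messages arrive as address '/joint' with args
# (joint_name, user_id, x, y, z); the real format may differ.

skeleton = {}  # (user_id, joint_name) -> (x, y, z)

def on_joint(address, name, user, x, y, z):
    """Store the latest position for one joint of one tracked user."""
    skeleton[(user, name)] = (x, y, z)

# Simulate one incoming message, as if dispatched by an OSC server:
on_joint('/joint', 'r_hand', 1, 0.2, -0.5, 2.1)
```

In practice you’d register a handler like this with whatever OSC receiver you use (or read the messages with Touch’s own OSC input), and poll `skeleton` each frame.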
If you hunt around for alternative Windows drivers for the Kinect, there are example applications that display the camera and depth map on the screen. You can use TouchDesigner’s screen-grab TOP to bring it into Touch, as long as you have it open in a window somewhere. It’s fugly, but it works.
Generally, I reckon it’s worth saving the time and just ponying up the cash for the new one, especially if you are (or know) a student or teacher in the USA.
Well, I guess I just really wanted it to make use only of the skeleton (and I already use Pure Data to send signals to TD from another computer on the network, to avoid too many CHOPs on my machine). But I’m sure there are tons of other applications I’ll be missing. Thanks a lot for your post though, very instructive!
M$oft are saying that the 1.5 one is backwards compatible, so it ‘should’ be safe to install and still use the 1.0-based Touch stuff. However, I guess checking for a definitive answer from Malcolm et al. would be safest.
I assume the new features won’t be available until Touch gets updated for them.
New features look cool though. I’m looking forward to exploring close-range skeletal tracking - especially because currently, debugging is a pain in the butt: I have to get up and leap around the room to see if it’s working.