kinect integration

hi everyone, so far i’m liking TouchDesigner as i dive deeper into it.

can anyone tell me whether there is a proven workflow for getting basic kinect skeleton data into my projects?

ideally, i would like simple access to right and left hand positions and movement, without needing to ‘initialize’ with the ‘cactus pose’…

enlighten me! thanks…

alan

–Edit–
Since the release of the official Kinect SDK and the now built-in native Kinect support in TouchDesigner 077, I have removed my component.
–Edit–

I was just working on a simple solution via the Microsoft SDK, with help from Rob and Malcolm. Basically I took the SkeletalViewer example and injected code to send the video images via Shared Memory as well as the joint positions as UDP data.
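
To give a rough idea of what that injection amounts to, here is a minimal, hypothetical sketch of the UDP half. Markus’ actual code and wire format are not shown in this thread, so the function name, the one-line-per-joint text format, and port 7001 below are all made up for illustration; only the NUI_SKELETON_FRAME structures and the Winsock calls come from the Kinect SDK beta and Windows.

[code]// Hypothetical sketch only: the real wire format and port that Markus'
// build uses are not posted in this thread. This assumes the skeleton
// structures from the Kinect SDK beta (MSR_NuiApi.h) and plain Winsock
// UDP, sending one text line per tracked joint to localhost.
#include <winsock2.h>
#include <cstdio>
#include "MSR_NuiApi.h"                      // Kinect SDK beta header
#pragma comment(lib, "ws2_32.lib")

void SendJointsUDP( const NUI_SKELETON_FRAME &frame )
{
    static SOCKET s = INVALID_SOCKET;
    static sockaddr_in dest;
    if ( s == INVALID_SOCKET )
    {
        WSADATA wsa;
        WSAStartup( MAKEWORD(2,2), &wsa );
        s = socket( AF_INET, SOCK_DGRAM, IPPROTO_UDP );
        dest.sin_family      = AF_INET;
        dest.sin_port        = htons( 7001 );            // assumed port
        dest.sin_addr.s_addr = inet_addr( "127.0.0.1" ); // localhost
    }

    for ( int i = 0; i < NUI_SKELETON_COUNT; i++ )
    {
        const NUI_SKELETON_DATA &sk = frame.SkeletonData[i];
        if ( sk.eTrackingState != NUI_SKELETON_TRACKED )
            continue;

        // One datagram per joint: "skeletonIndex jointIndex x y z"
        for ( int j = 0; j < NUI_SKELETON_POSITION_COUNT; j++ )
        {
            const Vector4 &p = sk.SkeletonPositions[j];
            char buf[128];
            int n = sprintf_s( buf, "%d %d %f %f %f\n", i, j, p.x, p.y, p.z );
            sendto( s, buf, n, 0, (const sockaddr*)&dest, sizeof(dest) );
        }
    }
}
[/code]

On the Touch side, a UDP In DAT listening on the same port would then collect those lines for splitting into a table.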

To get the image stream you will need a TouchDesigner FTE Commercial License; the joint information also comes in fine in TouchDesigner FTE.

Attached is a zip file with the SkeletalViewer.exe as well as a simple TouchDesigner control panel that prepares the data for you.

You will have to install the Microsoft SDK beta 2 for this to work…

Cheers
Markus

markus, i can’t thank u enough. fantastic! i will dive into this in the a.m. :smiley:

alan

Here’s a link for the SDK…
http://www.microsoft.com/download/en/details.aspx?id=27876

(microsoft’s download links don’t work for me at present though)

thanks heaps guys, I am going to get sooooo distracted here…

rod.

Wow, fantastic, so much cleaner and faster skeleton recognition than with the PrimeSense libraries… thank you so much for creating this.

One question: I can’t see the Shared Memory TOPs… latest build and Pro license. The error is “warning: Shared Memory Segment named touchSHM1 not found (/project1/sharedmemin1)”

Ah, my fault - the Shared Memory TOPs should be looking for touchSHM and touchSHMdepth instead of the names currently in the Shared Memory TOP. Please change this in the file or just use the updated version attached to this message…

Yep, working great now.

RFE - is it possible to get the depth map in with the user color assigned, as in the skeletal viewer? It would be great to have the user ID in the map.

Oh, one more thing: not sure what the “k” is for, but selecting it causes the skeletal viewer to crash.

Thanks again, a really great pipeline!

Hey, does anyone have the correct link for the appropriate SDK? I can reach the page at Microsoft Downloads, but the actual download links throw an error for me. If it works for you guys, maybe it’s locked to North American IP addresses for now?
In which case, could someone please send me the 64-bit version and the readme file? :slight_smile:

rod.

hey Rod,

the link you sent worked fine for me (~38MB). Email me directly, jeff at eyevapor dot com, and we’ll set up a way to get it to you… I can post it on my FTP or whatever.

Jeff

no worries jeff, I tried the link from the office and it came down fine. must be my IP address at home for some reason.

looking forward to plaaaaying with this!

rod.

Hey Snaut,
Love your work. This is a really cool shared memory app.

I was able to spread the “RealDepth” values over the R, G & B channels like so:

[code]RGBQUAD CSkeletalViewerApp::Nui_ShortToQuad_Depth( USHORT s )
{
    bool hasPlayerData = HasSkeletalEngine(m_pNuiInstance);

    // With the skeletal engine running, the low 3 bits of each depth
    // pixel hold the player index and the upper 13 bits hold the depth.
    USHORT RealDepth = hasPlayerData ? (s & 0xfff8) >> 3 : s & 0xffff;
    USHORT Player    = hasPlayerData ? s & 7 : 0;   // unused in this mapping

    // Split the 13-bit depth range into three bands, one per channel:
    // red ramps over 0-1366, blue over 1368-2731, green over 2732-4095.
    // Out-of-band values are clamped to 1, which maps to full brightness
    // below, so each channel only varies inside its own band.
    int xyr = RealDepth;
    int xyg = RealDepth;
    int xyb = RealDepth;
    if ( xyr > 1366 )                 xyr = 1;
    if ( xyg <= 1367 || xyg >= 2732 ) xyg = 1;
    if ( xyb < 2732 )                 xyb = 1;

    RGBQUAD q;
    q.rgbRed      = 255 - (BYTE)(256 * xyr / 1366);
    q.rgbBlue     = 255 - (BYTE)(256 * xyg / 2732);
    q.rgbGreen    = 255 - (BYTE)(256 * xyb / 4096);
    q.rgbReserved = 0;   // was left uninitialized before

    return q;
}
[/code]

I know my code sucks, but it does work, and I was able to get a depth-image particle cloud type thing like this in Touch via the Screen Grab TOP. Is there any chance you could post another version of your shared memory app with this code in it?
Cheers,
Adam

Thanks for the example, Snaut. Is there any way to receive the depth channel with the user color?
Thanks.

Hey, has anyone got the official Kinect SDK to work with Touch yet?

I’m waiting over here, with ‘bated breath’, for some C-coder/madperson to publish an OSC/Touch bridge so I can use the Kinect4W I just bought. In the meantime I did manage to load and run some of the examples in the SDK and confirm that it does work. My programming skills stop at BASIC, though, so I guess I’m stuck. I wonder if I should dust off that part of my brain and take a look at Visual Basic? I’m not sure if that would even be viable. I already wear too many hats, so I should prolly just wait on the sidelines… it’s killing me though

So a few days ago I found a C# WPF program that sends the 20 Kinect points out over OSC to Max. Hopefully soon I’ll have some time to rewrite the parts that are needed to make it work with Touch. It should not be too bad; it was built for the beta, and they have switched up how the .dlls are organized (from multiple down to one), so it may take a bit to rip it apart and get the different methods working again. When I finish it I’ll post a link to it.

Looking forward to that, Andrew! What are the prospects for getting other non-skeleton data into Touch, like a depth mask channel? Near mode in Touch is a very interesting prospect. It sounds like Microsoft doesn’t even know how it will be utilized yet. I’m hoping they refine near mode to track things like just hand skeletons (all 5 fingers).

I think if I had just a depth channel coming into touch I could be thoroughly distracted for a long time.

some ‘lite’ reading I’ve been doing:

groups.google.com/group/openkine … 3656fa95b8

So instead of rewriting someone else’s app to give us the needed OSC information, I found another one that will work just fine with Touch!

908lab.de/?page_id=325

A few things to note about this one, though. It pushes everything in on one line, so you will need to break it apart into a table to use it. Also, when you first get it going, the OSC In will fill up with a massive number of rows, which makes it look like it is capable of tracking multiple people. That is false; just limit the rows to 20 and save yourself the trouble. Currently this app can only track a single person, so don’t worry about the IDs coming up differently per 20 lines.

The developer said that currently it can only track the skeleton data of one person, not the depth or anything else the Kinect is capable of, but those functions will hopefully be added in the future. Hope this helps!

Hey, thanks for digging that up, Andrew. I’ve got skeleton data coming into Touch from Kinect4W - plenty to mess around with while people build this bridge!

I was in the SDK examples and saw that near mode generates a single ‘centroid’ average as soon as it recognizes a humanoid, sometimes for half the body, but not very cleanly. Looking forward to what people can come up with once this depth channel is fully opened up to Touch… meanwhile, plenty to work with in skeleton mode!

Just made a quick and dirty implementation of Kinect For Windows skeleton info to OSC on the localhost. I posted the source and the executable here:

code.google.com/p/osc-kinect/

It uses the Microsoft SDK. It’s rudimentary and may have bugs, but I’ve been using it without issues so far. If you do modify the source, maybe send me the changes, or make them by checking out the code with Subversion on Google Code.

Right now you run it from the command line: OSCKinect [-p port]
The default port is 7000.

From the same link above you can download a sample .tox that works with it.

Some info on the format of the messages it sends out can also be found at the link.
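
For anyone who has never looked at OSC at the byte level, here is a minimal sketch of what one such message looks like on the wire. The address pattern below is made up; the actual addresses OSCKinect sends are the ones documented at the link. The sketch only illustrates the OSC encoding itself: a null-padded address string, a “,fff” type-tag string, and big-endian float32 arguments, packed into one UDP datagram.

[code]// Illustrative only: the real address patterns OSCKinect uses are
// documented at the Google Code link, not here. This builds a raw OSC
// message by hand and sends it as a single UDP datagram via Winsock.
#include <winsock2.h>
#include <cstring>
#pragma comment(lib, "ws2_32.lib")

// Append a string to the buffer, null-terminated and padded to a
// 4-byte boundary, as the OSC spec requires.
static int PadString( char *buf, int pos, const char *s )
{
    int len = (int)strlen( s ) + 1;          // include the terminator
    memcpy( buf + pos, s, len );
    while ( len % 4 )                        // pad to a multiple of 4
        buf[pos + len++] = '\0';
    return pos + len;
}

// Append a float32 argument in big-endian byte order.
static int PadFloat( char *buf, int pos, float f )
{
    unsigned long n;
    memcpy( &n, &f, 4 );
    n = htonl( n );
    memcpy( buf + pos, &n, 4 );
    return pos + 4;
}

void SendJointOSC( SOCKET s, const sockaddr_in &dest,
                   const char *address, float x, float y, float z )
{
    char buf[256];
    int pos = 0;
    pos = PadString( buf, pos, address );   // e.g. "/skeleton/0/hand_l" (made up)
    pos = PadString( buf, pos, ",fff" );    // type tags: three floats follow
    pos = PadFloat ( buf, pos, x );
    pos = PadFloat ( buf, pos, y );
    pos = PadFloat ( buf, pos, z );
    sendto( s, buf, pos, 0, (const sockaddr*)&dest, sizeof(dest) );
}
[/code]

An OSC In DAT listening on the same port decodes such a datagram back into its address and float arguments.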

Looks like official Kinect nodes are coming to Touch; check the wiki.