Azure Kinect?

Hey, does anyone know if there are plans to include Azure Kinect in the future?
I was looking into the Kinect and it looks like it has a great depth map. Right now sales are limited to the US and China, but I’m thinking of getting one. I’d like to know if there are plans to implement it in TD, and what the timeframe might be.

Or is there a way to do it myself?

We are actively working on it right now. You could do it yourself using the C++ TOP and CHOP in the meantime if you wanted, but official support is coming.
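For anyone who wants to experiment before official support lands, here is a minimal sketch of the DIY route - in Python via the third-party pyk4a wrapper rather than the C++ SDK, purely as an illustration - that grabs raw depth frames you could then push into TD yourself:

```python
# Illustrative only: reads depth frames with the third-party pyk4a wrapper.
# (The route mentioned above would wrap the k4a C SDK in a C++ TOP/CHOP instead.)
from pyk4a import PyK4A

k4a = PyK4A()    # open the default device with the default configuration
k4a.start()      # start the cameras

capture = k4a.get_capture()
if capture.depth is not None:
    depth = capture.depth            # uint16 numpy array of depth in millimeters
    print(depth.shape, depth.dtype)  # e.g. (576, 640) uint16 for NFOV unbinned

k4a.stop()
```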

Cool, I can’t wait.
:slight_smile:

Am interested also. Thanks!

Can’t wait indeed! :stuck_out_tongue:

How’s it going? Is there any update for a future beta version?

Yes! It’s ready to be tested. Send us an email support@derivative.ca and we can send you a build link.

Hello,
may I ask whether it is possible to use multiple Azure Kinect devices in order to get cleaner skeleton tracking? Based on the documentation for the experimental Azure Kinect support, I gather it is perfectly fine to use an Azure Kinect CHOP to process data from a single sensor.

However, I am curious whether it is possible to first align data from multiple sensors and then run skeleton tracking on this aligned and combined point cloud. Or is it possible to run multiple Azure Kinect CHOPs (each processing data from a single device) with something like per-tracker accuracy information that could later be used to determine the best position for each tracker?

I am not sure whether the Microsoft SDK is prepared for multi-sensor skeleton tracking yet - I have not found much information on this, so I thought I might ask here. Thanks.

As far as I’m aware there is no way to use the data from multiple cameras to improve the body tracking system. I believe connecting multiple Kinect Azures is just to sync their capture times together - their image and skeleton outputs are independent.

You can definitely align the point clouds from multiple Kinects inside TouchDesigner, but there’s no way to feed that information back into the Kinect SDK to calculate new tracking results.
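For illustration only: once you have estimated the extrinsic transform between two devices yourself (e.g. from a checkerboard calibration - the SDK won’t compute cross-device extrinsics for you), aligning camera B’s points into camera A’s frame is a single rigid transform. A minimal numpy sketch:

```python
import numpy as np

def align_points(points_b, extrinsic_b_to_a):
    """Transform an (N, 3) point cloud from camera B's frame into camera A's.

    extrinsic_b_to_a: 4x4 rigid transform (rotation + translation) that you
    estimate yourself; it is not provided by the Kinect SDK.
    """
    n = points_b.shape[0]
    homogeneous = np.hstack([points_b, np.ones((n, 1))])  # (N, 4) homogeneous coords
    aligned = homogeneous @ extrinsic_b_to_a.T            # apply the transform
    return aligned[:, :3]

# Merging is then just concatenation in the shared frame:
# merged = np.vstack([points_a, align_points(points_b, T_b_to_a)])
```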

You could theoretically merge the skeleton results from two CHOPs, but I’m not sure what the quality would be like (especially if you have multiple players in the scene). The next build will have confidence values that could theoretically be used to select a skeleton or merge joints from different skeletons.

So far the results from the Azure body tracking seem to be pretty good compared to the older versions. There have been some complaints online about lag, but the body tracking is still pre-release so I expect there will still be a lot of changes.

Thank you very much for the information.
Great - I guess confidence values (once available) could be used to improve skeleton results from multiple devices. I am not sure if it would work as expected, but imagine the following procedure (a rough sketch follows the list):

  1. Process all skeletons from multiple devices by searching for trackers within a certain radius. (Only trackers above some confidence threshold would be used in the search, in order to eliminate errors.) If some skeletons are “close enough”, they would be identified as the same player.
  2. Average the tracker positions using the array of values that the same tracker holds for the currently processed player, weighting each position value by its confidence.
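A minimal numpy sketch of this procedure (the threshold, radius, and data layout are my assumptions, not anything from the SDK):

```python
import numpy as np

def merge_joint(positions, confidences, min_conf=0.5):
    """Confidence-weighted average of one joint seen by several cameras.

    positions:   list of (3,) arrays, the same joint from each device
    confidences: list of floats in [0, 1] reported per device
    min_conf:    hypothetical threshold below which a reading is ignored
    """
    pts = np.array(positions, dtype=float)
    w = np.array(confidences, dtype=float)
    mask = w >= min_conf                 # step 1: drop low-confidence trackers
    if not mask.any():
        return None                      # no camera trusts this joint
    w = w[mask]
    return (pts[mask] * w[:, None]).sum(axis=0) / w.sum()  # step 2: weighted mean

# Matching players across devices (step 1) could be as simple as checking
# that a reference joint from each device lies within some radius:
def same_player(pelvis_a, pelvis_b, radius=0.3):
    return np.linalg.norm(np.asarray(pelvis_a) - np.asarray(pelvis_b)) < radius
```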

Do you think this approach could possibly work? I don’t have access to an Azure Kinect since it is not yet available in Europe, but I am looking forward to testing it out :slight_smile:

Does “And here’s the best part for live performance and interaction design – you can freely combine as many cameras as you want, and sync them without any weird tricks.” mean that multiple Kinect Azures automatically work like one device?
Sentence from: Azure Kinect promises new motion, sensing for art - CDM Create Digital Music

@monty_python Yes, theoretically what you’re describing would work, but once you start merging skeletons you’re potentially breaking the constraints of the system (fixed bone lengths, rotation limits, etc.), so it depends on how accurate you need to be.

Once you match up players, you could pick the camera that has the highest total joint confidence and go with that version (potentially with some blending as you switch), which may help when one camera becomes obstructed.
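A rough sketch of that select-and-blend idea (the data layout and smoothing factor are assumptions, not part of the SDK or the CHOP):

```python
import numpy as np

def pick_skeleton(skeletons):
    """Choose the skeleton whose joints have the highest total confidence.

    skeletons: list of dicts {'joints': (J, 3) array, 'conf': (J,) array},
    one entry per camera for the same matched player.
    """
    totals = [s['conf'].sum() for s in skeletons]
    return skeletons[int(np.argmax(totals))]

def blend(previous, current, alpha=0.2):
    """Exponential smoothing so joints don't pop when the chosen camera
    switches (alpha is an arbitrary factor, tune to taste)."""
    if previous is None:
        return current
    return (1.0 - alpha) * previous + alpha * current
```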

@DIOVONIAV We only have one device to test with right now, but my understanding is that the automatic synchronization features allow multiple Kinect cameras to capture data at exactly the same time; all of their outputs still remain separate.

So in TouchDesigner you will have a Kinect Azure TOP for each device with its own outputs, but it will be easier to do things like stitching the video streams together or merging the point clouds into a single scene.

Thank you very much for the information. That sounds good.
May I also ask whether it is possible to select a specific GPU to perform the skeleton tracking?
Let’s say I had a dual-GPU setup processing data from two Azure Kinects and I would like to assign one GPU per Azure Kinect CHOP…

CHOPs are CPU-based, and for TOPs TouchDesigner works most effectively with one GPU per process; using multiple GPUs in a single TD process isn’t an ideal setup.
Refer to: Using Multiple Graphic Cards - Derivative

Yep, sure, I understand. I was referring to the Azure Kinect Body Tracking SDK that is used by the Azure Kinect CHOP. If I am not mistaken, it uses CUDA to compute the skeleton. I am not sure how the SDK itself handles a multi-GPU setup, so I was wondering whether there is a way to specifically assign a GPU to each body tracking process…

By default the Kinect Azure does use CUDA for the body tracking AI model and then that information is passed over to the CPU for the CHOP. From what I’ve seen, Microsoft doesn’t give us any control right now over how the workload is distributed to the GPUs. This may change in the future since the SDK is still pre-release. The next version does have CPU body tracking as well, but it is much slower than the GPU version.

I see, all right, thanks once again for the information! :slight_smile:

Update : For those looking for the Kinect Azure build, it is now available in the latest experimental build here: Releases | Derivative