Kinect Point Cloud texture

Hello, is there any suggestion on how to align this image with a projector?
I’m aligning the camera manually in 3D space, trying to match the projector position, and I set the camera’s FOV to a value close to the projector’s FOV, but it doesn’t work.

Is there something else I need to do?

Hi All,

Just curious - are you running two Kinect sensors (K4W v2) on the same machine in the same Touch process?

I’m looking at the best way to use multiple Kinect v2 sensors in TD - it looks like they may have to run on separate machines with Touch In/Out between them…?

How did you go with this? Did you solve your problem? I am currently looking at how to calibrate with the real-world view of the projector.

Best,

O.

Hi @malcolm,
I was playing with this today and saw that you did a great job of assigning depth to the 1080p RGB texture.

This is nice and very useful, but I’d rather get the 512x424 depth image with the color data. Is there a way to do so? I feel like the color-to-depth mapper is in C and so we can’t use it without a change in the TOP code.

Thanks

Hello there

I want to record a performance. Should I record both the point cloud texture and the RGB camera texture? Or does the point cloud texture already contain the RGB color from the color camera?

Thanks!

The point cloud texture provides the XYZ values of the points.
The color camera only provides the RGB data.

They are two separate textures.

New to TD.

Experimenting with the point cloud and the example offered, thank you…!

Not sure how to explain my issue. I seem to have a large shadow/vacuum of points around the outline of my body that moves to cover the points representing my body the closer I get to the Kinect. Is this an artefact or a byproduct of the process?
many thanks

How can I export the point cloud data into Houdini? Anyone?

To export to Houdini…there are so many ways.

  • Use a Table DAT and write to a CSV (see the sketch after this list)
  • Write a .chan file that can be read in Houdini through CHOPs
  • Write a sequential .bgeo sequence
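
If you go the CSV route, here’s a rough sketch of one way to do it: instead of going through a Table DAT, a Text DAT run as a script can read the point cloud TOP directly with numpyArray(). The operator names ('kinectpoints', 'kinectcolor') and the output path are just placeholders for illustration - adjust them to match your network.

```python
# Dump a 32-bit point cloud TOP (and an aligned colour TOP) to a CSV
# that Houdini can read as a point table.
# 'kinectpoints' / 'kinectcolor' are placeholder operator names.
import numpy as np

pos = op('kinectpoints').numpyArray()   # H x W x 4, RGBA = XYZ + alpha
rgb = op('kinectcolor').numpyArray()    # H x W x 4, aligned to the depth camera

pts = pos.reshape(-1, pos.shape[2])
cols = rgb.reshape(-1, rgb.shape[2])

# Drop pixels with no depth data (XYZ all zero)
mask = np.any(pts[:, :3] != 0, axis=1)

rows = np.hstack([pts[mask, :3], cols[mask, :3]])
header = 'P[x],P[y],P[z],Cd[r],Cd[g],Cd[b]'
np.savetxt(project.folder + '/pointcloud.csv', rows,
           delimiter=',', header=header, comments='')
```

On the Houdini side, a Table Import SOP (or a small Python SOP) can then map those columns onto P and Cd point attributes.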

Try outputting it as .exr with the Movie File Out TOP. Set the Movie File Out TOP to output as an Image, or an Image Sequence. Make sure of course your input is a 32-bit floating point TOP.

Related to much earlier posts in this thread, the next build of the 2019.10000 series will have a Depth Point Cloud output which is lower resolution but solves some of the artifact issues the color-space point cloud has.

Digging out this old thread: I also struggle with this issue, on the newest build of TD using a Kinect Azure.

Is this weird contour shadow because of a narrow distance between the person and the wall behind?

I’m not entirely sure which shadow you’re referring to (posting a screenshot might help), but I know there are some artifacts when you remap the color image to the depth camera (using the Align to Other Camera parameter). This is because of the distance between the two lenses on the sensor and unfortunately there isn’t much that can be done about it (it is worse when you’re closer to the camera).

You can also get some black (unknown depth) outlines in the depth image which I believe are caused by difficulties capturing the reflected IR light on surfaces angled away from the camera. You could probably fill some of these in with a shader that uses a min or max of the surrounding pixels.
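
As a rough illustration of what that fill would do (in numpy rather than an actual GLSL TOP, just to show the idea), something like this replaces unknown-depth pixels with the largest of their four neighbours:

```python
# Not a GLSL shader - a numpy sketch of the neighbourhood-fill idea:
# pixels with unknown depth (value 0) are replaced by the maximum of
# their immediate neighbours. 'depth' is assumed to be a 2D float array.
import numpy as np

def fill_unknown(depth):
    padded = np.pad(depth, 1, mode='edge')
    neighbours = np.stack([
        padded[0:-2, 1:-1],  # pixel above
        padded[2:,   1:-1],  # pixel below
        padded[1:-1, 0:-2],  # pixel to the left
        padded[1:-1, 2:  ],  # pixel to the right
    ])
    filled = depth.copy()
    holes = depth == 0
    filled[holes] = neighbours.max(axis=0)[holes]
    return filled
```

In a GLSL TOP you would do the equivalent per pixel with a few texture lookups at offset coordinates and take the max where the centre sample is zero.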

Thanks for your quick response @robmc :slight_smile:

This was the reason, along with a narrow distance, as you said.

I’ve also found an article on this by Microsoft; these shadows are an occlusion that happens due to a lack of information in the background:

Unfortunately I am not good at writing shaders and it would take a lot of work to find the right tweaks for this, but I was planning to get a second Kinect Azure and I think that syncing them will improve the image, and maybe this way I can get rid of these occlusions.

In the meantime I will start to learn how to write an appropriate GLSL shader :slight_smile:

No problem. A second camera can definitely help fill in some of the data obstructed in the other camera’s view. Unfortunately, the sensor doesn’t automatically merge the data from both sources, but there are ways to combine it depending on how you plan to use it.

Let me know if you need any more help.

@robmc: I did not think that I would ask so quickly, but is there any documentation that you would recommend on combining two Kinect point clouds into a single TOP for instancing :slight_smile: ?

Unfortunately, I’m not aware of any specific documentation on the process at the moment; however, the basic setup isn’t too difficult.

Generally, all you need in your network are your two Kinect Azure nodes in Point Cloud mode, two pointTransform components (from the palette) and a pointMerge component (also in the palette).

This toe file has the basic layout: pointMergeExample.toe (10.3 KB)

The potentially tricky part is aligning the point clouds from the two cameras so that they appear in a consistent 3D space (otherwise each set of points is positioned relative to the camera that captured them). The point transform components will allow you to shift and rotate the points relative to each camera in order to align them, but there are multiple ways you can figure out the correct values to use here. Depending on how you plan to use the data, you can also choose whether you want to transform both sets of points into a new unified space, or just shift one set to align with a primary camera.

Depending on the level of accuracy you need:

  • you can physically measure the camera position and angle
  • you can place one or more reference objects in the scene that are visible from both cameras and then manually shift/rotate the points until they align sufficiently (see the sketch after this list)
  • or you can use some computer vision algorithms to automatically calculate the orientation of the camera (the kinectCalibration component in the palette does some of this, but was designed for a projector).
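
For the reference-object approach, here is a minimal sketch of how you could solve for the transform once you have sampled the same markers from both point clouds. This isn’t a built-in TouchDesigner feature, just plain numpy using the standard SVD/Kabsch method, and the point values below are placeholders rather than real measurements:

```python
# Solve for the rotation R and translation t that map camera B's points
# onto camera A's, given the same physical markers measured in each
# camera's point cloud. The coordinates below are placeholder values.
import numpy as np

def rigid_transform(src, dst):
    """Return R (3x3) and t (3,) such that dst ~= src @ R.T + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Same markers as seen by each camera (metres, placeholder values)
pts_camA = np.array([[0.0, 0.1, 1.5], [0.5, 0.1, 1.6], [0.2, 0.6, 1.4]])
pts_camB = np.array([[0.3, 0.1, 1.7], [0.8, 0.2, 1.6], [0.5, 0.6, 1.6]])

R, t = rigid_transform(pts_camB, pts_camA)
# R and t can then be converted into the rotate/translate values on the
# pointTransform component for camera B.
```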

Hope that helps.

That definitely helps, thank you so much :slight_smile:
I will make some tests and eventually post some results here.

@robmc Thanks a ton for posting the pointMerge example. I’m able to get the point clouds aligned pretty well using this; however, I’m having trouble overlaying the Kinect color camera feed on the point cloud, like in the kinectAzurePointcloud technique. I’ve tried using pointTransform from the kinectAzurePointcloud node, but it doesn’t retain the color overlay. Could you please give some advice about the best way to achieve this? Thank you!

No worries. I’m not sure exactly where you’re running into trouble, but I’ve attached a modified version of the pointMerge example that includes the colour camera data for colouring the point cloud.

The main thing to remember is checking the ‘Align Image to Other Camera’ parameter on the Kinect Azure Select. This will make sure the pixels in the colour image line up with the correct pixels in the point cloud image.

I’m then merging the two colour images using an identical pointMerge component and then feeding them into the pointRender component that sets up the geometry component and material for rendering.

Hope that helps. Let me know if you have any questions on it.

pointMergeColorExample.toe (18.3 KB)
