Kinect2 - Projector Calibration

Hi Markus
I would like to project the mask that I derive from the Kinect player index onto the moving player.
Is there a way I can use the camera data from this calibration for that purpose?
greetings knut

Hi Knut,

that should be possible. Your Kinect is assumed to be positioned at the root (0,0,0) looking straight down the z axis. The trick would be to position a rectangle in front of the Kinect (straight down the z axis, no other translation or rotation) that holds the video texture from the Kinect. A good question is how far out it would have to be - it needs to be far enough to be seen by the TouchDesigner Camera… Maybe you would have to play around with distance and size a bit to get it right…
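
As a rough starting point for the size, here is a minimal sketch of the frustum math, assuming the Kinect 2 color camera's approximate 84.1 x 53.8 degree field of view (swap in roughly 70.6 x 60 for the depth camera) - the FOV figures are assumptions, so treat it as a starting point only:

import math

def rect_size(distance, h_fov_deg=84.1, v_fov_deg=53.8):
    # width/height a rectangle needs at `distance` metres down the
    # z axis to exactly fill a frustum with the given field of view
    w = 2 * distance * math.tan(math.radians(h_fov_deg) / 2)
    h = 2 * distance * math.tan(math.radians(v_fov_deg) / 2)
    return w, h

# e.g. a rectangle 3 m out would need to be about 5.4 x 3.0 m
print(rect_size(3.0))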

Best
Markus

Hi Markus
OK, I understand. I will try that.
Thanks for your help
Knut

Does this work for Player Index? If so, any pointers please.

Thanks

Justin

I tried to follow the advice from Markus with no success.
I need a little bit of time to understand where the problem is; I'll come back with questions if necessary.
Knut

I was trying to get this to work with the Intel RealSense for a while.

I couldn't get any reliable results. I think two things may be an issue.
It may be that the camera FOV is different for the Kinect vs the RealSense, so I tried the different OpenCV toggles to guess focal length and aspect; that didn't help.
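
In case it helps anyone else dig: I believe those toggles correspond to OpenCV calibration flags (an assumption on my part - I haven't read the tox's script line by line). A minimal sketch with synthetic stand-in data in place of the real point pairs:

import cv2
import numpy as np

# synthetic stand-ins for the collected pairs: 3D positions in sensor
# space and the matching 2D pixels they project to
rng = np.random.default_rng(1)
obj_pts = rng.uniform([-1, -1, 2], [1, 1, 4], (30, 3)).astype(np.float32)
K_true = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
img_pts, _ = cv2.projectPoints(obj_pts, np.zeros(3), np.zeros(3), K_true, None)

# with non-planar 3D points OpenCV requires an initial intrinsic guess,
# which is presumably what the "guess focal length" toggle feeds in
K_guess = np.array([[600.0, 0, 320.0], [0, 600.0, 240.0], [0, 0, 1]])
flags = cv2.CALIB_USE_INTRINSIC_GUESS | cv2.CALIB_FIX_ASPECT_RATIO
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    [obj_pts], [img_pts.astype(np.float32)], (640, 480), K_guess, None, flags=flags)
print('reprojection RMS:', rms)  # a large RMS usually points at bad pairs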

The next guess was that the RealSense RGB camera doesn't have the same FOV as the 3D depth image. I would guess that if the point pairs are compared, it would be sampling an incorrect position.
I tried the "depth aligned to color" setting, but it seems like the point cloud TOP doesn't have this option. The "color aligned to depth" setting doesn't find any checkerboards.

Any more suggestions about what to try to get the realsense to calibrate?

I was also guessing that the depth data would have different values, considering the RealSense can be used at closer proximity. But that shouldn't matter; the calibration shouldn't need actual distances, should it?

Sometimes it appeared that the depth data was not found on the whiteboard I was holding. Would that record a 3D position of -1 or discard the point pair?
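
Whichever it does, filtering such samples before calibrating seems safer. A small numpy sketch - the xyz/uv arrays here are hypothetical stand-ins for the sampled 3D positions and matching projector pixels, and I'm assuming invalid depth shows up as -1, 0 or NaN:

import numpy as np

# hypothetical stand-ins for the collected point pairs
xyz = np.array([[0.1, 0.2, 1.5], [0.0, 0.0, -1.0], [0.3, -0.1, np.nan]])
uv = np.array([[512, 300], [640, 360], [700, 420]])

# keep only pairs whose 3D sample has a finite, positive depth
valid = np.isfinite(xyz).all(axis=1) & (xyz[:, 2] > 0)
xyz, uv = xyz[valid], uv[valid]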

I don't have a RealSense so I can't comment on that.

But here are some learnings from using the Kinect a lot:

  1. 7 to 10 point pairs are normally sufficient with a Kinect 2.
  2. Lots of problems if I try to get data when using a short-throw projector. If I include data from checkerboards that are closer to the edges of the projection area, the data becomes unreliable - the projected image does not hit the person any more. I never saw that on projectors with a "normal" lens. It seems to be a problem of the projector's lens, although I would expect the calibration to cover that…
  3. Still having problems using the Kinect player index to project a mask onto the person. I can't get it to work in my environment.

good luck

I'm having problems using the component as a camera. Has anyone successfully calibrated the camera object against the RGB camera from the Kinect, or is that maybe not possible?

A workaround would be to use the RGB camera as a color map and instance it on the point cloud? Maybe that is the only solution, or is there a more straightforward approach?

Sorry, not sure I quite understand. Does it correctly calibrate the point cloud for you?

Cheers
Markus

Yes. The point cloud calibrates fine. I wanted to calibrate my camera using only the RGB (an ordinary webcam), not the point cloud. But I guess that is not what this method does. :slight_smile:

Hi @snaut, thanks for that! Finally played with it for a second; here's a junky proof of concept of interactive projection mapping :wink:


Snaut (Markus)

Thanks so much for this tox. Super effective and clever. I love using the projector for sizing the grid on a big whiteboard that I move around the room. So much cooler than using multiple sizes of cardboard with checkerboards printed on them.

What I would like to do is add a high-def camera input that will also get calibrated to the Kinect + projector combo. By using the projector as the grid source, I don't have to use the IR output of the Kinect and clever real-world lighting to show the checkerboard like older methods of aligning a Kinect with a different RGB camera.

I think it would be a super clever and useful object that perhaps others would use.

It would allow calibrating a Kinect with a Hi-Def camera (and projector) without the nasty process of using the IR stream to try and read a checkerboard.

Win Win Win

I've spent some time going through the KinectCalibration.tox and have a basic understanding of what's going on in there. My Python and OpenCV skills are where I'm coming up short. I think I understand most of the OpenCV script, but I don't know how to add the other camera and the associated code to get its transform matrix, extrinsics and intrinsics, and tie the depth data to that camera as well without breaking what's already there.

Is this feasible? Overly complicated? Tips?

Cheers

Stuart

Hi Snaut,

thank you for sharing this Kinect Calibration Tool.
I've been trying to get it to successfully calibrate on and off, but I am struggling.

The kinectCalibration version that is shipped with TouchDesigner (2020.20020 & 2019.33840) seems to have some bugs in the Python code?

Lines 40-41 & 87-88 of the script seem to need to be updated to

gridResX = int(parent().par.Gridresw)
gridResY = int(parent().par.Gridresh)

for it to capture points.

It calculates intrinsic & extrinsic matrices, but the intrinsic matrix seems to have weird negative values?
The Kinect2 is just placed behind the projector.

[screenshot of the computed intrinsic & extrinsic matrices]

Do you have suggestions on the best way to debug this?

Thanks for your help!
Seb

Hi @infernalspawn,

thank you for pointing this out. The reference to the parameters is most definitely wrong - this will be fixed in the next build. I'm still looking at the values for the intrinsics.

Cheers
Markus

Hi @ControlFreak,

do I understand you correctly that you'd like to have the same functionality as Depthkit?

The main task would be creating a routine for calibrating the external camera to the Kinect so that the Kinect's depth data lines up with the camera's view.
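
For the record, a sketch of how the core of such a routine could look in OpenCV (this is the generic approach, not the tox's actual code, and the data here is synthetic):

import cv2
import numpy as np

# synthetic stand-ins: in a real routine obj_pts would be checkerboard
# corners sampled from the Kinect point cloud and img_pts the matching
# corners detected in the external camera's image
rng = np.random.default_rng(0)
obj_pts = rng.uniform([-0.5, -0.5, 1.5], [0.5, 0.5, 3.0], (20, 3)).astype(np.float32)

K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])  # assumed HD intrinsics
rvec_true, tvec_true = np.array([0.0, 0.1, 0.0]), np.array([0.2, 0.0, 0.1])
img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, None)

# recover the external camera's pose relative to Kinect space
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts.astype(np.float32), K, None)
R, _ = cv2.Rodrigues(rvec)
# [R | tvec] maps Kinect-space points into the camera's frame, so the
# depth data can be reprojected onto the HD image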

Cheers
Markus


Hi Markus

Depth keying against a high-def camera in real time is my main goal.

Yeah, Depthkit is doing the process of calibrating the two cameras like I want to achieve, but it always felt like a post-production tool. Perhaps that's changed. The user had to record streams, then line them up on a timeline and fine-tune offsets and such.

What I really like about your KinectCalibration.tox is that by using the projector as the checkerboard source, it makes it much easier to get some good point captures (and you don't have to print multiple sizes of checkerboard patterns). I find your tox elegant in many ways, but that part is my favorite.

I thought that making an enhanced version of your tox to include a second camera that could be lined up to the depth camera would be an improvement over their method of collecting calibration images. Perhaps relying on the point cloud data from the Kinect instead of the depth or IR streams like Depthkit does may lead to some surprising results.

Plus, it could be cool to make a physical camera rig that has a mini projector along with the Kinect and a high-def camera. Oh the art that could come.

Cheers

Stuart


Hi Markus

So I stayed up late last night and managed to modify the OpenCV script and network of the KinectCalibration.tox to add another camera, get points and do the calibration. I also got extrinsic and intrinsic matrices output for that camera.

So if I understand correctly, my first set of matrices is the Kinect's relation to the projector. My second is my other camera's relationship to the projector.

I suspect at this point I need to do some matrix math on the two to get the relationship of the Kinect to the second camera rather than to the projector.
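
If that reading is right - assuming both extrinsics map into projector space, which the tox may well define the other way around - the composition would be: go Kinect to projector, then invert the second camera's extrinsic to go projector to camera. A numpy sketch with placeholder rotations and translations:

import numpy as np

def to44(R, t):
    # pack a 3x3 rotation and a translation vector into a 4x4 transform
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, np.ravel(t)
    return M

# hypothetical extrinsics from the two calibration passes; in practice
# the 3x3 R would come from cv2.Rodrigues(rvec)
E_kinect = to44(np.eye(3), [0.0, 0.0, 0.0])  # Kinect space -> projector space
E_cam = to44(np.eye(3), [0.3, 0.0, 0.1])     # camera space -> projector space

# Kinect -> projector, then projector -> camera
kinect_to_cam = np.linalg.inv(E_cam) @ E_kinect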

Am I over the target?

Cheers


Hi @infernalspawn,

so the intrinsics look all good - Malcolm gave me a link to the details of the camera view matrix: OpenGL Projection Matrix
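
For anyone else puzzled by negative entries in the matrix: an OpenGL-style projection looks down the negative z axis, so some negative values are expected. A sketch of one common way to assemble such a matrix from OpenCV-style intrinsics - the exact signs depend on the principal-point and y-axis conventions used, so treat it as illustrative:

import numpy as np

def gl_projection(fx, fy, cx, cy, w, h, near, far):
    # the negative third/fourth-column entries come from OpenGL's
    # convention of looking down -z
    return np.array([
        [2 * fx / w, 0.0, 1.0 - 2.0 * cx / w, 0.0],
        [0.0, 2 * fy / h, 2.0 * cy / h - 1.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

print(gl_projection(1000, 1000, 960, 540, 1920, 1080, 0.1, 100.0))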

I also tested here and got a successful calibration after fixing the parameter reference error.

cheers
Markus

Hey Markus
Just to confirm: if I want to use a Kinect Azure, I just need to replace the Kinect v1/v2 specifics with the Kinect Azure CHOPs and it should just work… true?
Greetings knut

Hi @knut,

I haven't tested it yet with the Azure, but theoretically yes - it should work the same way. Just make sure to toggle the Align Image to Other Camera parameter in the Kinect Azure or Kinect Azure Select TOP where you fetch the point cloud.

Cheers
Markus