Trompe-l'œil and projection mapping

I’m trying some projection mapping, and having trouble with my eye camera.

First question: is there a default that matches the FOV of the human eye well enough? Is that affected by the scale I work at? I picked 1.0 = 100" out of convenience.

Then, as I move the eye point around (hopefully by head tracking, down the road), I will need items that lie exactly on one plane (z=0) to not move at all, so they correctly match up with physical features. I could do another pass, but that would be very inconvenient. Is a simple ‘look at’ enough?

I’m starting to think I need to place a virtual plane that represents my ‘screen’ and texture map that with my eye camera, rather than using my eye camera directly in a render pass?

Pretty new to TD, but a lot of OpenGL time.

Bruce

oi bruce!

did you check the MUTEK WORKSHOP?

derivative.ca/Events/2012/MUTEKWorkshop/

they cover a trompe-l'œil in Day 1 Part 4.

player.vimeo.com/video/45282609
player.vimeo.com/video/45282688
player.vimeo.com/video/45282724

~fxamis

I did watch that a few times, thanks. He isn’t trying to match something physical and virtual though, so he doesn’t touch on the matrix. But I’m using a lot of that.

Bruce

Watching that video again, there’s a magic hand wave moment when he uses a Texture SOP and ‘perspective from camera’ mode. I thought I understood what that did (and was going to use another technique, such as Kantan), but I now wonder if that mode is clever enough to do what I need - I’ll try.

For reference, the technique I need is outlined in various places, including this: http://csc.lsu.edu/~kooima/pdfs/gen-perspective.pdf

I’ve been calling it an asymmetric projection frustum, when dealing with it in 3D stereo systems. Apparently off-axis projection is another name.
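Coming from OpenGL, the core of it is just an asymmetric frustum whose extents follow the eye. A rough sketch of that part (not TD-specific, and assuming the screen lies on the z=0 plane with the eye in front of it):

```python
# Rough sketch: given an eye at (ex, ey, ez) and a screen rectangle on the
# z=0 plane spanning x in [xl, xr] and y in [yb, yt], the frustum extents at
# the near plane are just the screen corners scaled back toward the eye.
def offaxis_extents(ex, ey, ez, xl, xr, yb, yt, near):
    scale = near / ez          # ez = eye distance in front of the screen plane
    left   = (xl - ex) * scale
    right  = (xr - ex) * scale
    bottom = (yb - ey) * scale
    top    = (yt - ey) * scale
    return left, right, bottom, top   # feed these into a glFrustum-style matrix
```

As the eye moves, left/right/bottom/top change but the screen rectangle itself stays put, which is exactly the ‘z=0 plane doesn’t move’ behaviour I’m after.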

Bruce

I’m a step removed from this, but you may need to use the new Matrix CHOP/DAT option in the Camera COMP, on the view page. This would allow you to create a matrix manually from that whitepaper and use it in your rendering.
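If you go the DAT route, filling the table from a script is only a few lines. A rough sketch (the ‘projMatrix’ table name is just an example; it would be whatever DAT the camera’s matrix parameter points at):

```python
# Hypothetical helper: write a 4x4 projection matrix (a list of 4 rows of
# 4 values) into a Table DAT so the Camera COMP can reference it.
def write_matrix_dat(mat, dat_name='projMatrix'):
    dat = op(dat_name)
    dat.clear()
    for row in mat:
        dat.appendRow(row)
```

Depending on the conventions you use to build the matrix, you may need to transpose it before writing it out.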

Thanks Malcolm. It turns out that a plane (or a model, I presume) plus the perspective-from-camera texture coordinates is exactly what I’m looking for (or so it appears). There are quite a few handles that go away, and it’s a bit odd to tweak, but it’s a lot easier than a custom matrix, a bunch of channels, math, etc.!

It’s a great feature - very quick way to do it.

Bruce

Now we’re getting esoteric… but in case anyone else has considered this stuff.

The ‘perspective from camera’ setting builds a frustum that extends from the camera through the outer points of your defined ‘screen’ object. That makes it straightforward to track the camera around with respect to the screen. The issue is that this gives a fairly arbitrary FOV, which doesn’t match the human eye that well (when the camera is exactly where a person’s eyes would be). It’s trivial to move the camera with respect to the screen and get perspective that mimics the human eye, but then the position is ‘wrong’, and occlusion etc. would be different, I think.
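To put a number on it (made-up figures, and assuming for simplicity that the eye is centred on the screen):

```python
import math

# Example: with 'perspective from camera' the FOV isn't really a free choice;
# it falls out of the geometry. For a 40" tall screen viewed from 60" away
# (made-up numbers), the matching vertical FOV would be:
screen_height = 40.0   # same units as the scene
eye_distance  = 60.0
fov_v = 2 * math.degrees(math.atan((screen_height / 2) / eye_distance))
print(round(fov_v, 1))  # ~36.9 degrees
```

So the FOV is pinned by the screen size and viewing distance rather than by anything about the eye itself.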

But I may be overthinking it - that happens.

My second challenge relates to projector position. Thinking aloud here, but maybe someone can confirm. In order to place projectors at arbitrary positions and have them correctly see the real-world objects I’m projecting on (to fill in shadows from other projectors, etc.), I would need to have the physical objects appear twice: once, shaded as needed so that the virtual/eye camera can see them and draw my effects, and again, texture-mapped from the eye camera’s perspective, so that the projectors can see it and compensate for the surfaces they are hitting. Is this about right?

Bruce

Resurrecting a really old thread… but I recently tried implementing the method from the Mutek workshop. Here’s what I found: it basically works, but introduces a small amount of keystone error off axis. That may or may not be noticeable for 2D work. For 3D work, however, it effectively creates a “toe-in” stereo rig if you treat each camera separately. Toe-in 3D is supposed to be a bad idea (or so I’m told), and indeed, objects at the zero plane were not converging properly at extreme angles. I tried a variation where I kept the cameras parallel and then introduced a window shift to converge the views at the zero plane. This doesn’t work off-axis either, since a window shift is not really comparable to a lens shift; it happens in 2D only…

So… in the interest of doing this the right way, I went back to the white paper posted earlier (which proposes a method of building a projection matrix based on an arbitrarily defined frustum and viewpoint) and with Malcolm’s information on how projection matrices are built in Touch, I was able to get it implemented. As far as I can tell it’s working perfectly!
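For anyone following along, the math works out to something like this NumPy sketch (my paraphrase of the white paper rather than the exact CHOP network; pa, pb, pc are the screen’s lower-left, lower-right and upper-left corners in world space):

```python
import numpy as np

def generalized_projection(pa, pb, pc, pe, near, far):
    """Off-axis projection per Kooima's 'Generalized Perspective Projection'.
    pa, pb, pc = screen corners (lower-left, lower-right, upper-left),
    pe = eye position."""
    pa, pb, pc, pe = (np.asarray(p, dtype=float) for p in (pa, pb, pc, pe))

    # Orthonormal screen basis: right, up, normal (pointing toward the viewer)
    vr = pb - pa; vr /= np.linalg.norm(vr)
    vu = pc - pa; vu /= np.linalg.norm(vu)
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)

    # Vectors from the eye to the screen corners
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)                      # eye-to-screen-plane distance

    # Frustum extents on the near plane
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    # Standard asymmetric (glFrustum-style) projection
    P = np.array([
        [2*near/(r-l), 0.0,          (r+l)/(r-l),            0.0],
        [0.0,          2*near/(t-b), (t+b)/(t-b),            0.0],
        [0.0,          0.0,         -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0,          0.0,         -1.0,                    0.0]])

    # Rotate the screen basis onto the XY plane, then move the eye to the origin
    M = np.identity(4); M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn
    T = np.identity(4); T[:3, 3] = -pe

    return P @ M @ T                         # flatten to 16 values for the CHOP
```

In the network the camera already sits at the eye position, so depending on how you split things up you may only want to feed the camera P·M (or just P, if the screen is axis-aligned) and let the camera’s own transform handle the translation.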

That’s awesome. Nice work.

Plan to share it?

Bruce

Sure… the one oddity I’ve noticed is that I can’t seem to make it work if I don’t have my trompe-l’œil “window” perpendicular to the Z axis. Somewhere in here there’s an incorrect assumption… I’d also like to figure out whether there’s a way to make the Mutek workshop method work correctly in stereo vision, because this matrix method only works for rectangular, flat screens. Curved or dimensional screens wouldn’t work with it, though it should be more efficient when your application fits those constraints, since it doesn’t require a render, a re-texturing of geometry, and then another render.

To explain what the network is doing: the screenHelper COMP is just a simple way to specify the dimensions of your screen or window (which ultimately should correspond to your real-world display). The four inputs control the position of the left, top, bottom, and right sides. Internally, it generates 3 nulls that define the screen dimensions.

These are used by the eyeL and eyeR COMPs, which contain an implementation of the math described in the white paper mentioned earlier in this thread. The only inputs to these COMPs are the eye position (x, y, z). I avoided using Python and did it all with simple CHOPs, under the assumption that this would be more optimized in Touch than executing a Python script every frame. Ultimately, each of them creates a 16-channel CHOP that contains a projection matrix. This is fed into a camera (which is also positioned at the specified eye position).

The cameras are referenced by Render TOPs outside, which are then composited into an anaglyphic 3D image. While any 3D display technology could obviously be used for the output, the anaglyph does demonstrate that no matter where the viewpoint is, the rectangular box’s front face doesn’t move or exhibit any stereo disparity, which is expected because it lies in the window plane. The spiky ball hovers somewhere in front of the box. I’ve put some simple animation in the eyePoint constant to demonstrate this.
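If you wanted to drive eyeL and eyeR from a single tracked head position, the offset is basically this (a sketch; the IPD value and the choice to offset along the screen’s right axis are assumptions, and head roll would need a tracked orientation):

```python
import numpy as np

# Hypothetical helper: derive left/right eye positions from one head position
# by offsetting half the interpupillary distance along the screen's right axis.
def stereo_eyes(head_pos, screen_right, ipd=0.065):   # ipd in scene units (assumption)
    right = np.asarray(screen_right, dtype=float)
    right /= np.linalg.norm(right)
    half = 0.5 * ipd * right
    head = np.asarray(head_pos, dtype=float)
    return head - half, head + half                    # (eyeL, eyeR)
```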

(was having trouble adding the attachment to this post for some reason… see my next post for the download)


hmm… having trouble with the attachment… should be here now, though
trompeLoeilassymetricfrustum.toe (11.5 KB)


I recently followed the steps in the 2012 video and saw the same issue. However, I can eliminate it by using a Subdivide SOP on the rectangle in the “plane” geo, before the “perspective-from-camera” Texture SOP. I think it comes down to UV-coordinate interpolation, but isn’t there a better solution than subdividing?