projection matrix dilemma

i’m new to the concept of projection matrices. here’s my dilemma.

i have successfully applied a projection matrix to a camera. i have a light that i want to use as a projector. i want the light to project a render of the camera’s view- matching all of the camera’s parameters. it’s easy to place the light in the proper location and rotate it properly- but i don’t know how to match the camera’s fov to the light’s projection map fov.

i’ve done this in the past without using a projection matrix, simply matching the light’s projangle parameter to the camera’s fov parameter. it could be a distant light or a cone light, it always works.

my camera and light both use the same projection matrix. but the matrix does not affect the light’s projangle.

is it possible to extract or access the actual camera fov value when a matrix is applied? in the camera’s view tab, all the parameters i’d like to access become greyed out and display default values when a matrix is applied.

i investigated using a custom projection on the light, but i don’t think that is what i want. it seems like it will not affect the projangle parameter.

Unfortunately we can’t automatically extract an FOV from a projection matrix because not all projection matrices have an FOV, such as an orthographic one. Or the projection matrix can be something super custom, like a circular projection. The projection may also be asymmetric, meaning it’ll have different FOVs at different parts of the output.

The trick to solving this is to take the inverse of your projection matrix and multiply these points by it: (-1, -1, -1, 1), (1, -1, -1, 1), (1, 1, -1, 1), (-1, 1, -1, 1). These are the 4 corners of your near plane. Once you multiply these points by the inverse of the projection matrix, divide XYZ by W, and you’ll have points in camera space. Visually, these make the base of a 4-sided pyramid, with the top of the pyramid being the camera at (0, 0, 0). This is your view frustum. Assuming the view frustum is symmetrical, you can use trigonometry to get the FOV from this.
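In case it’s useful later in the thread, here is a rough numpy sketch of that procedure. The function name and the assumption of a symmetric frustum are mine; proj is whatever 4x4 projection matrix you have, as a numpy array:

import math
import numpy as np

def fov_from_projection(proj):
    # invert the projection so we can go from the unit cube back to camera space
    inv = np.linalg.inv(proj)

    # the 4 corners of the near plane in the unit cube (z = -1)
    corners = np.array([[-1, -1, -1, 1],
                        [ 1, -1, -1, 1],
                        [ 1,  1, -1, 1],
                        [-1,  1, -1, 1]], dtype=float)

    cam = []
    for c in corners:
        p = inv @ c                # back toward camera space (W is no longer 1)
        cam.append(p[:3] / p[3])   # divide XYZ by W -> camera-space point
    cam = np.array(cam)

    # base of the pyramid: near-plane distance and half extents
    near = -cam[0][2]              # camera looks down -Z
    half_w = abs(cam[1][0])
    half_h = abs(cam[2][1])

    # trig: full angles subtended horizontally and vertically
    fov_x = math.degrees(2 * math.atan2(half_w, near))
    fov_y = math.degrees(2 * math.atan2(half_h, near))
    return fov_x, fov_y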

thanks for the reply malcolm- this is still pretty advanced stuff for me! i’ve been reading this -

opengl-tutorial.org/beginner … -matrices/

which apparently i am supposed to read eight times. i’m not quite finished with my first. as far as i can tell, the 4x4 projection matrix encodes the fov, near plane, far plane, and aspect ratio.

this problem i’m working on is a single fov perspective projection matrix- so i think your suggestion will work.

i’m attaching a sample tox. i’ve been able to figure out the first few steps you recommended- i can get from the original matrix back to the values for the near plane. now to get points back into camera space- i’m not sure how to proceed. when you say divide xyz by w- could you give an example of a way to visualize this? i’m pretty sure i can get the fov from the resulting frustum.

thanks!

g
fov_extraction_from_matrix.tox (854 Bytes)

The way I like to visualize what a projection matrix does is imagine a 4 sided pyramid with its top cut off. In a typical perspective projection matrix the smaller top face is the near plane and the larger base is the far plane. Now imagine it being full of gel and you place things inside of it (your geometry). This is ‘camera space’. Now you take the pyramid and you squish it so it becomes a perfect cube with corners (-1, -1, -1) to (1, 1, 1). Everything you’ve put in the gel moves relative to everything else, so things that are closer to the far plane are squished farther together than things close to the near plane. This is why things farther away from you converge to the center of the image, just like railroad tracks converge as they get farther away when you look down them.
So multiplying a point in camera space by the projection matrix and dividing by W will place an item into that unit-cube. Conversely, taking a point inside that unit-cube and multiplying by the inverse projection matrix and dividing by W will put it back into camera space (the cut off pyramid).

In linear algebra a point is defined by (X, Y, Z, 1), and a vector is defined by (X, Y, Z, 0). When you take a point and multiply it by the projection or inverse projection you’ll end up with something that is no longer (X, Y, Z, 1); the W coordinate will be something else. But for this to be a point it needs to have 1 in the W coordinate, so you divide all the components by W: (X/W, Y/W, Z/W, W/W) = (X/W, Y/W, Z/W, 1). Now it’s a point inside the unit-cube or inside the frustum pyramid.
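A small numpy example of that round trip, using a made-up symmetric perspective matrix (the builder below follows the standard OpenGL layout; none of these names come from TD):

import math
import numpy as np

def simple_perspective(fov_y_deg, aspect, near, far):
    # standard symmetric OpenGL-style perspective matrix, for illustration only
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return np.array([[f / aspect, 0,  0,                           0],
                     [0,          f,  0,                           0],
                     [0,          0,  (far + near) / (near - far), (2 * far * near) / (near - far)],
                     [0,          0, -1,                           0]])

proj = simple_perspective(45.0, 16 / 9, 0.1, 1000.0)

# camera space -> unit cube: multiply by the projection, then divide by W
p_cam = np.array([0.3, -0.2, -5.0, 1.0])   # a point, so W starts at 1
clip = proj @ p_cam                        # W is no longer 1 here
p_ndc = clip / clip[3]                     # (X/W, Y/W, Z/W, 1)

# unit cube -> camera space: multiply by the inverse, then divide by W again
back = np.linalg.inv(proj) @ p_ndc
back = back / back[3]
print(np.allclose(back, p_cam))            # True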

That original pyramid with the top cut off can be any shape at all, which is why a projection doesn’t necessarily have an FOV. It could be a long box (which is what orthographic is), or it could be something with the near plane much larger than the far plane. Or something asymmetrical, or even something that wraps entirely around the viewer.


ouch. matrix math is hard.

i have a much better understanding of the generalities of matrices and how they apply to calculating object, world, camera, and projection space transformations. but the specifics are still pretty hardcore for me.

now i am trying to create the inverse of my 4x4 projection matrix. the way i thought i should do it earlier is mega incorrect. i think i know the right way to do it now- based on using this as a guide.

ncalculators.com/matrix/4x4-inve … ulator.htm

do you think that this calculator and the formula illustrated there are an accurate guide to the math i need to build my own routine?

to create a projection matrix in the first place in td- it seems that what is required is a frustum scale, a near plane, and a far plane.

is the frustum scale always 1 (a square)? or is it somehow linked to or dependent on the render window aspect ratio? i’ve seen some differing information on this.

is there any more information on the specific formula td uses for a projection matrix? is it the same as opengl?

yikes-

the online inverse matrix calculator i posted a link to gave me crazy bad results.

this one worked

jimmysie.com/maths/matrixinv.php#a=mf&s=4

but it rounds the float inputs to something like 4 places. still, multiplying my original matrix by the result gave me something very close to the identity matrix, so i’m pretty sure it’s accurate.

Python’s numpy can do matrix inverses as well, if you are doing this inside of TD.

The width of the view frustum is linked to the aspect ratio of the render, otherwise a circle wouldn’t stay circular in the output render; it would be stretched instead.
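For example, a minimal numpy check you could run in a textport. The matrix values here are just a hand-rounded perspective matrix (roughly a 45-degree vertical fov, 16:9, near 0.1, far 1000), not pulled from anywhere in TD:

import numpy as np

proj = np.array([[1.358, 0.0,    0.0,     0.0],
                 [0.0,   2.414,  0.0,     0.0],
                 [0.0,   0.0,   -1.0002, -0.2],
                 [0.0,   0.0,   -1.0,     0.0]])

inv = np.linalg.inv(proj)

# the matrix times its inverse should come back as (very nearly) the identity
print(np.allclose(proj @ inv, np.eye(4)))   # True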

i’m attaching a related test. attempting to use the texture sop to projection map a render onto geometry, i get similar results to my experiments with projection mapping from a light. when the camera that has rendered the scene is using a projection matrix, it doesn’t seem to work.

here’s another theoretical approach i’m working on in order to simplify the problem. apply the same projection matrix to both a camera and a light. then use the resulting render from the camera as a projection map for the light. at some point, simply by moving the light’s projangle slider back and forth, shouldn’t you land on a value that will align the rendered image back to the object? (regardless of whether the proper math is arrived at, you should get an fov value that works, right?)

when i try this, i never get true alignment. the projected image map seems to have the proper aspect ratio, which is cool, but it doesn’t line up with the underlying geometry. i also need to change the light’s ‘projector extend’ parameter to mirror or repeat to see the map at the proper scale on the surface of the geometry.

i can post a trial of this if you’d like to take a look
texture_sop_projection_matrix.tox (2.37 KB)

I’m not sure how you are generating your projection matrix, but a projection matrix can be anything. It doesn’t necessarily have an FOV (such as an orthographic projection). So whether or not there is an FOV that will map onto it perfectly depends on how it was created. The aspect ratio of the projection matters as well.
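To make the orthographic case concrete, here’s a sketch of a standard OpenGL-style orthographic matrix (not TD code). If you unproject a near-plane corner and the matching far-plane corner, they land at the same X and Y, so the frustum is a box rather than a pyramid and there’s no angle to recover:

import numpy as np

def orthographic(l, r, b, t, n, f):
    # standard OpenGL-style orthographic projection, for illustration only
    return np.array([[2 / (r - l), 0,           0,           -(r + l) / (r - l)],
                     [0,           2 / (t - b), 0,           -(t + b) / (t - b)],
                     [0,           0,          -2 / (f - n), -(f + n) / (f - n)],
                     [0,           0,           0,            1]])

inv = np.linalg.inv(orthographic(-1, 1, -1, 1, 0.1, 1000))

near = inv @ np.array([1.0, 1.0, -1.0, 1.0])   # a near-plane corner
far  = inv @ np.array([1.0, 1.0,  1.0, 1.0])   # the matching far-plane corner
print(near[:2] / near[3], far[:2] / far[3])    # identical X and Y: no FOV here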

ok, so here’s another theoretical question.

how can i create my own simple projection matrix- from basic default td camera comp parameters?

i create a new camera comp in td, without changing any parameters, just using the defaults. the projection parameter is left at the default (‘projection’, not ortho or blend or custom). i have an fov of 45 degrees, a near of .1, a far of 1000, and for the sake of this example, i have a render top set to 1280 x 720.

having these camera settings, is there a formula i can use in TD to create a projection matrix that i can directly plug into another camera? this new camera (using the resulting projection matrix) should give me an exact duplicate of the scene rendered from the default camera, rendered into another 1280 x 720 render top, right?

Here is the code we use to make a projection matrix. Yes this should match up with a Camera COMP that is just using parameters and no custom projection.

void
perspectiveFrustum(float l, float r, float b, float t, float n, float f)
{
    // Make the matrix all zeros
    initialize();

    mat(0, 0) = (2*n)/(r-l);
    mat(1, 1) = (2*n)/(t-b);
    mat(2, 2) = (f+n)/(n-f);

    mat(0, 2) = (r+l)/(r-l);
    mat(1, 2) = (t+b)/(t-b);
    mat(2, 3) = (2*f*n)/(n-f);
    mat(3, 2) = -1.0f;
}

void
perspectiveRadX(const double fovx, const double aspect,
                const double zNear, const double zFar)
{
    // A zero FOV isn't a valid frustum, so fall back to the identity matrix
    if (fovx == 0.0f)
    {
        identity();
        return;
    }

    float xmin, xmax, ymin, ymax;

    // Horizontal extent of the near plane from the horizontal FOV (in radians)
    xmax = zNear * tan(fovx / 2.0f);
    xmin = -xmax;

    // Vertical extent follows from the aspect ratio (width / height)
    ymin = xmin / aspect;
    ymax = -ymin;

    perspectiveFrustum(xmin, xmax, ymin, ymax, zNear, zFar);
}
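If it helps to poke at that in python, here is a rough numpy translation of those two functions, filled in with the default camera values mentioned above (45-degree fov, near 0.1, far 1000, 1280x720). The function name, and the assumption that the 45 degrees goes in as the horizontal angle, are mine:

import math
import numpy as np

def perspective_rad_x(fovx, aspect, z_near, z_far):
    # numpy version of perspectiveRadX / perspectiveFrustum above (fovx in radians)
    xmax = z_near * math.tan(fovx / 2.0)
    xmin = -xmax
    ymin = xmin / aspect
    ymax = -ymin

    l, r, b, t, n, f = xmin, xmax, ymin, ymax, z_near, z_far
    m = np.zeros((4, 4))
    m[0, 0] = (2 * n) / (r - l)
    m[1, 1] = (2 * n) / (t - b)
    m[2, 2] = (f + n) / (n - f)
    m[0, 2] = (r + l) / (r - l)
    m[1, 2] = (t + b) / (t - b)
    m[2, 3] = (2 * f * n) / (n - f)
    m[3, 2] = -1.0
    return m

proj = perspective_rad_x(math.radians(45.0), 1280 / 720, 0.1, 1000.0)

# sanity check: pull the horizontal fov back out of the matrix
print(math.degrees(2 * math.atan(1.0 / proj[0, 0])))   # 45.0 (within float error)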

this is awesome.

once you make a projection matrix you can retrieve stuff like the fov.

import math

focalX = fx                # focal length, in the same units as the output width (e.g. pixels)
width = outputResolution   # output width
fov = math.degrees(2 * math.atan(width / (2 * focalX)))
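and if you’re starting from the matrix itself rather than a focal length, the same kind of one-liner works off the diagonal, assuming a symmetric perspective matrix laid out like the code malcolm posted (the helper name and the example values here are just for illustration):

import math

def fov_from_symmetric_projection(m00, m11):
    # for that layout, m00 = 1 / tan(fov_x / 2) and m11 = 1 / tan(fov_y / 2)
    return (math.degrees(2 * math.atan(1.0 / m00)),
            math.degrees(2 * math.atan(1.0 / m11)))

print(fov_from_symmetric_projection(2.4142, 2.4142))   # (~45.0, ~45.0) for a square aspect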

thanks again malcolm!