NVIDIA SMP

Cool. It looks like I’ll be buying a 1070 too :smiley:

So far the ‘Stereo’ optimization has been released in build 4500. The more general multi-projection will be coming in the next build.

Awesome!

This is brilliant for dome / 360 projections. Very curious about the real-life impact when using 16 different cameras. Thanks so much for integrating this so fast Malcolm, it’s immediately super-helpful for our current project.
Also, that shaders will now have access to info about the current camera is fantastic; it will let me easily create a stereoscopic 360 GLSL MAT!
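
Roughly what I have in mind for the per-eye offset (an untested sketch; I’m assuming the camera index is exposed to the vertex shader as something like TDCameraIndex(), and uEyeSeparation is just a custom uniform I’d add myself):

[code]
// Vertex shader sketch: shift the eye position left/right per camera
// so one multi-camera render produces a stereoscopic pair.
uniform float uEyeSeparation;   // my own custom uniform, e.g. 0.064

void main()
{
    // TDDeform() is the standard deform call from the GLSL MAT template.
    vec4 worldPos = TDDeform(P);

    // Assumption: camera 0 is the left eye, camera 1 the right eye.
    float eye = (TDCameraIndex() == 0) ? -0.5 : 0.5;
    worldPos.x += eye * uEyeSeparation;

    gl_Position = TDWorldToProj(worldPos.xyz);
}
[/code]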

Hey Malcolm

The Render TOP wiki says multi-camera is only supported for 2D and cubemaps. Is 2D a typo? If cubemaps work, it seems that 3D should already work.

Is it technically possible to use multi-camera with a variable render resolution, i.e. specify a resolution per camera and still get all the multi-camera benefits? The use case is VR plus render picking. Ideally we could use a single Render TOP for the 2 VR views and for all the controller views (but at a lower resolution).

Hey Achim,
When it says 2D, it means the output is 2D as opposed to a cubemap. Any render is always 3D.

The Render Pick nodes already render at a tiny resolution, just the Pick Size x Pick Size. So by default the render is 1x1. The resolution of the Render TOP doesn’t matter, only the aspect ratio (since that’s how the UVs will be mapped into the scene).
The idea with the multi-camera rendering is exactly for VR cases. You can now have your VR Render TOP which will be doing 2 eyes in one pass. And then a Render Pick DAT which will be doing all the picking in one pass also, for example from 2 controllers and the head position all at the same time.

So that is, all in all, 2 passes: one for both eyes and one for all the controllers/picking.

I was wondering if all that could happen in one single pass?

And in addition, maybe also render a cubemap (which will need a different resolution than the VR renders) in the same single pass?

Correct, it’s all in 2 passes right now.

It’s possible that other extensions would allow efficiently rendering a cubemap at the same time as the main render, in the same pass. Picking I don’t think so, since picking is done with a different shader than a regular render.

I got my 099 license and Pascal card :smiley:. Can I still use TDWorldToProj in a geometry shader? On 5580 I see Geometry Shader Compile Results: 0(73) : error C1008: undefined variable "TDWorldToProj"

Not currently, I’m considering how automatic I want geometry shaders to be, since they allow lots of custom work to be done. Let me think about that a bit more.

AMD just released their equivalent features:
[url]http://www.roadtovr.com/amd-radeon-crimson-relive-adds-asynchronous-space-warp-latest-radeon-software-update/?utm_source=Road+to+VR+Daily+News+Roundup&utm_campaign=6635ac90a5-RtoVR_RSS_Daily_Newsletter&utm_medium=email&utm_term=0_e2e394ad33-6635ac90a5-161841085[/url]
So with some extra effort, maybe there could be cross-platform support.

Bruce

It’s actually been in AMD drivers for a while; they’ve just added some branding for it, it seems. We already support it on AMD as well, it’s the same GL feature.

Seems like there’s a driver bug right now that stops it from working, though a fix may be coming soon…

Any news on this, Malcolm?

Looks like the recently fixed bug didn’t cover our use case, so it’s still an open issue. For now I’ve disabled the feature on AMD cards; it’ll just render the cameras one pass at a time as it did before. AMD is working on it.

How do you set these X-offsets exactly? Is there a chance you could post a basic, working example, please? I’m investigating whether this could be an efficiency gain for rendering a projection-mapping scene, which is to say, several cameras (projectors) looking at the same scene. I’m probably misunderstanding it entirely, but with the way the documentation is, I think I can be forgiven.

You set them in the cameras with your transforms/projections like normal. When X-Offset mode is enabled, the GPU will throw away all the differences between your first and your other cameras, except for any X-offset contained in the transform/projection. FOV etc. will all come from the first camera. For example, two cameras that are identical except for their X translation (say -0.032 and +0.032 for a stereo pair) qualify; cameras with different FOVs or rotations do not.
If you need anything that doesn’t fall into this very constrained case, leave the parameter at ‘Automatic’ and it’ll use a different SMP mode if the GPU supports it.

I’ll expand the help a bit.

Recently I was working on a VR project and found that if I enable Multi-Camera Rendering, some objects that use a geometry shader do not render correctly.

(See the attached screenshots, which use the simplest example.
TD: Windows build 2017.16360, GPU: GTX 1070)

My guess is that this is because TDWorldToProj() happens in the vertex shader, which then sends the gl_Position of the two views on to the geometry shader, and the fragment shader ends up drawing them into the same view (or I’ve just missed something).

(I’ve tried putting TDWorldToProj() in the geometry shader, but it’s not defined there, and I guess that wouldn’t be the right way anyway.)

I want to ask whether there is a custom solution for this in the current version of the TD API (perhaps an unexposed variable)?

If it can’t be solved, I can only give up this part of the performance in this version of the project.


geo_mu_cam.toe (8.92 KB)

Thanks for the post. I’ve been trying to give geometry shaders some more functionality lately due to their uptick in usage (and after seeing the great things done with them at the TouchDesigner Summit). In the next 2018.20000 build we post, TDWorldToProj(vec3, int cameraIndex) will be available in the geometry shader. You should use that function there, instead of in the vertex shader, to properly support multi-camera rendering.
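
Something along these lines (a minimal sketch; the ioWorldPos/ioCamIndex varying names are just examples, pass down whatever your shader needs from the vertex stage):

[code]
// Geometry shader sketch: project per camera here, rather than in the
// vertex shader, so each view of the multi-camera render gets the
// correct matrix.
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 ioWorldPos[];      // world-space position from the vertex shader
flat in int ioCamIndex[];  // camera index passed down as a flat varying

void main()
{
    for (int i = 0; i < 3; i++)
    {
        gl_Position = TDWorldToProj(ioWorldPos[i], ioCamIndex[i]);
        EmitVertex();
    }
    EndPrimitive();
}
[/code]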

Glad to hear this!

Thanks, and looking forward to it :slight_smile: