Texturing 3D Geometry for Projection Mapping

I am trying to figure out the best method for texturing 3D geometry in TD. I have created a 3D model and imported it as an .fbx with a UV map projected from the camera's perspective. Then I use a Phong shader to apply the rendered animation video as a material on the geometry. The issue is that wherever one mesh occludes another from the camera, the rear mesh gets textured with the front mesh's texture.

Is there a better way to get the fully rendered textures mapped on to the 3D geometry?
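A quick illustration of why this happens (a minimal sketch in plain Python, not TD code - the function and matrix are just for demonstration): a camera-projected UV is simply a point's position in camera clip space, so every point along one camera ray gets the same UV.

[code]import numpy as np

def camera_uv(point, mvp):
    """Project a world-space point through a 4x4 model-view-projection
    matrix and remap clip coords from [-1, 1] to the UV range [0, 1]."""
    p = mvp @ np.append(point, 1.0)
    ndc = p[:3] / p[3]            # perspective divide
    return (ndc[:2] + 1.0) * 0.5

# Two points on the same camera ray (one in front, one occluded behind it)
# project to identical UVs, so the rear mesh samples the exact same pixels
# as the mesh in front of it.[/code]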

I’m going through this myself. I’m using Houdini Apprentice at the moment and trying to figure out the best way to bring in the UV Texture. I’m thinking of exporting geometry, camera and desired perspective transforms to Touch. Also, saving out the UV Texture from the position I’m interested in. I’ll share my findings.

Therein lies the rub of doing content rendered from a camera that will not be the same as the projector position!

I’ve done a number of tricks to try and “fudge” those areas, like edge bleeding the UV coordinates down into the overlapped areas so they at least don’t look like copies of what is in front of them, but the fact is unless you create different content for those areas (perhaps by rendering a couple passes at different slices of your scene?) you will always have to fudge it somehow.

Another technique is to develop some content that uses surface-based effects, rendered from the projector's virtual eye at the end of your mapping pipeline, that can be composited with your rendered mapped content to smooth over the fact that you don't have content for those areas…

It's a pain if you're trying to do perspective effects and you have overlaps from your viewpoint. I find it best to try and minimize those areas if you don't have the time/money/labor to create a more complex setup that gives you content for them.

Also, there is a texture-from-camera option on the Texture SOP in Touch, so you can do a lot of testing there in realtime to see how your overlaps are doing…

So I have identified two workflows:

3D Lighted Building Effects (Only works for Blender Render from what I have found)

  1. Model Building Geometry in Blender
  2. Texture in Blender
  3. Animate Lighting Effects
  4. UV Unwrap Model using “LightMap”
  5. Save new image in UV Editor
  6. Activate “Animated Baker Render” Addon
  7. Select Animated Bake from Bake Menu
  8. Export .obj with UV Map from Blender
  9. Sequence Animated Bake images to Movie file in Blender
  10. Import .obj geometry in TD
  11. Import Animated Bake Movie in TD
  12. Add Phong Shader
  13. Connect Movie In to Phong Shader as Color Map
  14. Connect Phong Shader to Geometry as Parm: Mat (steps 10-14 are sketched in Python after this list)
  15. Add Cameras and position at Projector Positions
  16. Use Camschnappr & VertPusher to align projectors with real-world Geometry
  17. Render Out
    *The advantage is that the building will be 100% properly lit from any angle in TD, regardless of viewer or projector perspective! But this won't work if you have 3D effects & animations, because those illusions only hold from a single perspective; you are still projecting onto 2D faces.
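For reference, here is a minimal Python sketch of steps 10-14 (a rough sketch only - the operator and parameter names are assumptions and may differ in your network):

[code]# Build the network from steps 10-14 (all names are illustrative):
root = op('/project1')

movie = root.create(moviefileinTOP, 'bake_movie')
movie.par.file = 'animated_bake.mov'   # the baked movie from Blender

phong = root.create(phongMAT, 'bake_mat')
phong.par.colormap = movie.path        # step 13: movie as Color Map

geo = root.create(geometryCOMP, 'building')
# (load the exported .obj inside 'building', e.g. with a File In SOP)
geo.par.material = phong.path          # step 14: assign the MAT[/code]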

Perspective Effects with additional 3D Geometry

  1. Model Building & Effects Geometry in 3D Software
  2. Texture in 3D Software
  3. Animate Effects
  4. Add camera at viewer's Perspective
  5. UV Unwrap Model from Projected Perspective
  6. Render Movie from Projected Perspective
  7. Export .obj with UV Map from 3D Software
  8. Import .obj geometry in TD
  9. Import Perspective Movie in TD
  10. Add Phong Shader
  11. Connect Movie In to Phong Shader as Color Map
  12. Connect Phong Shader to Geometry as Parm: Mat
  13. Add Cameras and position at Projector Positions (see the FOV sketch after this list)
  14. Use Camschnappr & VertPusher to align projectors with real-world Geometry
  15. Render Out
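To place the virtual cameras at the projector positions (step 13), the field of view can be derived from each projector's throw ratio. A small sketch of the math (plain Python, not TD-specific; check whether your camera expects horizontal or vertical FOV):

[code]import math

def fov_from_throw(throw_ratio, aspect=16 / 9):
    """Return (horizontal, vertical) FOV in degrees for a projector,
    where throw_ratio = throw distance / projected image width."""
    h = 2 * math.atan(1 / (2 * throw_ratio))     # horizontal FOV (radians)
    v = 2 * math.atan(math.tan(h / 2) / aspect)  # vertical FOV (radians)
    return math.degrees(h), math.degrees(v)

# e.g. a 1.2:1 projector:
print(fov_from_throw(1.2))   # roughly (45.2, 26.4) degrees[/code]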

Any additions to these workflows?

Hey Pochflyboy,

Thanks for the insight.

So what I gather is you are modelling, texturing and animating in another package. The rendered movie from the Perspective View will be projected back onto the model from the Projector View? A little confusing because you say to render the movie from the Projector View in the second workflow.

I was hoping to use UV maps taken from Houdini's UV Unwrap or UV Pelt SOPs. An unwrapped UV map has all the polygons flattened out, whereas a UV Projected map projects UVs along a single axis. While the projection method will work, it seems like I would have to make UV Projected maps for each group of differently facing polygons, something the UV Unwrap or UV Pelt maps would take care of.

Update:
I was able to use UV Unwrap and UV Edit in Houdini Apprentice. RMB on a node to Save Geometry to an .obj file. Drag this file onto TouchDesigner. Add a Texture SOP to the model and select Modify Source to use its own UV coordinates. So far, this is working well as a solution.
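For anyone scripting that Houdini step, the same export can be driven from the Python shell. A rough sketch (the node paths and the 'uvunwrap' type name are assumptions - adjust for your scene):

[code]import hou

geo = hou.node('/obj/building')   # your geo network
src = geo.node('file1')           # the node holding your model

# append a UV Unwrap SOP ('uvunwrap' is assumed to be its internal name)
unwrap = geo.createNode('uvunwrap')
unwrap.setFirstInput(src)

# save the unwrapped geometry to an .obj for TouchDesigner
# (same as RMB > Save > Geometry on the node)
unwrap.geometry().saveToFile('/tmp/building_unwrapped.obj')[/code]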

What I do is create a frontal orthographic UV map in my 3D program (Cinema 4D in my case) and render all my content based on that "template". For example, in this piece - [url]https://vimeo.com/114833319[/url] - things like outlining, lighting effects and construction/destruction are more or less fitted to this template, which takes on a 1:1 aspect ratio. Lighting effects and 3D transformations are rendered from an orthographic camera view in the 3D program and stretched to fit the unwrapped UV map. Then in Touch, after I have calibrated the geometry using Camschnappr, I hook it up to a Texture SOP with the orthographic view. I think it works well with most geometries and camera positions if your projector is not, like, totally off axis. If you're on Cinema, this tutorial was really helpful for me on how to export a UV map:
[url]https://vimeo.com/63255097[/url]. Hope that helps.

Knowing how to create a good UV map is fundamental for projection mapping on structures with a 3D mesh.

If you are making a projection on a building, stage, or other structure, creating a clean UV will help everybody; the people working in After Effects will have your "pixel map" as a reference. So make it flat - a front projection is a good start.

And you will prefer to use a Constant MAT (no lighting applied = same luminance as your 3D render).
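As a quick illustration in Python (a sketch only - the operator and parameter names are assumptions):

[code]# Swap a Phong MAT for a Constant MAT so scene lights can't change
# the luminance of the pre-rendered content (names are illustrative):
const = op('/project1').create(constantMAT, 'flat_mat')
const.par.colormap = op('movie1').path   # the pre-rendered movie TOP
op('geo1').par.material = const.path[/code]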

No one seems to be mentioning it, but you can make your 'best' UV map, import that with your model, do what you need in Touch, and then render using that UV map.

You end up with a TOP that’s the same as you would get from a baked render output from Blender etc. - it’s pretty awesome. You can look at the model all around and it will be lit and textured perfectly (as perfectly as your map allows).

Let me see if I can put together an example file for it. Is there a touch standard model that has a nice UV map?

Bruce

Hi Bruce, so you mean that you could do texture baking inside TD? It could be a great thing for me! Please tell me how to set up my render network and which operators to use!

thank you in advance

Yes, that’s exactly what I’m doing. It’s a hack, but it works really well.

I need to make an example that shows it. Remind me if I don’t get to it in the next week or so?

Bruce

mmm, i love hacks!
if you want I will remind you every single day of the week!

thank you guy!

Stefano

Bruce, I’d love to see your method in action, too.

Working on a demo for it.

Any good news from texture UV baking? I’m pretty curious!

Oops sorry, this dropped off my radar. Back on my plans to do a tutorial.

bruce

Since I keep forgetting, here are the basic steps.

Bring in your model with UV coords. This seems to be trivial with an FBX; I presume it works with other types too. The UV map represents how the pixels need to be laid out - baked.

Pass your model through a Script SOP with the following script. This 'stores' the UV map away as a vertex attribute, for use later:

[code]def cook(scriptOP):
	scriptOP.clear()

	# copy the input geometry
	a = scriptOP.inputs[0]
	scriptOP.copy(a)

	# create a new 2-component vertex attribute and copy the original
	# UVs into it, so later texture edits can't destroy them
	n = scriptOP.vertexAttribs.create('uvMap', (0.0, 0.0))

	for p in scriptOP.prims:
		for v in p:
			v.uvMap[0] = v.uv[0]
			v.uvMap[1] = v.uv[1]
	return[/code]

Now you can process as normal, including changing texture co-ordinates as needed to make the images you need.
Now - in your final render, you will need to use your UV map as the pixel position. Make a Phong MAT with what you need, and use it to generate a GLSL MAT. In your vertex shader, add at the top:

[code]// Use the custom attribute we added to save orig coords
in vec3 uvMap;[/code]

and then at the very end:

[code]// Leave the vVert stuff, but change position!
gl_Position = vec4((uvMap.st * 2.0) - 1.0, 0.0, 1.0);[/code]

Now the vertex will be output at its UV-map position, not at the rasterized position. The trick is that all the other information your pixel shader needs to light the pixel etc. is in the vVert structure, so you get a correctly shaded model baked out to a map.
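One usage note on the final bake render (the operator name and resolution values here are assumptions): since gl_Position now comes from uvMap, the camera transform no longer affects where vertices land - only the output resolution matters.

[code]# Hypothetical final bake render (names/values are illustrative):
render = op('render_bake')      # Render TOP using the edited GLSL MAT
render.par.resolutionw = 2048   # match the resolution you want
render.par.resolutionh = 2048   # for the baked texture map
# (uvMap.st * 2.0) - 1.0 remaps UVs in [0,1] to clip space [-1,1],
# so each vertex rasterizes at its position in the UV layout.[/code]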

Voila! I will still try to make a tutorial, with the whole process. Hope this is better than nothing. I was very pleased with myself when I worked it out :smiley:

sounds great mate. An example would be amazing. Cred for nutting it out.

Great!
I’ve tried unsuccessfully…
I'll wait for a practical example!

thank you!

As promised oh so long ago. Happy to answer questions on that thread.

viewtopic.php?f=20&t=9871