Efficiently combining geometry with particles?

I have a particle system with thousands of particles that I’m sorting by depth before rendering with point sprites. I also have a plane with an image on it (and transparency in its texture). Some of the particles should appear behind the plane and others in front of it, depending on their depth. Enabling order-independent transparency incurs too much of a performance hit. Is there some clever way of doing this that I’m not thinking of? Perhaps some way to mask the renders depending on their depth and the imagery on the texture?
Thanks,
Michael

Okay, so now I’ve almost got it working without having to turn on order-independent transparency… I’m still doing something wrong, though. I made sure transparency was turned on in the shaders for both my particles and the plane, and I also set them both to write depth values. Now the particles appear correctly behind and in front of the plane. However, the particles appear as “invisible squares” whenever they pass in front of the plane (the area behind the point sprite disappears, even though the texture on the point sprite is circular and has alpha transparency). If I turn on “Alpha to Coverage,” everything works great, except it makes my sprites look funny (I don’t really know what “Alpha to Coverage” does). Any advice?
Thanks,
Michael

You could split the particle system into two SOPs using two Delete SOPs: all the particles behind the plane and all the particles in front of the plane. Then sort the two systems and render them in farthest-to-closest order (behind plane, plane, in front of plane).

I think I’m just doing something stupid… I’ve attached an example… If you go to the shader pointsprite1 and turn on “discard pixels based on alpha,” it gives you an idea as to what the scene should sort of look like (obviously, I don’t actually want to use discard pixels based on alpha).

Thanks,
Michael
depthproblem.3.toe (20.9 KB)

Maybe I’m totally off base… haven’t been able to make this work. I’ll try your method.

Well, actually, I’m not sure I can use your method, because I also want to have planes move around the scene and correctly pass in front of and behind the particles. The planes also have transparency in them.

You can dynamically delete points based on the current position of these planes, either using the Clip SOP or the bounding options of the Group/Delete SOPs.

Turns out the Delete SOP doesn’t work correctly when deleting points from a particle system (and the Clip SOP only works on primitives).

Here’s another way to do it. I’m using an expression in the Point SOP to set the pscale attribute (which controls the point size) to 0 or 1 depending on whether the point is in front of or behind the plane. A point scale of 0 causes the particle to disappear. Then I render the particle system twice: one set has all the points behind the plane with a point scale of 1 and all the points in front with a point scale of 0; the other has the opposite. I render the points behind first, then the plane, then the points in front.

This could also be done in a shader for speed. In fact, I’ll look into adding some clipping abilities to the Point Sprite MAT.
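If you want to try the shader route yourself, here’s a minimal GLSL vertex-shader sketch of the same idea (not the actual Point Sprite MAT code). The uniform names planeZ and frontPass are assumptions for the example; the point is just that collapsing a sprite to a point size of 0 hides it, exactly like the pscale trick in the SOP.

```glsl
// Hypothetical sketch: hide point sprites on the wrong side of a plane
// by forcing their point size to 0. Uniform names are made up for the
// example; wire them up however your MAT exposes custom uniforms.
uniform float planeZ;     // camera-space Z of the plane (assumption)
uniform float frontPass;  // 1.0 = render points in front, 0.0 = points behind

void main()
{
    // Transform the particle into camera (eye) space.
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;

    // Is this particle in front of the plane (closer to the camera)?
    float inFront = (eyePos.z > planeZ) ? 1.0 : 0.0;

    // Keep the point only when it belongs to the current pass,
    // otherwise collapse it to a size of 0 so it disappears.
    float keep = (inFront == frontPass) ? 1.0 : 0.0;
    gl_PointSize = keep * 8.0;  // 8.0 stands in for your normal point size

    gl_Position = gl_ProjectionMatrix * eyePos;
}
```

Rendering the system twice with frontPass set to 1 and then 0 gives the same two passes as above, without touching the SOPs.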
depthproblem.4.toe (22 KB)

Thinking about this more, you may actually get the best performance if you split the particles into two DATs and build your separate particle systems right from those instead. Sort by the P(2) column, then use a Select DAT to select the two sections of the table. You’d also avoid the Sort SOP that way.

What about something like this? It seems relatively efficient… It renders the scene twice, once with the particles and once without, then creates a set of masks using a Depth TOP and two Chroma Keys, and then uses Inside TOPs to separate the scene… I feel like this could be optimized somehow. Also, I really want to tie the chroma keys to the depth of the plane, so that when it moves the masks are updated, but I haven’t been able to figure out how to do this mathematically. I think what I really need is for the Depth TOP to output global coordinates and then do some sort of > and < operation on the image (do these exist?).
Thanks,
Michael
depthproblem.4.toe (21.2 KB)

Yup this works too. The equation to map a camera space Z to the values in a 24-bit fixed point depth TOP is

depth = (far + near) / (far - near) + (1 / CamDepth) * (-2 * far * near) / (far - near)

This will give you a value between -1 and 1, which needs to be remapped to 0–1 (divide by 2 and add 0.5 to do that).
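Written out as a small GLSL helper, the mapping looks like this (a sketch, assuming CamDepth is the positive camera-space distance and near/far match the camera’s near and far planes). Feeding the plane’s camera-space depth through it gives the key value that tracks the plane as it moves.

```glsl
// Map a camera-space depth (positive distance from the camera) to the
// 0-1 value stored in a 24-bit fixed point depth buffer / Depth TOP.
// 'near' and 'far' are assumed to match the camera's near/far planes.
float depthBufferValue(float camDepth, float near, float far)
{
    // NDC depth in the -1..1 range.
    float ndc = (far + near) / (far - near)
              + (1.0 / camDepth) * (-2.0 * far * near) / (far - near);

    // Remap -1..1 to 0..1 (divide by 2, add 0.5).
    return ndc * 0.5 + 0.5;
}
```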

Chroma Key is a little expensive for this operation since it needs to do an RGB->HSV conversion, which is slow. Try using RGB Key or Threshold instead.

Also, use a Render Pass TOP for your 2nd pass to save memory (and increase speed).

You can also output camera-space depth into the Depth TOP if you change the Depth Buffer to 32-bit floating point (on the Advanced page of the Render TOP) and select the linear camera space depth option. Since your depth values are now 0 to far-plane instead of 0 to 1, Chroma Key won’t even work now, so the Threshold TOP is a better choice here too.

And if you’re up for it, the best performance can be reached if you do this in a GLSL TOP that does the comparisons and combination all in one go.
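For what it’s worth, here is a rough sketch of what that GLSL TOP’s pixel shader could look like. The sampler and uniform names (sceneNoParticles, particles, particleDepth, planeDepth) are placeholders for this example, not TouchDesigner defaults; it assumes a linear camera-space Depth TOP from the particle pass, premultiplied alpha, and that the default vertex shader passes UVs in gl_TexCoord[0].

```glsl
// Rough sketch of a one-pass combine in a GLSL TOP (fragment shader).
// Sampler and uniform names are placeholders; bind them to your
// actual TOP inputs and parameters.
uniform sampler2D sceneNoParticles; // render of plane/scene, no particles
uniform sampler2D particles;        // particles rendered on their own (RGBA)
uniform sampler2D particleDepth;    // linear camera-space depth of the particles
uniform float planeDepth;           // camera-space depth of the plane

void main()
{
    // Assuming the default vertex shader provides UVs here; adjust to
    // however your GLSL TOP exposes texture coordinates.
    vec2 uv = gl_TexCoord[0].st;

    vec4 scene = texture2D(sceneNoParticles, uv);
    vec4 spark = texture2D(particles, uv);
    float depth = texture2D(particleDepth, uv).r;

    // Premultiplied-alpha "over": particles nearer than the plane go on
    // top of it, particles farther than the plane go underneath it.
    vec4 result;
    if (depth < planeDepth)
        result = spark + scene * (1.0 - spark.a);   // particle over plane
    else
        result = scene + spark * (1.0 - scene.a);   // plane over particle

    gl_FragColor = result;
}
```

That collapses the Depth TOP, keys/thresholds, and Inside TOPs into a single comparison-and-composite pass, as long as both renders line up pixel for pixel.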