Texture Sampling: GLSL vs Sop2Chop UV -> Top2Chop

I’m getting discrepancies with two ways of sampling the same texture.

The first method relies on GLSL to do the texture sampling; I’ll call it “Internal Sampling”. I take a SOP’s UV coordinates and convert them to a TOP texture (so the red/green pixel data holds the UV coordinates), which is then sampled inside a vertex shader into a vec2 called “SopUVPos”. That is then used to sample a TOP texture (in this case a circular Ramp), and those color values are written into the vertex colors.
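
Roughly, the vertex-shader side of that looks like this (a minimal sketch; the sampler and attribute names are illustrative, and TD’s GLSL MAT normally declares its own inputs):

uniform sampler2D sSopUV;    // TOP holding the SOP's UVs in the R/G channels (assumed name)
uniform sampler2D sColorMap; // the circular Ramp TOP
uniform mat4 uMVP;           // stand-in for TD's projection matrices
in vec3 P;                   // vertex position
in vec3 uvIn;                // this vertex's own texture coordinate
out vec4 vertColor;

void main() {
    vec2 SopUVPos = texture(sSopUV, uvIn.st).rg; // first lookup: read the packed SOP UV
    vertColor = texture(sColorMap, SopUVPos);    // second lookup: sample the ramp with it
    gl_Position = uMVP * vec4(P, 1.0);
}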

The second method relies on a TOP to CHOP (with its UV input fed by those same SOP UV coordinates) to do the texture sampling. That CHOP data then gets packed back into a TOP texture, which is sampled inside the vertex shader using a more standard way of determining UV coordinates. This method we’ll call “External Sampling”.

I’d assume both results would be the same, since they’re essentially doing the same process of sampling a texture, one in GLSL and the other in a CHOP, but the results are different.

Compare the top row of pixel color of these next two images:

Changing the sColorMap’s Extend V to “Hold” under the GLSL Mat parameters kinda makes it better, but there’s still a difference with the external sampling. Look at the white edges, they’re a bit more faded.

What is causing these slight differences?
GLSL sampling.toe (11.7 KB)


Hi there,

Not sure if this helps, but there is a third method.
I’ve added a uLimits to the vertex shader that describes the world space limits the texture will be projected on. Then in the vertex shader I calculated the uv coordinates using the vertex position.
Though I noticed this also gives slightly another output, so not sure it helps.
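
The core of it, as a minimal sketch (assuming uLimits packs the world-space minimum XY in .xy and the maximum XY in .zw; the names here are just for illustration):

uniform vec4 uLimits;        // xy = min world-space XY, zw = max world-space XY of the surface
uniform sampler2D sColorMap; // the same circular Ramp TOP
in vec3 P;                   // vertex position, assumed already in world space here
out vec4 vertColor;

void main() {
    // remap the position into 0-1 UVs across the given limits, then sample with that
    vec2 uvFromPos = (P.xy - uLimits.xy) / (uLimits.zw - uLimits.xy);
    vertColor = texture(sColorMap, uvFromPos);
    gl_Position = vec4(P, 1.0); // projection omitted in this sketch
}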

Cheers,
Tim
GLSL sampling.2.toe (11.7 KB)


Great, the texture buffer is a good addition to my workflow.

Anyways, the point of this project was to have a way to pixel-map and visualize LED data. If you check inside the Source COMP, there’s a CHOP at the end that sequences all the RGB data. Right now it’s formatted to address strips from bottom to top, aligned left to right. Since this project is pretty much SOP-based, you can wrangle a few things around and get whatever shapes you want, in whatever pixel address order you’d like.

For the visualizer aspect, I compared three approaches: CHOP instancing, SOP instancing and GLSL instancing. SOP had two methods: either with packed-in point color data or using the Color Map on the MAT. On my 2013 MacBook Pro, running TD 2018.23760, here’s what I get for 40,000 instances:

CHOP - 51fps
SOP (point color) - 39fps
SOP (Color Map MAT) - 60fps
GLSL - 33fps

These results were kind of unexpected. I might’ve made a mistake somewhere in there.
pixelMappingLEDs.toe (13.7 KB)

The first image you posted has the artifact at the top because of “/project1/source/texture1”. You can use a SOP to DAT to see that it adds UVs where the U or the V is exactly 0 or 1. If you want to sample the lower-leftmost pixel, for example, you need to sample at (0.5/width, 0.5/height). If you sampled at an X, Y where X < 0.5/width and Y < 0.5/height, you would risk interpolating with another pixel in the image, and by default the material samplers are set to “repeat”.
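
In shader terms, one way to keep a 0-1 UV on texel centres (so the edge rows and columns never blend across into the repeated image) is something like the following sketch, given sColorMap and a 0-1 coordinate uvIn:

vec2 res = vec2(textureSize(sColorMap, 0)); // texture resolution in pixels
vec2 halfTexel = 0.5 / res;                 // distance from a texel edge to its centre
// remap 0-1 so it runs from the first texel centre to the last texel centre
vec2 uvCentered = halfTexel + uvIn.st * (1.0 - 2.0 * halfTexel);
vec4 col = texture(sColorMap, uvCentered);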

Another option to change is the filter, which you can switch from Mipmap Linear to Nearest. This gives the hard edges that show up in the “external sampling” image. The reason that image has hard edges is that texelFetch always behaves like filter “nearest” with extend “zero”, while texture(…) is configurable per sampler.
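
Side by side, with uvIn standing in for whatever 0-1 coordinate you’re sampling with (just a sketch):

ivec2 res = textureSize(sColorMap, 0);        // resolution as integers
ivec2 texel = ivec2(uvIn.st * vec2(res));     // integer texel address
vec4 hard = texelFetch(sColorMap, texel, 0);  // always nearest filtering, no extend/repeat
vec4 soft = texture(sColorMap, uvIn.st);      // honours the sampler's filter and extend settings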

The GLSL code might be slow on your computer because there’s a dependent texture lookup: the result of the lookup that fetches SopUVPos is then used to look up the color. The /project1/SOP container is efficient because the UV is baked right into the SOP, so it doesn’t need the SopUVPos lookup.

I figured this would be the case, and before the texelFetch you can see in my vertex shader that I create the vec2 halfUVPixel and add that half “pixel” to the UV positions. But at the upper end the UV then exceeds 1 by the halfUVPixel value and still shows the sampling errors.

What I guess I’m wondering is: when you sample the same texture using the exact same UV values (0 to 1, before the half-pixel shift) with a TOP to CHOP (which can be found in the Source COMP and is later used in the CHOP example, and in the SOP example via the CHOP to SOP), why does it sample correctly? I see that on its Extend parameter page, Image Top and Bottom are both set to Cycle, yet it doesn’t show the same color-sampling issues as when the GLSL MAT samplers are set to Repeat.

I found that the filter didn’t have much effect (if I scaled down the original color map texture, I could probably start seeing something), but setting the extend to Hold definitely makes it better, even if it isn’t technically correct, since it’s just copying the same value from the row right below it.

Hmm, interesting. Is there a way to make the GLSL more efficient, but still retain the ability to use arbitrary SOP geometry to determine the vertex positions and uv coordinates?

I’m looking at /project1/source/topto1 and its interpolate parameter is set to nearest sample. I think this explains why it doesn’t wrap around despite having UVs without the 0.5 pixel offset.

For more efficient GLSL you could do a heavy one-time cook of the UVs into the SOP itself, but if you do that you might as well use a Constant material; it’s easier to manage, as you’ve done in “/project1/SOP”. If you’re doing real-time vertex displacement, you might want to calculate the screen-space UV in real time. You can check how TouchDesigner does this by putting a color map on a Phong, setting the color map’s texture coordinates to screen space, and then exporting to GLSL. That shows how configurable the built-in TD materials are. I put screen-space alpha masks on Constant materials that instance moving geometry. Whabam, and no GLSL. :wink:
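
A rough sketch of the screen-space idea (sAlphaMask is a made-up sampler name, and TD’s exported shader will look different):

// in the vertex shader, after projecting the (possibly displaced) point:
vec4 clipPos = uMVP * vec4(P, 1.0);
// NDC runs -1..1, so remap to 0..1 to get a screen-space UV
vec2 screenUV = clipPos.xy / clipPos.w * 0.5 + 0.5;
// you'd normally pass screenUV on to the pixel shader and sample the mask there, e.g.
vec4 mask = texture(sAlphaMask, screenUV);
gl_Position = clipPos;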

Ah, very cool. I think most of the time, though, I prefer the bounds of the geometry to define the UV coordinates for sampling rather than just using the entire screen space. I’m actually interested in the use case for screen-space alpha masks on renders, because once you start moving the camera things start to get funky. Got any video examples of work you’ve done that uses this method?

You can use a Point SOP and write the UV expressions inside it, under “Add Texture Coordinate”:
x: (0.5+(me.inputPoint.index%imgwidth))/imgwidth
y: (0.5+math.floor(me.inputPoint.index/imgwidth))/imgheight

Where (imgwidth, imgheight) might be (1280,720) for example
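
For example, with imgwidth = 1280 and imgheight = 720, point index 1281 gives 1281 % 1280 = 1 and floor(1281 / 1280) = 1, so the expressions evaluate to u = 1.5/1280 and v = 1.5/720, i.e. the centre of the second pixel in the second row.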

For an alpha idea, check out 0:29 in this video
vimeo.com/229642430

You can see the guy swipe some content to the left. There are three columns where users can all do this, but we didn’t want the third column’s swiped content to end up in the middle column. It doesn’t make sense to have a separate render pass or TOP for each column, so instead I used three sets of instanced geometry and three materials, all inheriting from the same shader (using the inherit parameter), but each material had a different alpha mask: a white vertical column covering one third of the screen.