any pixel/expression top?

While the existing tops are great, I really feel the need to have 2 more things (maybe they’re there!):

1 - relational expressions, i.e. I want to make an alpha channel that’s driven by the relationship between r, g and b. I don’t seem to be able to find anything that does it. Maybe the Channel Mix (no docs, I’m not sure if I can use some $CR, $CG, or $CB vars in there?).

2 - a channel mix that takes two inputs, so that channels can be easily swapped between images! Especially if vars are usable, some nifty things can be done!

thanks,
d

This is what the GLSL TOP is for really. It’s really quick to learn how to write a simple GLSL shader. A GLSL TOP shader is a piece of code that is executed once for every pixel in the image.
To get started, here’s a simple shader.

uniform sampler2D sInput1;
uniform sampler2D sInput2;

void main()
{
    vec4 col1 = texture2D(sInput1, gl_TexCoord[0].st);
    vec4 col2 = texture2D(sInput2, gl_TexCoord[0].st);

    // Do your expression here. Right now it's simply input1 * input2.
    vec4 outcol = col1 * col2;

    // write out the result.
    gl_FragColor = outcol;
}

Just put this into a DAT and put the DAT into the ‘Pixel Shader’ parameter of the GLSL TOP.
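To tie this back to the original questions: a relational expression is just a comparison in the shader body, and a two-input channel swap is just a swizzle. Here’s a minimal sketch (using the same sInput1 naming as the example above; the exact threshold is arbitrary) that drives the alpha channel from the relationship between r, g and b:

```glsl
uniform sampler2D sInput1;

void main()
{
    vec4 col = texture2D(sInput1, gl_TexCoord[0].st);

    // Relational expression: alpha is 1.0 where red exceeds the
    // average of green and blue, 0.0 elsewhere.
    float alpha = (col.r > 0.5 * (col.g + col.b)) ? 1.0 : 0.0;

    gl_FragColor = vec4(col.rgb, alpha);
}
```

With a second input declared as in the first example, swapping channels between two images is a one-liner, e.g. gl_FragColor = vec4(col2.r, col1.g, col1.b, col1.a); takes red from the second image and everything else from the first.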

Hey,

the Reorder TOP takes up to 4 inputs and lets you swap channels around between them.
I also often use the “Channel Mask” parameter on the Common page of the TOP parameters. Applying different effects to different color channels is quite fun - the easiest example would be an RGBA delay, but things like HSV blurs are great too and easy to realize that way as well.

markus

Thanks a lot for both replies, very very informative!

I’m inspired to ask one more question, which goes with an RFE I emailed directly to Derivative a while back. In that email I had asked for a “while” operator, kind of like what Houdini 9 has, to allow an operation to be done recursively, maybe setting some env vars or other state to know when it’s time to exit.

The GLSL TOP is great, but it acts on a single pixel. Is there any other way to use the hardware to look at a variety of pixels, to make decisions based upon neighbouring pixels?

I know I may be confusing you here, but I’m looking for a (fast) way to take a cropped piece of an image and run pattern recognition on it, i.e. try and find out where the pattern I was looking at in the previous image has moved to by the current image.

This could be done by taking subsampled crops of the cropped area and matching our previous sample against them to determine the closest area, then starting from that approximate set of locations and resampling with a smaller interval, etc.

To be able to do this in hardware, like the GLSL TOP, would be awesome. It would be awesome to be able to do this at all :) (hence the “while” operator request).

d

The GLSL TOP can sample neighbouring pixels also.
It can only write out one pixel at a time, but it can sample many other pixels and do whatever it wants with all of those samples. (That’s how things like the Blur or Edge TOPs work.)

Very interesting.

a couple of questions:

  • do you have some simple sample code for sampling pixels other than the current one?
  • what memory is being used? I mean, what if I want to store large amounts of data, i.e. an imprint of every area sample I take before the code decides what to do, each imprint in the form of a two-dimensional array? What if the areas sampled are rather large?

tx!
d

Ok here’s some background on GLSL.
‘sampler2D’ uniforms are the TOP image(s) that are inputted into the node.
texture2D() is the function that samples the image at a given texture coordinate.
gl_TexCoord[0].st is the texture coordinate of the current pixel on the output image. It’ll be (0,0) for the bottom-left pixel and (1,1) for the top-right pixel.

To illustrate what gl_TexCoord[0].st is, output it as the color.

gl_FragColor = vec4(gl_TexCoord[0].s, gl_TexCoord[0].t, 0.0, 1.0);

Ok, so to sample neighbors, all you need to do is offset the texture coordinates.
The next concept to understand is ‘uniforms’. A uniform is an input to a shader; its value is the same for every pixel that’s rendered.
I automatically set the values for some helpful uniforms:
‘uResolution’, which is based on the resolution of the output image.
‘uInputRes1’, which is based on the resolution of the first input.
‘uInputRes2’, which is based on the resolution of the 2nd input.
etc.

Declare the ones you are interested in at the top of your shader like this

uniform vec4 uResolution;
uniform vec4 uInputRes1;

Its four values are vec4(1.0 / XRes, 1.0 / YRes, XRes, YRes).
The first two values are the ones you are interested in: offsetting a texture coordinate by 1.0 / XRes moves it so it samples 1 pixel to the right.
So for example, if you wanted to sample the pixel that’s 2 pixels up and 1 pixel to the right of the current pixel, you’d do this:

vec4 col1 = texture2D(sInput1, vec2(gl_TexCoord[0].s + uInputRes1.s, gl_TexCoord[0].t + (2.0 * uInputRes1.t)));

There is a sampling function called texture2DOffset() that encapsulates this more for you, but not all video cards support it so I won’t go into it right now.
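Putting the uniforms and the coordinate offsets together, here’s a sketch of a simple 3x3 box blur - roughly the kind of neighbour-sampling loop a Blur TOP does internally (treat it as an illustrative sketch, not the actual Blur TOP implementation):

```glsl
uniform sampler2D sInput1;
uniform vec4 uInputRes1;   // (1/XRes, 1/YRes, XRes, YRes)

void main()
{
    vec4 sum = vec4(0.0);

    // Sample the 3x3 neighbourhood around the current pixel,
    // stepping one pixel at a time using the 1/XRes and 1/YRes
    // values from uInputRes1.
    for (int y = -1; y <= 1; y++)
    {
        for (int x = -1; x <= 1; x++)
        {
            vec2 offset = vec2(float(x) * uInputRes1.s,
                               float(y) * uInputRes1.t);
            sum += texture2D(sInput1, gl_TexCoord[0].st + offset);
        }
    }

    // Average the 9 samples.
    gl_FragColor = sum / 9.0;
}
```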

To answer the 2nd question: hardware shaders don’t have memory, they have registers, which are quite a bit more limited. How limited depends on your video card. My GeForce 7900 GT can store 32 vec4s at a time. But the way NVIDIA hardware works, the more registers you use, the slower the shader will run. So you should try to work in small batches if you can: do some samples, do some math, repeat.
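As an illustration of the “do some samples, do some math, repeat” idea applied to the pattern-matching question above: rather than storing an imprint of every sample in an array, you keep only a running total. Here’s a hedged sketch (the input names and the template-coordinate mapping are assumptions, not a tested recipe) that accumulates a sum of absolute differences between a 3x3 patch of the first input and a 3x3 grid of samples from a template on the second input:

```glsl
uniform sampler2D sInput1;   // image to search
uniform sampler2D sInput2;   // small template pattern to find
uniform vec4 uInputRes1;     // (1/XRes, 1/YRes, XRes, YRes) of input 1

void main()
{
    float diff = 0.0;

    // Compare a 3x3 patch of the image, centred on the current
    // pixel, against a 3x3 grid of template samples. Only the
    // running total lives in registers, never the samples.
    for (int y = -1; y <= 1; y++)
    {
        for (int x = -1; x <= 1; x++)
        {
            vec2 off = vec2(float(x) * uInputRes1.s,
                            float(y) * uInputRes1.t);
            vec4 a = texture2D(sInput1, gl_TexCoord[0].st + off);

            // Sample the template at 0.25/0.5/0.75 in each axis
            // (an assumed mapping of the 3x3 grid onto it).
            vec4 b = texture2D(sInput2,
                               vec2(0.5) + vec2(float(x), float(y)) * 0.25);

            diff += dot(abs(a.rgb - b.rgb), vec3(1.0));
        }
    }

    // Dark pixels mark good matches; a downstream op could then
    // search the output for the minimum.
    gl_FragColor = vec4(vec3(diff / 9.0), 1.0);
}
```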

Hope that helps.