OK, so those are parallel worlds, but then is there any way to control Point Sprites with vertex shaders?
Or let's say I generate position channels out of the r, g, b values from a GLSL TOP: would I be able to grow some Point Sprites from there?
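The idea I have in mind is something like this vertex-shader sketch, which fetches a per-point offset from a bound texture (vertex texture fetch). Purely a sketch: the sampler name `sPositionMap` and the point-size value are assumptions, not anything TouchDesigner provides by default.

```glsl
// Assumes a TOP with xyz offsets in rgb is bound as "sPositionMap"
// (hypothetical name). Legacy GLSL 1.2-style syntax.
uniform sampler2D sPositionMap;

void main()
{
    // use the point's texture coordinate to look up its offset
    vec3 offset = texture2D(sPositionMap, gl_MultiTexCoord0.st).rgb;
    vec4 p = gl_Vertex + vec4(offset, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * p;
    gl_PointSize = 4.0; // needs GL_VERTEX_PROGRAM_POINT_SIZE enabled
}
```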
To be more specific, I was thinking of a GPU model for audio synthesis: generating a texture with a sine wave, translated into a greyscale horizontal ramp, and using a camera where distance would affect frequency and amplitude (if some fog is applied). Things like that.
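The sine-wave texture part could be as simple as a pixel shader for a GLSL TOP like the one below. The uniform name `uFrequency` is made up for the sketch; the remap from -1..1 to 0..1 is what turns the wave into a greyscale ramp.

```glsl
// Sketch: write one or more cycles of a sine wave across the
// horizontal axis of the output texture, as greyscale.
uniform float uFrequency; // hypothetical: cycles across the texture

void main()
{
    float u = gl_TexCoord[0].s;                    // 0..1 across the TOP
    float s = sin(u * uFrequency * 6.2831853);     // 2 * pi
    gl_FragColor = vec4(vec3(s * 0.5 + 0.5), 1.0); // -1..1 -> 0..1 grey
}
```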
A point sprite with a sine wave sounds even better. I've also just read about this `uniform sampler1D sCosineLookup`, but haven't tried it (there's no way to generate a sampler1D with Touch, right?).
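If sampler1D turns out to be unavailable, I imagine a 1-pixel-high TOP bound as a regular sampler2D could stand in for the lookup table, something like this (just a guess at how it would look; the v coordinate is pinned to the center of the single row):

```glsl
// Sketch: use a 1-pixel-high 2D texture as a cosine lookup table.
// Name follows the "sCosineLookup" uniform mentioned above, but
// declared as sampler2D instead of sampler1D.
uniform sampler2D sCosineLookup;

void main()
{
    float phase = gl_TexCoord[0].s;  // 0..1 phase into the table
    // sample along u; v fixed at 0.5, the center of the only row
    float c = texture2D(sCosineLookup, vec2(phase, 0.5)).r;
    gl_FragColor = vec4(vec3(c), 1.0);
}
```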
Should I just map polygons and digest all their extra vertices?
I wanted to evaluate the different possible models before diving into this task.