Differential growth in Touch?

Hello!

I am just beginning to learn Touch (and Python).
For a lot of the things I would like to explore, TD seems brilliant, but I am of course still grasping the width of the system.
A big part of my explorations will be linked to more generative, natural shapes/movements/growth, working with kinetic installations and projections.
I need to start my explorations somewhere and so… My question is, would something like differential growth be possible in TD?
If you look at this Houdini tutorial, the principle seems very simple, but since I lack an overview of what is actually possible in TD, I just wanted to ask you guys for your opinion on this.
entagma.com/differential-line-growth/

After I understand how to do these kinds of things, I really want to animate them (slowly) based on some kind of input. So… if this is a very difficult/impossible task in TD, I would like to focus my research on a system where it would be possible. But I'm crossing my fingers that these things can be solved in TD so that I can have everything within the same system.
I would of course be very interested in any tips regarding easy-to-understand principles within this field. I understand the principles, but I am not yet at a coding level where I would know how to implement this from scratch.

Thank you in advance for your thoughts :slight_smile:

I forgot to tell you that I did find some of the nodes used in this tutorial in TD, but the essential ones seem to be missing, or might be called something else?

Hi!
Well, sadly one can't really translate Houdini tutorials in a literal, one-to-one way.
The really big thing here is the Solver node, which essentially creates feedback. We don't have a feedback SOP (why? this is a feature request!).
But you can actually build feedback using the Cache SOP. See the attached example. Since what is done in this tutorial creates a lot of geometry very fast, one has to be really careful with subdividing in the feedback path; otherwise TD will just stop responding after a couple of frames. In the example I am doing something different (without any subdivision), but I hope it's a good starting point. All the best!
feedBackSOP2.toe (5.45 KB)
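Since the Solver-node trick may not be obvious, here is the basic loop the Entagma tutorial runs each frame (repel crowded points, smooth each point toward its neighbours, resample edges that grow too long), sketched in plain Python. All the constants here are made-up illustration values, not the tutorial's:

```python
import math

def grow_step(pts, repel_radius=0.5, repel_force=0.02,
              attract_force=0.05, max_edge=0.3):
    """One iteration of differential line growth on a closed 2D curve."""
    n = len(pts)
    moved = []
    for i, (x, y) in enumerate(pts):
        fx = fy = 0.0
        # repulsion: push away from every other point that is too close
        for j, (ox, oy) in enumerate(pts):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            d = math.hypot(dx, dy)
            if 0.0 < d < repel_radius:
                fx += dx / d * repel_force
                fy += dy / d * repel_force
        # attraction: ease toward the midpoint of the two curve neighbours
        px, py = pts[(i - 1) % n]
        qx, qy = pts[(i + 1) % n]
        fx += ((px + qx) / 2.0 - x) * attract_force
        fy += ((py + qy) / 2.0 - y) * attract_force
        moved.append((x + fx, y + fy))
    # resample: split any edge that has grown past max_edge
    out = []
    for i, p in enumerate(moved):
        q = moved[(i + 1) % n]
        out.append(p)
        if math.hypot(q[0] - p[0], q[1] - p[1]) > max_edge:
            out.append(((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0))
    return out

# seed with a circle of 12 points and iterate; the point count grows
pts = [(math.cos(i * 2 * math.pi / 12), math.sin(i * 2 * math.pi / 12))
       for i in range(12)]
for _ in range(10):
    pts = grow_step(pts)
```

In TD terms, each call to `grow_step` would correspond to one frame of the Cache-SOP feedback loop; the subdivision warning above applies because the resample stage keeps adding points every iteration.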

Oh, by the way: I was interested in whether we can do this in Touch, and it seems we can. I find feedback in the SOP world really interesting and I'm sure one can achieve very interesting results. That being said, I don't think this example is best done in the SOP world. It looks like a shader (or even just a TOP network) could do this a lot more efficiently.

Thank you so much for your reply!
I’m away from my computer for a few more days, but I am eager to check out your feedback setup.
In your last reply, I was very happy to hear that this would be possible to achieve in TD. You mentioned it would be more efficient to do it as a TOP network or a shader. Any pointers in that direction? I am very curious about this as well. This would give me a good indication of where to start researching.
If it’s possible with a TOP network that’s super interesting. Any particular nodes I should start researching?

Wow! That feedback SOP is really, really cool!
There are tons of explorations to do with that technique.

I’m going to have to explore this further :slight_smile:
Any other thoughts regarding the TOP / shader route for these kinds of things?

Came here to kinda ask the same question :slight_smile: I got the Gray-Scott diffusion algorithm going in Processing and want to do that with GLSL now.

I got the shader to work, but I'm not sure how to properly inject the processed result as the next source to get the feedback going. When I switch from the noise TOP that I use to seed the process to the rendered-result TOP as input, I only get black. Can someone point me to what I'm missing here? I use the same parameters as in my Processing code, so those should give me a result.

To explain a bit what I did: I generate a colored noise field as a starting point, define a couple of constant parameters, and feed all that into the pixel shader. The shader then goes over the noise texture, applying the diffusion algorithm by interpreting the red channel of the texture as the A value and the green as B. The resulting value is stored in the blue channel to keep the values separated. After rendering the shader onto a grid, I separate the channels and feed the red and green back into the shader as the new texture input.
diffusion_shader.toe (6.04 KB)
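For anyone trying to debug the math rather than the network: the per-pixel update described above is the standard Gray-Scott system. Here is a minimal plain-Python version of one simulation step. The kernel weights and the f/k/diffusion values are common textbook choices, not necessarily the ones in the attached .toe, and the small timestep is just for stability in this explicit scheme:

```python
def laplacian(grid, x, y, w, h):
    # 3x3 convolution with wrap-around; weights: centre -1,
    # edge neighbours 0.2, diagonals 0.05 (the weights sum to zero)
    s = -grid[y][x]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            wgt = 0.2 if dx == 0 or dy == 0 else 0.05
            s += wgt * grid[(y + dy) % h][(x + dx) % w]
    return s

def gray_scott_step(A, B, f=0.055, k=0.062, da=1.0, db=0.5, dt=0.2):
    """One Gray-Scott step: A is consumed by the reaction a*b*b,
    fed at rate f; B is produced by the reaction and killed at rate k."""
    h, w = len(A), len(A[0])
    A2 = [[0.0] * w for _ in range(h)]
    B2 = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a, b = A[y][x], B[y][x]
            abb = a * b * b
            A2[y][x] = a + (da * laplacian(A, x, y, w, h)
                            - abb + f * (1 - a)) * dt
            B2[y][x] = b + (db * laplacian(B, x, y, w, h)
                            + abb - (k + f) * b) * dt
    return A2, B2

# seed: chemical A everywhere, a small blob of B in the middle
w = h = 16
A = [[1.0] * w for _ in range(h)]
B = [[0.0] * w for _ in range(h)]
for y in (7, 8):
    for x in (7, 8):
        B[y][x] = 1.0
for _ in range(5):
    A, B = gray_scott_step(A, B)   # B diffuses outward from the seed
```

In the shader version, A and B live in the red and green channels of the feedback texture, and the two nested loops become one fragment invocation per pixel.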

baddestpoet,

Your question is a little different from mrouioui's, because you're going for image-based growth while mrouioui is looking for SOP-based growth.

For TOP growth, check out these threads

I looked at your file and would recommend using a GLSL TOP rather than your first Render TOP. It would also be good to use a Feedback TOP. I’m not sure why TD isn’t detecting a dependency loop between /project1/glsl1 and /project1/texture.
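For anyone hitting the same black-output problem: the Feedback TOP exists precisely to break that dependency loop. Conceptually it is double-buffering, where the previous frame's result becomes the next frame's input. A toy Python sketch of the pattern (the 1D box blur here is just a stand-in for whatever the GLSL pass does):

```python
def shader_pass(buf):
    # stand-in for the GLSL TOP: a 1D box blur with wrap-around
    n = len(buf)
    return [(buf[(i - 1) % n] + buf[i] + buf[(i + 1) % n]) / 3.0
            for i in range(n)]

front = [0.0] * 8
front[4] = 1.0                  # seed, like the noise TOP
for frame in range(4):
    front = shader_pass(front)  # each frame reads last frame's output
```

In the network that means, roughly: seed TOP → Feedback TOP → GLSL TOP, with the Feedback TOP's Target parameter pointing back at the GLSL TOP's output.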

Ah yes, I completely overlooked the GLSL TOP. Used it together with a feedback and got it to work now, just need to fiddle a bit with the parameters. Thanks for pointing me in the right direction!

Attached my sketch for anyone else who’s trying to figure out the shader way.

Edit: C&c welcome :wink:
diffusion_shader.toe (5.24 KB)

Actually, as I said in one of my previous posts, I am interested in all/any ways to work with this.
I haven’t got my hands dirty with GLSL yet, but if that is the best way to attack this, I will :wink:

Thanks so much for sharing the GLSL way! Can’t wait to dig through it further :slight_smile:

If TD proves to be a tool that can solve all of these kinds of “natural” simulations, then it is getting awesomer by the click! Now I am just waiting for them to dramatically improve the 3D side so we can experiment even further with these ideas :slight_smile:

Really nice reaction-diffusion shader! I am trying to increase its speed but unfortunately cannot find the right combination of parameters. Is it possible to speed it up just with the feed rate?