3D ArtNet mapping

Wow! Very cool (and quick!), Markus.

Thank you very much!

Should I expect difficulties if I want to extend this to a rather large 3D array of LEDs? Say on the order of 100,000 RGB LEDs (300,000 ArtNet channels)…

Thanks,

Hm - that's roughly 600 universes (300,000 channels / 512 channels per universe ≈ 586), which sounds like a lot of data. Is this going out via sACN or ArtNet?

Operations will be heavy - no doubt - when dealing with that amount of data. If you run into trouble, you could split the work up into multiple processes: one for rendering and retrieving the color values, others for converting them to CHOPs and sending them out to your lights.

A limitation that you have to build around is the fact that textures are limited to 16000 pixels in either direction - so when feeding in the positions of the LEDs as a texture, you would have to split it up into something that can be computed on the GPU.

Cheers
Markus

Markus,

If the final matrix were somewhere between 10x10x1000 and 10x67x150, I believe the maximum size for any given texture would be 10x1000, so I should be OK in terms of having the GPU deal with the texture size - or did you mean that the texture limit was 128² = 16384 pixels total? I doubt the latter.

The final output would be ArtNet, and I've estimated the required ArtNet bandwidth to be just under 100 Mbps, so I would most probably implement an end-to-end gigabit network for such a setup. Should I also expect problems with TD regarding network throughput?
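
For a rough sanity check, assuming a 30 Hz refresh: an ArtDmx packet carries an 18-byte header plus up to 512 channel bytes, so ~600 universes × 530 bytes × 8 bits × 30/s ≈ 76 Mbps of payload, which stays under 100 Mbps even with UDP/IP overhead added.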

Thanks again,

Correct - the output has a current hardware limit of 16384x16384 pixels.

A quick test:
animated Noise TOP (1000x100 resolution) → TOP to CHOP (1.2 ms) → 3x Shuffle CHOP (0.7 ms) → DMX Out CHOP in packet-per-channel mode (1.5 ms)
revealed no big problems - so you should be fine.

Regarding the Depth Peel, you don’t need a huge resolution, so you should be fine on that end as well.

Cheers
Markus

I just recently did something like this for a volumetric LED display.
Although I used FadeCandy as the output, the technique should be similar until the end.

I built out each point of each LED strip in 3D using a little script and an Add SOP.

I set each point’s UV coordinate to the output pixel index.

Using a sphere (or other bounding geometry) with a Group SOP, you get a group that includes all the voxels within the sphere.

Use a Point SOP to apply a color to the points in that group.

Once you convert those SOP points back into a CHOP, you can reorder them using the UV coordinate and output each sample to the correct pixel.
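
If it helps make that concrete, the reorder step could be sketched as a Script CHOP that sorts the samples by the 'u' channel coming out of a SOP to CHOP. This is just a sketch of the idea, not the exact code in the .tox, and it assumes input channels named u, r, g and b:

```python
# Script CHOP callback: reorder color samples by the 'u' (pixel index) channel.
# Assumes the input CHOP has channels named u, r, g, b (e.g. from a SOP to CHOP).
import numpy as np

def onCook(scriptOp):
    src = scriptOp.inputs[0]
    names = [c.name for c in src.chans()]
    data = src.numpyArray()                      # shape: (numChans, numSamples)
    order = np.argsort(data[names.index('u')])   # sample order by pixel index

    scriptOp.clear()
    scriptOp.numSamples = src.numSamples
    for name in ('r', 'g', 'b'):
        scriptOp.appendChan(name).vals = data[names.index(name)][order].tolist()
```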

Before I figured this out I was trying to work out the depth-render method; maybe I'll have to give it another shot now.

Here's a .tox if my technique is interesting to you:
Volumetric_Sample.tox (48.4 KB)

Holy crap, this is amazing, thank you so much for posting this. 3D volumetric pixel mapping is like my dream goal in TouchDesigner, and this is by far the best evidence yet that it's feasible without heavy GLSL knowledge or a crazy C++ TOP or something.

That being said, I would caution you about using Art-Net. It can (especially in broadcast mode) take down a network at a far smaller number of universes than what you're proposing. Streaming ACN is definitely more efficient and future-proof, but if you know your networking, either can be workable for these purposes. Just try to use unicast Art-Net if your switches can handle the number of devices you are sending data to. Calculating the bandwidth is only one part of the picture, and there are other things (like routing database capacity and the implications of cascading switches) that may not crop up until you have the whole system up and running.

I can certainly vouch for TD's network capabilities. I did some testing with a custom sACN comp that I hacked together from the one Lucas Morgan posted on this forum, and I had 256 sACN universes going at once to an LED system that had almost 50 Pathport nodes and probably 3-4 network switches haphazardly installed (not my design). So with essentially 256 UDP Out DATs, I noticed no bottleneck on TD's part. Once I did some minimal optimizing and broke the processing of the data into multiple TD instances running on my computer, I was able to get it running at over 30 fps on a laptop with a gigabit connection.

I am super curious to hear how this turns out for you using Art-Net, so please do post a follow-up on this forum.

I want to push the number of universes into the tens of thousands and more…

My panels are each receiving 24 universes of sACN as a 64x64 matrix ( youtube.com/watch?v=LcnajTwsTqM )… These are really low cost, and I want to run hundreds of them spread all over the place…

What's the best way to map to these?

Hi Andrew,

what kind of content are you planning on playing on these?

Best
Markus

Hi,

I'm curious about the locked strips geo in harveymoon's example.
I tried replacing it with my own geo and it doesn't work.

I noticed the strips have extra point attributes - how can I add these to my geo?

Resurrecting this thread.

I had a look at harveymoon’s Volumetric_Sample.tox, and I’m having some trouble unpacking what’s happening in there.

As already mentioned by sssslamin, it's not clear how to replace that locked SOP null_strips, and to me it's not clear how that SOP is created in the first place.
That null appears to contain about 64 lines of 64 points each, but I don't understand where these parameters are set, or what's determining their positions.

This technique seems very interesting and streamlined, but it's a bit difficult for me to understand.

If that locked null could be replaced by a simpler geometry (e.g. one line, or one circle), it would perhaps make the rest of the network clearer too.

Anybody interested in having a look at that .tox and pointing me in the right direction?

Hey, I'm back - sorry.

I made these strips using a Python script, a Table DAT and an Add SOP.

This allows you to specify exactly where each point exists in 3D space.

I had a limit of 64 per strand. I wrote the script to build out each strand, and then I would manually adjust the virtual positions to match the random heights of each strip within the sculpture. I decided not to post that LED mapping part in the .tox sample.

The real trick with this whole thing is that I set the UV mapping to be the pixel index number.

Since in the end I wanted every pixel to be in order for the pixel driver, when I built the point table, I added a column for the UV and set it to the index number of the LED.

If you are building a new pixel map, first make each pixel exist as a point in SOP space: give it an XYZ position, and set the uv(0) / 'u' value to the pixel index.

Also make sure each point has a black (invisible) color, so that a color can be added to it later.
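
A rough sketch of how such a table could be built (hypothetical names and spacing, not the original sculpture script): a Text DAT like this fills a Table DAT, which a DAT to SOP can then turn into points, using the SOP to DAT column convention of P(0..2) for position, uv(0) for the pixel index and Cd(0..3) for the color - check the DAT to SOP parameters for the exact mapping:

```python
# Fills a Table DAT named 'points_table' (hypothetical) with one row per LED.
n_strips = 8           # assumption; the original sculpture used 64-LED strands
leds_per_strip = 64

t = op('points_table')
t.clear()
t.appendRow(['P(0)', 'P(1)', 'P(2)', 'uv(0)', 'uv(1)', 'uv(2)',
             'Cd(0)', 'Cd(1)', 'Cd(2)', 'Cd(3)'])

index = 0
for s in range(n_strips):
    for led in range(leds_per_strip):
        # x spaced per strip, y per LED along the strand; adjust to your layout.
        # Color starts black/invisible so it can be applied later by a Point SOP.
        t.appendRow([s * 0.1, led * 0.05, 0, index, 0, 0, 0, 0, 0, 0])
        index += 1
```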

Here's another demo making a grid of pixels:
base_make_pixels.tox (57.8 KB)

This is really great stuff – completely different approach to 3D pixel mapping than I’d been using (based on Matthew Ragan’s tutorial) and one that allows you to use actual 3D objects. With this methodology, how would you approach colour? Obviously solid colours are already covered, but how about ramps etc? I feel like there should be a way to pull out the ‘active’ pixels and remap them with gradients or whatever, but there are too many pieces here I’m not familiar with…
I've uploaded my attempt – have a look at it; details of the DMX module can be ignored.

EDIT: Worked out at least one method: converting to TOPs, compositing, and then back to CHOPs. Second file attached.
3D gradient.tox (107 KB)
3D.tox (106 KB)
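
In outline, that round trip looks something like this (a rough build script with placeholder names, not pulled from the attached files - the real networks are in the .tox):

```python
# Builds the CHOP -> TOP -> composite -> CHOP round trip described above.
# 'pixel_colors' is a placeholder for a CHOP holding per-LED r/g/b samples.
c = op('/project1')                       # container to build in (assumption)

chopto = c.create(choptoTOP, 'chopto1')   # CHOP samples become pixels
chopto.par.chop = 'pixel_colors'

ramp = c.create(rampTOP, 'ramp1')         # the gradient to apply

comp = c.create(compositeTOP, 'comp1')    # multiply ramp over the pixel image
comp.par.operand = 'multiply'
comp.inputConnectors[0].connect(chopto)
comp.inputConnectors[1].connect(ramp)

topto = c.create(toptoCHOP, 'topto1')     # back to CHOP samples for DMX output
topto.par.top = 'comp1'
```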

Thanks, harveymoon, for sharing that base_make_pixels tox, and for explaining the process behind it.
I find your network and general approach very interesting, great work.

Thanks also to GraigGR for sharing your take on it.

This topic is from a long time ago, but it's still inspirational.
I updated the patch with previz and Art-Net output, but I'm only getting low fps, even with simple geometry.
3D Artnet mapping.5.toe (9.3 KB)

I am trying to use this example, but the GLSL Multi TOP just outputs all white. Is there a bug in the latest TouchDesigner, or am I missing something?

This is cool, but is there a more solid way to apply ramps and colour? It seems the method you're using relies on some kind of feedback system, which creates a delayed effect. It would be great if you could just apply colour to whatever geometry you were inputting and have that directly affect the geometry being output. I can't quite crack how to add colour into the mix.

I've played with harveymoon's solution, but at 163,840 pixels it's very laggy and unworkable… My laptop has a 4080; GPU and CPU both run under 12%. I don't know where my bottleneck is, but it seems to be caused by the Group SOP…

For my future project I would need wayyy more than 163,840 pixels…

It's definitely a CPU bottleneck. You wouldn't happen to have an 8-core CPU, would you? TouchDesigner is single-threaded, so you can only max out one core of the eight - hence the ~12%.

I would recommend investigating RayTK. I just whipped up an example file that can render almost 200,000 points and 1200 universes of DMX:

It runs fine at 60FPS on my 3070.

Here’s the file with RayTK 0.36 already embedded in it.

In this case the banana point clouds stand in for the XYZ locations of the LEDs, and then there are some RayTK operators moving through them, colored by the point material operator. It all gets downloaded into CHOPs and split into the ~1200 universes, just not hooked up to a DMX Out CHOP. The load on the DMX Out CHOP may be a little too much for the Windows network stack, but you could split this into two files and keep them in sync easily.
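
(For reference on the numbers: 200,000 points × 3 channels ≈ 600,000 DMX channels, and 600,000 / 512 ≈ 1172 - hence the ~1200 universes.)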

Make sure to run it in TD 2022.35320 or earlier, as it's still a little buggy in the 2023 versions.

Wow, thanks, I'll try it out! I'll let you know!

Yes, my CPU has 16 cores and 24 logical threads; I didn't know TD only uses one.

I ran a DMX output test to see where I'm capped, and I reached about 4200 universes before getting delays and noticeable skips. But for some reason I needed an external 1 Gbps network card, as Windows capped my internal network card…

(BTW, I think we met at the Montreal TD meetup two years ago. I was the guy with the orange glasses.)