Complex Mapping Setup - can TouchDesigner do it?

Hi!
First of all - I have no TouchDesigner experience at all, but I’m experienced with projection mapping in general.

I’m about to create a quite complex projection-mapping installation, and I’m wondering if TouchDesigner is the right choice for the job. The project has a number of requirements; I’ve listed some of them below:

  • multi-projector setup (probably around 8 projectors)
  • complex model - think lots of buildings
  • buildings should be lit completely - meaning not only the roofs, but the sides as well
  • projections should be properly blended and have equalized brightness (please note: simple edge blending is not enough, as the projectors will be positioned radially and the overlap zones may be partially occluded by parts of the model; also, each projector may light buildings at quite different distances, so brightness will vary and should be equalized - see the rough falloff sketch after this list)
  • real-time 3D animation (controlled via network)
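
To be concrete about the distance issue: to first order (treating a projector as a point source), illuminance falls off with the square of the throw distance, so the compensation per surface is roughly something like this (a back-of-the-envelope sketch, not a real calibration model):

def brightness_gain(d, d_ref):
    # Gain for a surface at distance d, normalized so surfaces
    # at the reference distance d_ref get gain 1.0 (inverse-square).
    return (d / d_ref) ** 2

# A wall twice as far from the projector needs roughly 4x the pixel value:
print(brightness_gain(20.0, 10.0))  # 4.0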

Could it be done? What’s the most difficult part? If it’s not possible - is there any other solution I should look at?

Thanks!

Hi mruce, welcome to the forum.

Well, just to give you some reference about scope, here are some examples of mapping projects that were also done in TouchDesigner:

derivative.ca/events/2011/abudhabi/
(49 projectors)

derivative.ca/Events/2015/Luminosity/
(12 and 24 projectors)

derivative.ca/events/2016/Gasometer/
(12 projectors)

derivative.ca/Events/2014/ArtMall/
(64 projectors)

(see derivative.ca/Blog/ for other examples)

Graphics hardware: in general the strong preference is to use Nvidia Quadro cards, which can do hardware frame-locking across multiple computers; you can also use multiple Quadros in one computer and assign a separate TD process to each card. This is the only solution where tearing-free output is always guaranteed.
You’ll need to set up a sync system for your project timeline and content between the different TD processes/computers yourself, though. You can use the Sync In/Out CHOPs if you have a Pro license, or an OSC CHOP if you have a Commercial license.
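
As a minimal sketch of the OSC route (the operator names here are illustrative, not a convention): on the master machine, an Execute DAT broadcasts the current frame every frame, and each follower listens with an OSC In DAT/CHOP and chases it.

# Execute DAT on the master process.
# 'oscout1' is an OSC Out DAT pointed at the follower machines (illustrative name).

def onFrameStart(frame):
    # Broadcast the master's absolute frame so followers can chase it.
    op('oscout1').sendOSC('/sync/frame', [absTime.frame])
    return

With a Pro license, the Sync In/Out CHOPs then handle the actual frame-locking between processes for you.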

A somewhat cheaper hardware solution would be to use two outputs of one fast GeForce card and a few Datapath X4s to split those into 8 outputs.

Hi nettoyeur, thanks for the info and the project links - they are truly impressive. None of them is quite like what I’m about to do, though - in particular, I need to light a complex model in 360° with rich, real-time-generated 3D graphics. m.youtube.com/watch?v=RGSsVIg-yYw - that’s the closest I can find. My project will use projectors only (no additional lamps) and advanced real-time shading.
Do you think TD is the way to go?

This project seems a bit similar and is (I think) also mostly done in TD: lvthn.com/work/connected-city/ (there’s a movie at the top of the page)

TouchDesigner is a real-time system, so yes: you can do real-time animation and real-time lights and shadows, and add your own GLSL shaders. Not sure how advanced your shading will need to be, though.

In 099 (currently still in experimental beta) you can use PBR materials (also directly from Substance Designer), and Image Based Lighting from any HDR image. Image Based Lighting shadows are not available (yet).

Yep - that’s quite close to what I want to create. Thanks for your help!
At 0:27 there’s a moment with C# code - I wonder how they integrated it and what they used it for?
I guess there’s one last thing I’m wondering about - what advanced mapping and blending techniques are used in TD? I want to map using complex geometry, not simple primitives - do you know of any resources where I can learn about this?

I personally have never integrated C# into TouchDesigner - but you can easily write C++ DLLs (for TOPs and CHOPs), Python, and GLSL (pixel, vertex, and geometry shaders - in 099 also GLSL compute shaders).

For mapping there are several solutions, which can be used in series and in parallel in the same project. If you have a virtual 3D model, the solution is Camschnappr.

For more basic stuff there’s also the Stoner

NB: these mapping tools were built with the nodes available in TD - so you can also read the source of these tools and extend them, or rip them apart and build your own if you need something custom.

If you download the free version of TD you can try them both.

Having worked on a number of installations with large numbers of projectors and machines, my perspective is that the most challenging part will be thinking through a proper configuration process, and a way to remotely calibrate your virtual cameras.

Best case, you’ll use an iPad or another computer for your calibration process. Trying to manage the Touch network and calibrate at the same time is one of my least favorite challenges - all of this requires a significant amount of prep work to ensure that your controller and calibration process work well.

If you’re brand new to Touch, give yourself at least twice the time you expect it will take. There are plenty of teaching materials online these days, but taking the time to do some deep learning will be very important along the way. Similarly, give yourself a healthy amount of debug and optimization time. While you can often build out solutions very quickly in Touch, mindful optimization takes time to ensure that you’re back up to ideal performance frame rates.

+1 Everything Idzard said.

+50 to what Matthew said, though. In my opinion, not only should you double the time - honestly, I wouldn’t attempt what you’re describing until you’ve done some smaller projects first and gotten the hang of TouchDesigner in general (or you have the budget to bring on external developers). Then you can decide if it’s the right platform for your project. TouchDesigner can do pretty much anything. It’s not like certain projection-mapping platforms where you just drag and drop a few things and call it a day (we’re getting much better, though!) - it’s very much a development environment.

All the links posted in this thread are from the hardest-hitting TouchDesigner development houses and developers; they aren’t first-time users or even high-level intermediate users. These are the best of the best, working very hard to make these things run in real time and be rock solid. I think it would be misleading to suggest that any of these projects would be within reach of a first-time user.

With that said, the community here is unlike any other in the amount of sharing and collaboration, and if you decide you’re willing to get your hands dirty, everyone here can provide guidance to help get you unstuck along the way (as they already have above).

Matthew, I’m curious about your comment here. When you say managing the Touch network and calibration is not your favorite, are you referring to navigating within the Touch environment while managing the calibration? Or to working with a GUI interface and the normal CamSchnappr/Stoner tools?

Or are you doing something different like a custom interface to the mapping tools that you have made yourself?

Hey ajk48n - sorry for the long response:

Working with the TouchDesigner network UI itself over a VNC connection is, in my opinion, not only frustrating but also cumbersome and inconsistent. VNC, or any screen-sharing application for that matter, carries its own overhead and consequently can’t give you accurate results in terms of knowing how well the target machine is performing. Additionally, it means you have a machine whose configuration is out of sync with your master setup. Depending on how you work, this can mean work that’s lost in the saving process, or that’s unique to a single machine and can’t be recovered on a restart. I largely work on installations where we use a single TOE file and lots of TOX files for the configuration of our networks; a machine that’s been configured manually through a remote connection needs special attention to ensure its data and configuration aren’t lost during a restart - which I’ve done any number of times.
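
As a rough sketch of that single-TOE-plus-TOXes pattern (the paths and file layout here are made up for illustration): each machine points a config container at its own external TOX on startup, so nothing exists only on one box.

# Startup script sketch: rebuild this machine's configuration from disk.
# '/project1/config' and the 'config/<host>.tox' layout are illustrative.
import socket

host = socket.gethostname()
cfg = op('/project1/config')
cfg.par.externaltox = 'config/{}.tox'.format(host)  # point at this host's config TOX
cfg.par.reinitnet.pulse()                           # re-init the COMP from that file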

I also generally dislike the practice of using the Touch UI during shows and installations unless necessary - opening the network editor starts a cascade of operations that cook all manner of operators and place unneeded elements into both CPU memory and VRAM, resources you can’t recover unless you restart in perform mode. While it might be stoic idealism, I firmly believe that, when possible, one should build projects so they can be configured, run, saved, and exited entirely from the UI that’s been built or pieced together. There are a number of reasons for this particular ideology, and a large part of it is that if you rely on the Touch network to operate or calibrate your installation/show, you’ll never be able to teach someone else to do it without you (even another Touch expert), because any number of gotcha details are only apparent to the programmer/developer. For better or worse, I’ve worked in situations where at some point I had to hand over controls to a client or to someone I’d trained, and that usually means keeping them out of the network editor.

Thinking about remote calibration as a process is also important because it’s unlikely you’ll be able to see a model mapped from multiple angles from any single vantage point. You’ll need the freedom to walk to various parts of the room/installation, and being tied to a workstation will cost you valuable on-site time… and the longer you spend calibrating, the less time you can spend putting an installation through its burn-in paces - testing the various media or generative pieces. Again, this comes from the experience of having narrow margins between load-in, client review, and execution, so any time lost to workaround solutions is often costly and frustrating.

This also comes from the fact that these days I frequently have to think about how to calibrate and configure machines where direct access is not a safe assumption. A recent installation runs on 36 servers (18 primary machines and a mirrored 18 as backups) to drive 32 projectors, LED screens, and a controller interface. The venue has no single vantage point from which all surfaces are visible, and from the control room about half of the display surfaces can be seen. The venue floor, where you have to calibrate the installation, is three floors below the server room. Additionally, the playback servers run headless and are only configured to output content rendered for mapping - which means these machines have no user interface. In this kind of installation there is simply no other answer than to consider how to calibrate machines from a remote location… you have to build an iPad app, use a custom TouchOSC interface, or use a laptop to drive a calibration interface you’ve built in Touch.

In some cases you can cleverly re-use existing pieces of built modules - we do this at Obscura with Stoner all the time; in other cases it’s harder to get right (Camschnappr, I’m looking at you… you’re so beautiful, but so hard to control remotely). Regardless, I think it’s best practice to think through what your calibration process is going to look like, beginning to end. I’m also in the camp that thinks it’s unreasonable to rely on screen-sharing tools to solve this problem. I’m sure some hold the opposite opinion, but I’ve made huge messes with VNC by trusting that the value-ladder increments would work as I expect (they don’t), that my screen was always updating (it often isn’t, especially in full-screen mode), that where I click on my screen is where I want to click on the remote machine’s screen (it works most of the time, and when it doesn’t you’re really in trouble), or expecting key commands to work the same way (depending on your tool and configuration, that ctrl+s may not have actually saved your TOE file).

Worse than troubleshooting your own software is losing time to fix a bug that is really just an artifact of poor planning, slow screen refreshes, or bad behavior you’ve never seen (because you’ve always relied on having the network editor available at all times). While these might seem like nit-picky details and grumbles, they come from countless hours of frustration and troubleshooting that could have been avoided. The miseries I’ve visited upon myself are ones I try to steer others away from… at least when it’s possible to do so.


Ever work out the remote Camschnappr problem?

If we had some sort of ‘next point’ control and LRTB nudging, it would get close enough for touch-ups, if not major work.

Going to enter the maze again…

Bruce

Hey Bruce -

Not yet. I’d have to dig in again to get the right perspective. Using Stoner remotely, what we do is run Stoner locally and then stream over the various tables that make it work. All of that runs on top of some logic to sort out which machine we’re talking to, and which output, so we’re only streaming relevant changes.

I think you’d have to sort out a way to fake both sides of the UI on your remote machine, so you can see both model space and screen space - then you might be able to get away with just sending over a transform matrix to your target camera/projector. Best case, you’d have some additional feedback out of your projector to see your alignment points. I don’t think it’d be a huge endeavor - mostly just a pain to plot out and plan a solid approach, then lots of testing.
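
A minimal sketch of that table-streaming idea, with everything named for illustration only (a local ‘points’ table, an OSC Out DAT ‘oscout1’, and a matching OSC In DAT on the target machine):

# Control-machine side: push each row of the local point table
# to the target over OSC whenever it changes.
# Assumes a purely numeric table (no header row); names are illustrative.

def send_points():
    out = op('oscout1')
    for i, row in enumerate(op('points').rows()):
        # e.g. row = [x, y] per point; send one message per row
        out.sendOSC('/stoner/point', [i] + [float(c.val) for c in row])

On the target you’d catch those messages in an OSC In DAT callback and write them back into the table driving the warp.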

Right, I see your approach. I have a Frankenstein setup containing Stoners and storage too. But I’m looking for a quick fix first…

And it looks like the keyboard commands live in a macro that could easily be copied into an OSC-type system…

This isn’t for a real saved structure - just for accessing a Camschnappr remotely.

Since I’d presume every amateur schnappr has to deal with this at some point, it should make a good RFE - some sort of standard control protocol.

I’ll report back, but like I say - looks doable.

OK, yes, that’s the bomb. Best Camschnappr experience I’ve ever had. I just made a radio set to pick points, plus coarse and fine movements (a slider for coarseness), sent via OSC to multiple schnapprs.
I’ll post my bodge, but it should definitely be a core feature.

Can’t wait to see what you’ve come up with Bruce!

Hi Bruce, I’m trying to figure out how to trigger these Camschnappr macros (movePoint tab, movePoint up, movePoint down, etc.) with OSC. Is that how you accomplished it? Did you have to use tscript?

Never touch tscript!

I’ll have to dig it up, but I believe I had to grab the macro stuff and put it in another module that I could trigger. Not really a long-term solution, which is why I didn’t share… but it was well worth the few hours’ work.

haha! I will steer clear.

I figured out how to run the macros! Here’s a first example, simulating the tab key (assuming your camSchnappr component is at root):

op('/camSchnappr/camSchnappr/local/macros/movePoint').run('tab', fromOP=op('/camSchnappr/camSchnappr'))
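
To wire that up to OSC along the lines of what Bruce described, a sketch like this could live in an OSC In DAT’s callbacks (the /schnappr/* addresses and the root-level camSchnappr path are my assumptions, not a standard):

# OSC In DAT callback: map incoming addresses onto the movePoint macro.
# Addresses and the camSchnappr path are illustrative assumptions.
SCHNAPPR = '/camSchnappr/camSchnappr'
KEYS = {'/schnappr/tab': 'tab', '/schnappr/up': 'up', '/schnappr/down': 'down',
        '/schnappr/left': 'left', '/schnappr/right': 'right'}

def onReceiveOSC(dat, rowIndex, message, bytes, timeStamp, address, args, peer):
    key = KEYS.get(address)
    if key:
        op(SCHNAPPR + '/local/macros/movePoint').run(key, fromOP=op(SCHNAPPR))
    return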

Oh, cool, thanks!

That would have saved some time, but the whole approach saved so much time and heartache that I don’t mind! :smiley: