Luminosity - Keith Lostracco's TouchDesigner Media Server

The community has built an impressive range of TouchDesigner tools and applications for performing and operating live shows. The most recent, and likely the most ambitious we've seen to date, is Keith Lostracco's brand new TouchDesigner media server 'Luminosity'.

No stranger to taking on dauntingly complex builds (see Keith Lostracco's Mindbending Fractal World "Escape Time"), Keith tells us Luminosity was built as a front end for TouchDesigner, aimed at giving operators an easy-to-use interface to perform, control, and distribute realtime media while retaining TouchDesigner's open-ended programmability and speed.


Keith describes Luminosity's functionality as combining a high-end media server, a VJ program, and TouchDesigner. The same types of audio routing and distribution methods found in digital audio workstations and mixers were implemented here to process video and data as well as audio.

The initial idea - to facilitate playing clips (video, TouchDesigner components, and audio clips) in a similar way to popular VJ software or Ableton Live - quickly evolved into taking on the functionality of high-end media server systems.

A staggering undertaking with spectacularly comprehensive results, Luminosity debuted at V Squared Labs' Blizzcon Starcraft show, where 12 projectors 3d-mapped the complex stage with full previs, and was used again this weekend at the Super Bowl to manage a 24-projector ESPN event.

With a number of features pending, this tool is not quite 'finished' but it's a fair prediction that Luminosity will prove to be useful in many places and that it will continue to evolve.

Over to Keith.

Keith Lostracco: The workflow is straightforward. There are clip channels, auxiliary channels and master channels. Media can be dragged and dropped onto clips (video, audio, custom COMPs, controller data, etc.). The clip is then routed to a clip channel (in different ways depending on the trigger mode). Clip channels can be routed to either auxiliary or master channels. Auxiliary channels can be routed to either other auxiliary channels or master channels. Finally, any channel (clip, aux, or master) can be routed to an output. Channels can also be routed to multiple destinations, while auxiliary and master channels can have multiple sources.

In essence, channels are individual mixers that can be routed to other channels using standard bussing techniques found in large-format audio mixers. Since the majority of media being routed is video, a blend mode can be chosen for each channel routed into another channel, as well as its order in the mix.
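
To make the bussing model concrete, here is a minimal sketch of how such routing could be represented - class and field names are hypothetical illustrations, not Luminosity's actual internals:

```python
# Minimal model of Luminosity-style channel bussing. Names are
# hypothetical, for illustration only.

class Channel:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind      # 'clip', 'aux', 'master', or 'output'
        self.sends = []       # (destination, blend_mode, mix_order)

    def route_to(self, dest, blend_mode='over', mix_order=0):
        # Clip channels feed aux or master channels; aux channels feed
        # other aux channels or masters; any channel can feed an output.
        self.sends.append((dest, blend_mode, mix_order))

clip1   = Channel('clip1', 'clip')
aux1    = Channel('aux1', 'aux')
master1 = Channel('master1', 'master')

# A channel can have multiple destinations...
clip1.route_to(aux1, blend_mode='screen', mix_order=0)
clip1.route_to(master1, blend_mode='add', mix_order=1)
# ...and aux/master channels can have multiple sources.
aux1.route_to(master1, blend_mode='over', mix_order=0)
```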

Master channels are special: a single set of controls and effects applies to all of them. This makes it possible to route different video to different outputs while applying the same effects to every output, with one master fader over all outputs.

All channels also have insert effects (another audio term): effects can be dragged and dropped onto each insert slot to engage them. To control all this functionality, a UI system was designed and built using custom gadgets that are easy to use and create but have as little impact as possible on performance.

An impactful decision made very early in Luminosity's creation was to separate the UI processes from media processing. The result is that the majority of the UI's performance hit is on the CPU so that even on single-GPU systems the UI has no perceptible impact on the media frame rate.

Setting up the system this way did make for a more difficult design process, but the performance gains far outweigh the extra effort. It also made the transition from the UI controlling one machine to controlling multiple machines much easier.
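
The post doesn't say which transport Luminosity uses between the UI and media processes, but as one plausible realization of such a split, a UI process could send control changes to the render processes over OSC (here using the python-osc package) so none of the UI's cost lands on the media side:

```python
# UI process sketch: control changes travel to the render nodes as OSC
# messages, so the UI's cost stays on its own CPU. The transport (OSC)
# and package (python-osc) are assumptions, not Luminosity specifics.
from pythonosc.udp_client import SimpleUDPClient

render_nodes = [("10.0.0.11", 9000), ("10.0.0.12", 9000)]  # example IPs
clients = [SimpleUDPClient(host, port) for host, port in render_nodes]

def set_master_fader(value):
    # Broadcasting to every node is what makes going from one machine
    # to many machines a small step.
    for client in clients:
        client.send_message("/master/fader", value)

set_master_fader(0.8)
```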

Since the majority of the jobs Luminosity was designed for are large-scale video projection / mapping / distribution installs, it was necessary to build a comprehensive system to video-map channels individually, as well as to split content across multiple physical media servers while using a single UI to map and control all the nodes.

To do this a "server setup system" was implemented. The server setup specifies the number of machines, the number of GPUs on each machine, and the number of outputs used on each GPU.
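
As a hypothetical illustration, such a server setup might boil down to a description like this (field names are ours, not Luminosity's):

```python
# Hypothetical server setup: machines, GPUs per machine, and outputs
# per GPU - the three things the text says are specified.
SERVER_SETUP = [
    {"machine": "node-a", "gpus": [{"gpu": 0, "outputs": 2},
                                   {"gpu": 1, "outputs": 2}]},
    {"machine": "node-b", "gpus": [{"gpu": 0, "outputs": 2}]},
]
```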

When setting up specific nodes/outputs that are stitched together into a larger canvas, a full map resolution must be specified, as well as the portion of the total resolution that each individual node needs to render. Tools were then built so that video generators (synths or COMPs) render only that specific portion on each node. This allows full-scale, full-resolution realtime scenes to be rendered, mapped, blended and synced across nodes.
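
A minimal sketch of the crop arithmetic involved, assuming for simplicity that the canvas is split into equal horizontal slices (real layouts can be arbitrary rectangles, and all values are examples):

```python
# Crop arithmetic sketch: each node renders one equal horizontal slice
# of the full map. Resolutions are example values.
FULL_W, FULL_H = 7680, 1080   # full map resolution

def node_region(node_index, node_count):
    """Return (x, y, w, h): the slice this node renders."""
    w = FULL_W // node_count
    return (node_index * w, 0, w, FULL_H)

regions = [node_region(i, 4) for i in range(4)]
# -> [(0, 0, 1920, 1080), (1920, 0, 1920, 1080), ...]
```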

When using TouchDesigner's built-in Sync CHOPs in Pro mode, with Quadro-series cards and their Sync cards, frame-accurate sync works across multiple nodes (each GPU is considered a node, running one instance of TouchDesigner, whether on a single computer or spread across multiple machines).

There is also a video conversion and distribution system: full-map-resolution videos can be dropped onto clips and, after format and frame rate settings are specified, the converter/distributor crops and renders out a new movie for each node to play, according to the crop settings specified in the server setup.
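
The post doesn't say how Luminosity renders the per-node movies internally; purely as an illustration of the same crop-and-render step, here is a sketch that shells out to ffmpeg's crop filter:

```python
# Crop-and-render sketch using ffmpeg (an assumption - the post does
# not say what Luminosity uses to render the per-node movies).
import subprocess

def render_node_movie(src, dst, x, y, w, h, fps=30):
    # ffmpeg's crop filter takes out_w:out_h:x:y.
    subprocess.run(["ffmpeg", "-i", src,
                    "-vf", f"crop={w}:{h}:{x}:{y}",
                    "-r", str(fps), dst], check=True)

# One movie per node, using the same slices as in the earlier sketch.
regions = [(i * 1920, 0, 1920, 1080) for i in range(4)]
for i, (x, y, w, h) in enumerate(regions):
    render_node_movie("full_map.mov", f"node_{i}.mov", x, y, w, h)
```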

Remote 3d and 2d mapping and blending tools are present to map each output on each node using only the master server's user interface. Backup mappings can also be created for outputs where projectors are double-stacked, in case the first projector fails. Edge blending can be done in both 2d and 3d space, making edge blending across 3d surfaces very easy and intuitive.
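
As an aside on how edge blending commonly works (a general technique, not necessarily Luminosity's exact curve): each projector's image is attenuated across the overlap with a gamma-corrected ramp so the two contributions sum to full brightness in linear light.

```python
# Common edge-blend ramp (a general technique; GAMMA is an assumed
# display gamma, and the curve is not claimed to be Luminosity's).
GAMMA = 2.2

def blend_ramp(t):
    """t in [0, 1] across the overlap; returns the gamma-encoded
    attenuation to apply to the projector's pixels there."""
    return t ** (1.0 / GAMMA)

# The left projector fades with blend_ramp(1 - t) and the right with
# blend_ramp(t): after the display's gamma, their linear-light
# contributions sum to (1 - t) + t = 1, i.e. uniform brightness.
```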

As many mapping jobs are extremely complex, it was necessary to build a pre-visualization system to determine the many unknowns that exist in any given install - for example, the number of projectors needed, lens throw, render perspective, projection image size, etc.
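
The math behind some of those unknowns is standard projection optics - throw ratio is defined as throw distance divided by image width - so a previs can answer them directly. A small sketch, with example values:

```python
import math

# Standard projection optics: throw ratio = throw distance / image
# width. Values below are examples, not from any real install.
def image_width(throw_distance, throw_ratio):
    return throw_distance / throw_ratio

def projectors_needed(surface_width, throw_distance, throw_ratio,
                      overlap=0.5):
    # n images of width w with (n - 1) overlaps must cover the surface:
    # n*w - (n - 1)*overlap >= surface_width
    w = image_width(throw_distance, throw_ratio)
    return math.ceil((surface_width - overlap) / (w - overlap))

print(image_width(10.0, 1.8))              # ~5.56 m wide image
print(projectors_needed(20.0, 10.0, 1.8))  # 4 projectors
```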

Luminosity's PreVis module allows the user to build a 3d scene of the venue and of the projection surface, to create a projector for each output, and then to map and blend each projector virtually onto the 3d model. Lights are used as projectors to project an image onto the geometry. The image is then mapped using either the 3d or 2d mapping tools and blended.

All the PreVis mapping data is then copied to the actual mapping outputs. If, at the event, the physical projector placement and projection surface are accurate to the PreVis, then the majority of the mapping and blending is already done when the project is loaded for the first time on site, with the maps requiring only slight adjustment. If using 3d edge blending, the blends barely need any adjustment on site.

Random points

Most control panels have local preset systems which allow presets for each control panel to be stored and recalled easily. The mixer also has presets so that different routing and blending setups can easily be stored and recalled.

Most user interface gadgets can learn MIDI, OSC, DMX and Ableton (using custom M4L patches) by selecting the corresponding mode and then clicking on the newly color-coded gadget and moving the controller.
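
A generic MIDI-learn pattern works roughly like this (an illustrative sketch, not Luminosity's code): arming a gadget causes the next incoming controller message to be bound to it.

```python
# Generic MIDI-learn sketch (illustrative, not Luminosity's code).
bindings = {}        # (channel, controller) -> gadget name
armed_gadget = None  # set when the user clicks a color-coded gadget

def arm(gadget_name):
    global armed_gadget
    armed_gadget = gadget_name

def on_midi_cc(channel, controller, value):
    global armed_gadget
    key = (channel, controller)
    if armed_gadget is not None:
        bindings[key] = armed_gadget         # learn: bind CC to gadget
        armed_gadget = None
    elif key in bindings:
        set_gadget(bindings[key], value / 127.0)

def set_gadget(name, normalized):
    print(f"{name} -> {normalized:.2f}")

arm("master_fader")
on_midi_cc(0, 7, 100)   # first message after arming binds the CC
on_midi_cc(0, 7, 64)    # subsequent messages drive the gadget
```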

Custom components can be built using either the default effect or synth component. Once built they will appear in either the synth or effect lists and can be dropped on clips or effect slots. When using the default effect or synth COMPs all gadgets will be part of the preset and controller systems.

Clip player

Clips can be video, video with audio, audio, or controller data (OSC, MIDI, CHOP animation, etc.). Any number of clip channels, scenes, and banks can be specified. Clicking a clip triggers it; pressing a scene button triggers all the clips in that scene. When the "Pre-Load Movies" preference is on, all the clips in the current bank are preloaded into memory for fast, stutter-free triggering of simultaneous clips. When another bank is selected, the previous bank is unloaded and the new bank is pre-loaded; Luminosity remains fully functional during this process. Banks can be previewed by pressing shift and hovering over the bank with the mouse; this doesn't load the bank but lets the user look at its icons to determine which bank to load next.
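
The bank pre-load behaviour described above could be sketched like this (illustrative only):

```python
# Bank pre-load sketch (illustrative): the outgoing bank is released
# and the new bank is loaded into memory before use, so triggering
# simultaneous clips stays stutter-free.
preloaded = {}     # clip path -> loaded media handle
current_bank = []

def load_media(path):
    ...  # stands in for the real decode/upload work

def select_bank(bank_paths):
    global preloaded, current_bank
    preloaded.clear()                                    # unload old bank
    preloaded = {p: load_media(p) for p in bank_paths}   # pre-load new
    current_bank = bank_paths
```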

We had a few more questions for Keith...

Derivative: What prompted the building of Luminosity? Was it inspired by Fragtool? By working with Vello and Epic on these huge productions and seeing the need for it?

K: The original reason for building Luminosity was to create a tool that could be used both for live production and for non-realtime video production. The idea quickly evolved into the form of a master control system where all data streams in a performance could be centralized. A sort of master sync hub where video could be produced, captured and output, where audio could be produced and analyzed both for output and for video control, and where lighting controls could be both taken in and output, as a sort of lighting synchronizer/processor.

I have had this idea for quite a long time. Early on, when building Fragtool, I saw that Fragtool could be just a single part of a larger system. Vsquared's Epic system was definitely a big inspiration. Its unique methods of creating zones and using global sets of geometry, lighting and camera positions were super powerful for outputting video at live events. Peter Sistrom introduced me to the power of TouchDesigner's storage system, which now plays a key role in Luminosity's data systems.

D: Curious also as to the amount of time and iterations something this complex and detailed takes...

K: Early on I made one major change in the way things were organized in the system, because I saw some future obstacles with the first method of organizing channels, clips and routing. Since then everything has been built with paths/methods for future features to be easily implemented. When I first started building it I spent weeks just thinking, designing and testing different methods in order to get the most flexibility and the best possible performance. When building such a large system, a lot of time and thought was needed in order to make the system as efficient as possible - otherwise performance would be very slow. Using a lot of scripts to set parameters rather than exporting, splitting CHOP networks into cooking/non-cooking networks so everything doesn't need to cook every frame, and writing shaders to streamline certain video processes were a few of the methods I implemented.
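
As an example of the first of those methods, a TouchDesigner script can set a parameter only when a value actually changes, instead of keeping an export dependency cooking every frame. This sketch uses a CHOP Execute DAT callback; the operator name 'level1' is hypothetical:

```python
# TouchDesigner CHOP Execute DAT callback (runs inside TouchDesigner;
# 'level1' is a hypothetical Level TOP name, for illustration).
def onValueChange(channel, sampleIndex, val, prev):
    # Runs only when the channel's value changes, so the parameter is
    # written on change rather than pulled by an export every frame.
    op('level1').par.opacity = val
```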

D: You frequently reference audio tools as inspiration for Luminosity - and I guess to a large degree most video tools, VJ tools, sequencers, editors etc. are modelled on audio tools. But you do come from an audio production and engineering background. Does that background give you extra insight? Make things easier?

K: I think it does. Historically, video and audio have been treated quite differently in typical broadcast scenarios - I think because of bandwidth and their totally different switching aspects. However, now, with emerging techniques from the live video/mapping/VJ scene and with fast new hardware, we can start to treat video like audio, using and processing it in similar ways.

D: What makes you so capable and really good at building complex systems?

K: I'm not sure if I'm really good in comparison to others - I just don't like to give up on an idea and usually take a brute-force type of approach - put the time in until a method is found. Most of the time I'll try to do things the best way rather than the fastest, which is not necessarily easy. Sometimes a lot of research or learning must be done, and/or trial and error. It helps to think of the big picture and ask some questions. What is the purpose of the tool now? What is going to keep it from performing in the best way possible? Finally, a huge question: what are the strengths and weaknesses of tools that have previously been created? In the end a decent method is usually found.

D: Do you start with a written idea? Sketches? Flowcharts? What is your process?

K: I do, but more in a TouchDesigner way. I do a lot of experiments to find the best methods.

D: Do you regularly have 'aha!' moments when you're working on other projects or performing, for example? Like, "aha! I know how to do that better/faster."

K: A lot - especially over the past year. There are so many ways to accomplish the same thing that it seems there is always a better method.

D: Why TouchDesigner vs. any of the other tools out there?

K: The way TouchDesigner has been designed is nearly unbeatable. The tools and methods Derivative have created make developing tools unbelievably fast. I don't think a single person could create such a complex tool as Luminosity in such a short period of time in any other environment. I can't thank the Derivative team enough for all their hard work in creating the incredible software TouchDesigner.

D: What next Keith, what next?! :-)

K: I still want to add a number of key features to Luminosity. Lately I've been developing some interesting 2D/3D fluid/particle systems that I'm really excited about. I have an idea to create a new formula, a fractal fluid... Also, at some point, once timeline and animation features are built into Luminosity, I would like to build a raytracer for both realtime and non-realtime rendering in Touch.

A note about Vello and V Squared Labs

I definitely want to thank Vello Virkhaus and V Squared Labs for their contribution to Luminosity and the opportunities to use it on some large scale events such as Blizzcon. Without Vello's support a lot of the functionality of Luminosity wouldn't be there.

A big thanks to Keith for taking the time to talk to us, and congratulations on this very fine product. And for anyone interested in learning more about Luminosity, please contact Keith directly!
