A year ago we spoke to Trypta's Keith Lostracco for our review of TouchDesigner-built Performance Interfaces by members of the community. Capable of producing the intricacy you see above, Keith's Fragtool is an incredibly powerful and elaborate application that has become a key element in Trypta's visuals, enabling the creation of 3D fractal animations in real time to music in a way that was not previously possible with other software. Here are some snippets of that conversation:
Background: In 2002 Keith designed and built a three-room recording studio near Nelson BC, started to play bass and drums, and began producing electronic music with his brother Greg. In 2006 Keith dove into 3D animation (Maya) and performing live visuals at local festivals and shows for Greg's live electronic act. In 2008, after using Max/MSP at a live event, Keith discovered TouchDesigner and soon after began using TouchDesigner exclusively. In 2012 the brothers started Trypta, "an audio visual act intended to blend the mediums of live audio and synchronized live visuals with the feel and presence of a movie while using the audience in the performance - it's hard to describe in words!" (It really is!)
Keith Lostracco's FragTool
Keith: With FragTool I created a component to simplify slider creation, naming, type, functionality, and ranges, plus the ability to receive preset changes while having as little impact on performance as possible. There are two main types of sliders: integer or float. The first type is just a value slider; the second is a value/mod slider, where a button on the right opens another two sliders and a drop-down menu for selecting a mod source and adjusting the gain and offset of that source. Inside the component that contains each group of sliders is a table with all of the attributes of the sliders; a couple of replicator components then create the sliders dynamically when a new parameter is added to the table. This is the beginning of a dynamic menu: eventually there will be a table for each fractal type and a switch that selects the appropriate table when a fractal type (or render engine) is selected, and in turn the replicator will recreate all the sliders.
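The table-driven approach Keith describes can be sketched outside of TouchDesigner as plain Python. This is a minimal illustration, not the actual Replicator COMP API: each row of a hypothetical parameter table (name, type, range, default) produces one slider object, the way a replicator rebuilds its copies whenever the table changes.

```python
from dataclasses import dataclass

@dataclass
class Slider:
    name: str
    kind: str      # "int" or "float"
    low: float
    high: float
    value: float

def build_sliders(table):
    """Create one slider per table row, mimicking a replicator
    that regenerates its copies from an attribute table."""
    sliders = []
    for name, kind, low, high, default in table:
        val = int(default) if kind == "int" else float(default)
        sliders.append(Slider(name, kind, float(low), float(high), val))
    return sliders

# hypothetical parameter table: name, type, range, default
params = [
    ("iterations", "int",   1,    30,  12),
    ("scale",      "float", -4.0, 4.0, 2.0),
]
sliders = build_sliders(params)
```

Swapping in a different table (say, one per fractal type) regenerates the whole slider set, which is the idea behind the planned dynamic menu.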
There are over 100 parameters to control the fractal and the other 4 pages: all your standard fractal, coloring, lighting, and rendering parameters.
The controls on the top left area are for presets. The presets recall everything - animation keyframes and channels, mod sources/settings, and all other settings. Every parameter is saved out to a DAT table when storing a preset and scripts update all the UI elements back to their original settings.
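The store/recall cycle Keith describes (every parameter flattened into a DAT table, then scripts pushing values back into the UI) can be sketched as a simple round-trip. This is an assumed, simplified model of that idea, not Trypta's actual preset scripts:

```python
import ast

def store_preset(params):
    """Flatten every parameter into (name, value) rows, the way
    a preset might be written out to a table when stored."""
    return [(name, repr(value)) for name, value in sorted(params.items())]

def recall_preset(rows):
    """Rebuild the parameter dict from the stored rows; the caller
    would then push these values back into the UI elements."""
    return {name: ast.literal_eval(value) for name, value in rows}

state = {"scale": 2.0, "iterations": 12, "mod_source": "kick_rms"}
table = store_preset(state)
restored = recall_preset(table)
```

The key property is that the round-trip is lossless, so a preset recall restores every slider, mod setting, and keyframe value exactly.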
Escape-time fractals are a family of fractals generated by an escape-time algorithm; the Mandelbrot and Julia set fractals are classic examples. One of the two fractals (the main shape) used in Escape Time is a combination of an escape-time fractal and a KIFS, or Kaleidoscopic Iterated Function System.
I call it Trypta Box as it's basically a mutation of the classic Mandelbox fractal and a KIFS with a bunch of custom parameters. To make a long story short: for every position in 3D space, the fractal algorithm first calculates whether the surface exists using the Mandelbox algorithm, then folds the space (point) using the KIFS.
It then does that over and over to get more detail. If the surface still exists at that position after the set number of iterations, it shades the pixel that lies in line with that point and the camera position.
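The iterate-until-escape idea is easiest to see in the classic 2D case. The sketch below is the textbook Mandelbrot escape-time loop, not the GPU distance-estimation code Trypta Box actually uses; it just shows the core mechanic of iterating a formula and checking whether the point "escapes" within a set number of iterations.

```python
def escape_time(c, max_iter=50, bailout=2.0):
    """Iterate z -> z*z + c and return the iteration at which |z|
    exceeds the bailout, or max_iter if the point never escapes
    (i.e. it is treated as inside the set and gets shaded)."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > bailout:
            return i
    return max_iter

inside = escape_time(0 + 0j)    # origin never escapes
outside = escape_time(2 + 2j)   # escapes immediately
```

In a 3D renderer the same per-point test is run along each camera ray, and the escape count (or distance estimate) drives the coloring.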
Escape Time was one of the first fractal music videos we did in TouchDesigner. At the time we were so excited by it that we hit the 'upload' button before it was really done (both audio and video).
It was the first time we comped two different fractals as well as OpenGL geometry. It was also the first iteration of using the stems of a track (individual audio components) to modulate certain parameters of the fractal. I discovered there were a lot of challenges to overcome to be able to do it well and efficiently.
Because of the way humans hear sound (amplitude and frequency) and the actual rate of sound data, the amplitude data from audio (even when split into different frequency ranges) doesn't necessarily correlate to what you would expect to see, or at least to what is visibly relatable to the current sound.
Mostly we found it takes: splitting up the sounds (before they are mixed, i.e. the stems); then editing them to get the most "up front" version of that particular sound; then using FFT transforms to make new sounds that only take energy from the desired frequency; and finally taking that sound's RMS and converting it to 60 Hz data with some envelope shaping to smooth out the transients. Since Escape Time we've come up with a better system we've used for some upcoming releases.
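The last two steps of that chain, RMS conversion to 60 Hz control data plus envelope shaping, can be sketched as follows. This is a minimal stand-in for what audio CHOPs would do, with assumed frame and sample rates, not Trypta's actual pipeline:

```python
import math

def rms_per_frame(samples, sample_rate=44100, frame_rate=60):
    """Chop the audio into one chunk per video frame and take the
    RMS of each chunk, yielding one control value per 60 Hz frame."""
    hop = sample_rate // frame_rate
    frames = []
    for start in range(0, len(samples) - hop + 1, hop):
        chunk = samples[start:start + hop]
        frames.append(math.sqrt(sum(s * s for s in chunk) / len(chunk)))
    return frames

def smooth(values, attack=0.5, release=0.05):
    """One-pole envelope follower: fast attack, slow release,
    to shape transients in the control signal."""
    out, env = [], 0.0
    for v in values:
        coeff = attack if v > env else release
        env += coeff * (v - env)
        out.append(env)
    return out

# a short 220 Hz sine burst followed by equal-length silence
sig = [math.sin(2 * math.pi * 220 * t / 44100) for t in range(4410)]
sig += [0.0] * 4410
env = smooth(rms_per_frame(sig))
```

The envelope rises quickly during the burst and decays slowly through the silence, which is what keeps a fractal parameter from flickering on every transient.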