  Nov.15.13 Patrik Lechner's Relentlessly Non-linear, Fully Generated Universe

rt3

There's an authenticity to Patrik Lechner's generative audio-visual compositions that is deeply and persistently gripping. With each work published over the last couple of years Patrik has had us increasingly intrigued by his processes, methodology and influences. Some weeks ago we posed a few opening questions to Patrik, who responded in generous detail, starting with the declaration: "I'm in love with TouchDesigner!"

Attributing to his work a central principle based on iteration and improvisation, Patrik expresses a strong aversion to linear editing tools, which for him put what is lively in an aesthetic in danger of being deadened. He goes on to describe a very dynamic practice that requires (a) first building modular tools to create the work, without getting indefinitely lost in tool-building, and (b) producing work that is as unrehearsed as possible, procedurally. The work speaks for itself, but what Patrik shares below adds volumes to the conversation and satisfies much of our curiosity. Interesting and strongly recommended reading for all of our users; we think you'll appreciate it!




start here: experimental electronic music > freedom in improvisation > synthesize anything you can imagine > confronting self with self > understand what you're doing > generate everything

I'm 27 years old and actually come from experimental electronic music. I started making music when I was 16 because I saw some kind of freedom in art, and especially in improvisation. At that time I was playing around with an electric guitar, but I soon discovered electronic music, which, to me, seemed to mean total independence and freedom of expression because you could simply synthesize anything you can imagine.

Sitting in front of FruityLoops I soon got frustrated and dove into Reaktor, into Pure Data and finally Max/MSP. Since that time I wake up with Max (I programmed an alarm clock to play random mp3s) and go to sleep with it. For me as a musician this seemed to be the perfect environment. From that time on I also decided to work almost exclusively with synthetic sounds that I made myself; I almost never use samples that I or somebody else recorded with a microphone. The idea simply is to confront myself with myself and as few other aesthetic influences as possible while working on something, and to try to really work from the ground up, always trying to understand what I'm actually doing. That may sound stupid and I'm not saying it's great; it's just how I work, and I learned a lot by doing it that way. I guess that is the reason why I only generate geometry inside TouchDesigner and don't use any pre-rendered material. I simply find it more interesting to work in this procedural way than to model something in every detail.

F-Sult

visual inspiration > finding methods for oneself > not knowing is interesting > tools & processes > discovering TouchDesigner > build your own > the science behind > diving deep

At some point in time I started doing visual stuff. I think my initial interest in visual 3d stuff arose from seeing Gantz Graf by Alex Rutterford. I find it very funny when I read that Rutterford said that there is no generative element in that video, since I didn't know that for a long time and was (and am) convinced that a generative approach is the way to go if I want to do anything like that. I think it's often more inspiring and productive if one does not know how something is made, since it provokes a lot more thought and one will find methods that are maybe better for oneself.

Still using Pd with GEM, an environment for realtime OpenGL rendering, I was frustrated by my own ignorance and the seemingly overcomplicated process of doing things there, so I moved to vvvv. The same thing happened there, and after a longer time working with Jitter I finally arrived at TouchDesigner.


F-Sult
It was quite a coincidence that I discovered TouchDesigner. I saw some YouTube tutorial about Max, and the guy doing it mentioned that Jitter is completely obsolete (which it's not) since there is TouchDesigner. I tried it and was... I don't know what to say. I know Jitter and vvvv are great and sure have their advantages, but I felt like I had finally found the tool I was looking for. There is so much about TouchDesigner that I found unbelievable. I mean, take this little ladder menu you get by middle-mouse-clicking on a parameter: why isn't this in every piece of software I know?!

If you are the kind of person who is interested in the science behind your tools and starts to build your own in order to create aesthetic output, I think you know what I mean when I say that there is always the danger that you develop a tool for three months and never actually get around to using it. The science behind many things (for example, a simple digital lowpass filter) is so deep that one can dive into it to an arbitrary extent and lose sight of the actual aim of making music.
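To make that concrete: even the "simple" lowpass Patrik mentions already hides a lot of theory. The sketch below is one of the most minimal digital lowpass filters there is, a one-pole smoother in plain Python; the coefficient formula is a standard approximation, and phase response, stability and cutoff accuracy are each rabbit holes of their own.

```python
import math
import random

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100.0):
    """Minimal one-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1]).

    The coefficient below is a standard approximation for a given
    -3 dB cutoff; even this tiny filter opens onto deeper questions
    of phase response, stability and cutoff accuracy.
    """
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

# Smooth a noisy control signal with a 5 Hz cutoff.
noisy = [random.uniform(-1.0, 1.0) for _ in range(1000)]
smoothed = one_pole_lowpass(noisy, cutoff_hz=5.0)
```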

TouchDesigner seems to reinforce the balance between creation of tools and actual aesthetic expression, and this is invaluable for me.

don't starve

working > teaching > learning > non-linearity > improvisation > controlling data in realtime > meaning


I'm actually an audio engineer at an Austrian TV station, a multimedia technician/programmer and a student of media technology, but I have also taught Max/MSP to the audio technicians of the Burgtheater in Vienna, for example, and I privately tutor an American composer in Max. I also work as a scientific assistant at the Institute for Creative Media Technology at a university in Austria. I studied a bit of philosophy (by the way, for people interested in visual stuff and philosophy I highly recommend the works of Husserl and, my biggest influence, the arcane and speculative works of Vilém Flusser, who, for example, claims that video is the actual medium of philosophy), made a lot of music, and have had concerts in Austria, Germany, Italy, Dubai and Shanghai.

I'm very interested in building my own tools for what I do, because I hate working with things that I don't understand to some extent, and I can't live without modularity and the possibility of adding whatever features I need. I also have a deep and emotional reluctance to working with linear editing tools, be it in audio or video. I simply believe in a certain liveliness in aesthetics that is in danger of being killed (at least for me) if things repeat without alteration; it becomes so static.

So iteration and improvisation became a central principle for me in making music, and I'm sure it also has a big influence on my visual work. Music and video production are kind of the same thing for me: it's just mapping data in a way that feels appealing, controlling this data in realtime, and exploring ways of generating mappings and data that are "meaningful".

Patrik's Equipment

I'm using a lot of MIDI controllers for the music. With those I'm controlling a custom Max application that grew over, uhm, maybe 10 years now. There are a lot of FX, resampling tools, sequencers and of course synths in there. Also very important for me is my analog modular synth. I have a case full of modules that specifically deal with CV (a sequencer, clock dividers, envelopes, some logic stuff, etc.), which I also use to control the music and the visuals.

Now there is a lot of data ready to be visualized. I hardly use any FFT stuff, since I already know the spectrum of a signal if I know it is, for example, a sawtooth with an amplitude x through a lowpass with a cutoff y. All the data one could want is already there; there is no process of analysis. Therefore it is more or less easy to create some kind of coherence between audio and video, or at least it is technically simple. Analysis only happens in the sense that I have to find out which data is important and makes sense to represent visually.
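A minimal sketch of what "the data is already there" means in practice: since the synth's parameters are known, the mapping to visual parameters is direct, with no FFT in between. The parameter names and ranges here are hypothetical, not Patrik's actual mappings.

```python
def visual_params(saw_amplitude, lowpass_cutoff_hz):
    """Map known synthesis parameters straight to visual parameters.

    Because the sawtooth was generated, its amplitude and filter
    cutoff already describe the sound completely; no spectral
    analysis is needed. The mapping itself is the creative choice.
    """
    brightness = saw_amplitude                       # 0..1 drives overall level
    detail = min(lowpass_cutoff_hz / 10000.0, 1.0)   # open filter -> finer geometry
    return {"brightness": brightness, "detail": detail}

print(visual_params(saw_amplitude=0.8, lowpass_cutoff_hz=2500.0))
```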

To get more technical on this point: I'm using one computer for all the music. All the MIDI stuff is connected to it, and this computer is used to select which data is important and to send it to another computer, the TouchDesigner machine, via OSC.
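Patrik doesn't name the software on the sending side, but the idea is easy to sketch with today's python-osc package (pip install python-osc); the addresses and values below are hypothetical stand-ins for the control data he forwards.

```python
from pythonosc.udp_client import SimpleUDPClient

# The TouchDesigner machine, receiving on an OSC In CHOP/DAT.
client = SimpleUDPClient("192.168.0.10", 7000)

# Forward only the control data that matters for the visuals.
client.send_message("/kick/env", 0.92)       # bass drum amplitude envelope
client.send_message("/filter/cutoff", 0.35)  # normalized lowpass cutoff
```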

Illustration 1: My OSC recording/playback COMP containing the data for rt3

I created a rather simple COMP specially to record all the incoming data, inspect it, save it, load recorded CHOP data and play it back to synthesize visuals from it, since realtime recording of the video is something I have done many times and it was always a bit frustrating. If you don't have a commercial license you lose a lot of FPS, since you can't do it hardware-accelerated, and I only have a crappy SD camcorder with which I did my early stuff. It looks really rough.
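The core of such a recorder/player is simple enough to sketch outside TouchDesigner: timestamp every incoming control message so the whole improvisation can be replayed later and the visuals re-rendered at full quality. This is only the general idea, not Patrik's COMP; the handler and send functions are hypothetical.

```python
import time

recording = []   # list of (seconds_since_start, address, value)
t0 = time.time()

def on_osc_message(address, value):
    """Called for every incoming OSC message; stores it with a timestamp."""
    recording.append((time.time() - t0, address, value))

def play_back(recorded, send):
    """Re-send recorded messages with their original relative timing."""
    start = time.time()
    for t, address, value in recorded:
        delay = t - (time.time() - start)
        if delay > 0:
            time.sleep(delay)
        send(address, value)
```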

Also, the whole improvisation idea is a big source of errors. I don't know how many beautiful tracks I've made while forgetting to record the audio, the video, or the data. If I have only the audio, I just have to do it again, or rather do something different. This, for example, happened twice when I worked on the "F-sult" video. I really liked the first audio take a lot more than the third one that is now used. It was quite different too, but I forgot to record the data and only had the audio, so I couldn't make a video out of it. The TouchDesigner patch for "F-sult" was already finished, and it didn't make any sense to go for FFT and envelope following when everything was already set up in a much more detailed way. Also, I only record stereo, not multitrack. There is only minimal "postproduction": some EQ, a bit of compression, if that.

With rt3, similar errors happened, and you can actually hear the sound clipping and distorting quite a bit in the middle of the piece. Since I have no limiter before analog-to-digital conversion, I do have to take care of the levels, and if I get too enthusiastic during recording the result is distortion, but I liked this version so I didn't care.

The making of rt3 > describing the step-by-step process > the bigger picture > play around > improvise > technical achievements as starting point

You can see in rt3 there are six important structures on the top layer:

1. The OSC recorder/playback COMP
2. A COMP that allows attenuation of the signals via an Animation COMP
3. The actual image generation
4. A "preroll" COMP
5. A simple Record CHOP
6. Finally, the Movie Out TOP

To go through one by one:

1. The OSC recorder has already been explained.

2. The second COMP is there because I mix my audio signals live with an analog mixer and, as I said, only record stereo. So TouchDesigner is, for example, receiving the amplitude envelope of the bass drum generated by the analog modular synth. That does not mean the bass drum is actually turned up on the mixer and that one can actually hear it. Therefore I have two options: deactivate the trigger of the bass drum instead of muting it, or do some postproduction on the recorded control signals. (Mixing digitally, or having an analog mixer with post-fader direct outs, would be a way to avoid this, but the one I'd like to get sooner or later is a bit costly.)

3. What can I say, that's where data becomes video.

4. Quite important for me is what I call my "preroll" COMP. It is controlled via OSC from Max. In Max I start the preroll and generate four short sine tones, and at the same time the preroll COMP composites a countdown onto the incoming image. Since I don't record the audio in TouchDesigner, I have to synchronize audio and video afterwards, and this is a big help.

5. I used that to record data generated by the video. It may be quite subtle, but I actually used the position of the centre of the 3d world relative to the camera to pan the sound around a bit in postproduction; distance also controls the reverb amount (see the sketch after this list). As I said, there was no real postproduction and I just wanted to test this, but I plan on going further with it. I'd like to play around with ambisonics controlled by TouchDesigner, since I have already worked with ambisonics quite a bit, and I like the idea of the video having more influence on the audio as well. I'm not sure if this could become very computer game-ish; I'm very much into mono and stereo and very skeptical regarding multichannel things.

6. Just recording the video.
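Point 5 is easy to miss, so here is a hedged sketch of that video-drives-audio mapping: pan follows the horizontal angle of the world origin as seen from the camera, and reverb amount follows distance. The ranges and the exact geometry are assumptions, not the actual network.

```python
import math

def audio_params(cam_pos, cam_heading_deg, world_origin=(0.0, 0.0, 0.0)):
    """Derive pan and reverb from the camera's view of the world origin.

    cam_pos: (x, y, z) camera position; cam_heading_deg: camera yaw.
    Returns (pan, reverb) with pan in [-1, 1] and reverb in [0, 1].
    """
    dx = world_origin[0] - cam_pos[0]
    dz = world_origin[2] - cam_pos[2]
    distance = math.hypot(dx, dz)

    # Signed horizontal angle between camera heading and the origin
    # (no wrap-around handling; this is only a sketch).
    angle = math.degrees(math.atan2(dx, dz)) - cam_heading_deg
    pan = max(-1.0, min(1.0, angle / 90.0))  # -1 = hard left, +1 = hard right

    reverb = min(distance / 50.0, 1.0)       # farther away -> wetter
    return pan, reverb
```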

I learned a lot while working on this project. It was also a big relief for me, since I have so little time to work on these things, and I'm happy that I could finally finish something again. It kind of started because I met a friend who does a lot with vvvv. He showed me a bit of his vvvv skills and things other people are creating with vvvv, and I showed him a bit of TouchDesigner. I was kind of impressed by the video he showed me, but I also felt inspired and challenged. I just thought, come on, I should sit down and see what I can do.

rt3

The bigger structure of creating this stuff is more or less always the same (there are exceptions): play around in TouchDesigner until something looks the way I want, think about the music for some time, work on the TouchDesigner patch, start improvising with it a bit, and try to find sense between video and audio. I try not to play it very often before the recording, though, because an improvisation kind of loses something if you try to repeat it.

Technical achievements are also often a starting point for me. For example, I've been working for some time on a camera COMP that incorporates easily controllable DOF, simulates automatic light adjustment, is capable of a vertigo effect and stuff like that. Also, the control of translation is a bit more customized to my needs. This was definitely one thing I wanted to try out with this project. (See screenshots below.)
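The vertigo effect he mentions (the dolly-zoom) has a tidy closed form: as the camera moves, re-solve the field of view each frame so the subject keeps the same apparent size. This is the standard construction, not necessarily how his camera COMP does it.

```python
import math

def dolly_zoom_fov(subject_width, distance):
    """FOV in degrees that keeps subject_width filling the same
    fraction of the frame at the given camera-to-subject distance:
    fov = 2 * atan(width / (2 * distance))."""
    return math.degrees(2.0 * math.atan(subject_width / (2.0 * distance)))

# Dolly in from 10 units to 2: the FOV widens to compensate.
for d in (10.0, 6.0, 2.0):
    print(f"distance {d:4.1f} -> fov {dolly_zoom_fov(4.0, d):6.2f} deg")
```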

technical studies > inspiration: Designing Sound > look deep, concentrate on important parts

The only project that was a bit different until now was "don't starve". It was my first try at doing visual work (1) for somebody else, (2) without MIDI data (or other underlying data), and (3) for music that was already finished and not improvised. That was quite new to me and an interesting process. For the Austrian composer Asfast I did one other video in a similar way that is not released yet.

F-Sult is a good example of a piece that is really just a collection of technical ideas. I regularly produce small TouchDesigner files, small technical studies where I try to achieve different materials, try out Python ideas, do experiments with geometry, etc. You can see some of them in the screenshots too.

A big inspiration for all my work in video is the book Designing Sound by Andy Farnell. It's a great book on synthesizing physical objects, but also a great way to learn some tricks for approximating what you want if you can't (or don't have the skills to) simulate the real thing. It really encourages one to look deep but concentrate on the important parts of a phenomenon: for example, it shows why thunder sounds the way it does and how to simulate a simplified version. The pseudo-refraction and the pseudo particle system screenshots are good examples of that, I think. I find the refraction quite convincing although it has nothing to do with actual refraction. And if the final output is a video as fast as rt3, it just doesn't matter whether the refraction was correct or not (I didn't use this in rt3, though).
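The pseudo-refraction trick belongs to a family that is easy to sketch: instead of simulating light bending, offset each background-texture lookup along the surface normal's screen-space direction, scaled by a fake strength. Patrik's exact network isn't shown; this NumPy version only illustrates the general idea.

```python
import numpy as np

def pseudo_refract(background, normals, strength=20.0):
    """Fake refraction by normal-driven UV displacement.

    background: (H, W, 3) image array; normals: (H, W, 3) in [-1, 1].
    Each output pixel samples the background shifted along the
    normal's x/y components; no actual refraction is computed.
    """
    h, w = background.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    u = np.clip(xs + normals[..., 0] * strength, 0, w - 1).astype(int)
    v = np.clip(ys + normals[..., 1] * strength, 0, h - 1).astype(int)
    return background[v, u]
```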

Random Study: trying to get really many 3d particles via instancing without writing GLSL. They aren't one real particle system, but for some cases it's enough and quite cheap.
Pseudo Refraction Study

088 > python > new opportunities > looking forward to the future > next steps, quite a few of them

088 was released on the 12th of September, I think. The 13th is my birthday :) so, thanks Derivative!
I bought a commercial version a bit before that, so I've been playing around with 088 since maybe the beginning of September; I'm very happy with it and impressed by the frequent updates. I've been learning Python since its integration was announced, and there are very fun ways to do so, first and foremost the MIT OpenCourseWare computer science lectures. All the interface tweaks, Python, soft shadows, and also 64-bit are unbelievable additions to a product that, to me, seemed far ahead anyway.

I'm starting to work at This-Play, an Austrian company that mainly works with vvvv, so there will be a lot of knowledge exchange, and we also intend to organize workshops for Max, vvvv and TouchDesigner. This opportunity yields the chance to work with TouchDesigner in a more commission-based style, which is bound to get interesting.

Another project that is pending and has already started is a collaboration with the painter Christopher Stürmer. The aim here is to combine classical painting with digital projection, to sonify the whole thing and to present it as performative live-painting. I'm also supposed to write a book for Packt Publishing (details forthcoming, stay tuned) and to finish studying. So there is too much to do to make bigger plans right now.

Personally, I hope to go a lot further with audiovisual work. I'm currently preparing a short 15-minute set to be premiered on December 6th, 2013, and have plans to make one that is about 40 minutes to an hour. I'd like to experiment more with multichannel sound and with projection areas beyond the 16:9 screen. I'd of course like to explore 3d mapping too. Technically, I hope I can get more into multi-computer setups and learn some vvvv as an addition to TouchDesigner and for a deeper understanding of graphics (it's always good to see a thing from different perspectives, right?). In TouchDesigner I want to learn a lot more about SOPs, and a long-term plan is to gain fluency in GLSL, but that will have to wait, as my knowledge of Python is something I prefer to extend at the moment.

In audio I'm actually trying to learn a lot about physics at the moment and to experiment with physical modelling, building simplified simulations of sounds. In the future that might also be something I try to incorporate more into my visual work.

Work-in-progress for a live performance in Vienna in December - Velak gala at Brut
