There's a short sequence near the end of Canadian/Swiss filmmaker Peter Mettler’s new documentary feature The End of Time that received significant media attention even before the film had its North American premiere at TIFF in September. The sequence, which Scope called a “synapse-fryer”, was described by The Hollywood Reporter as a “sustained experiment in pure audio-visual abstraction, a dazzling montage of symmetrical shapes and overlapping patterns set to pulsing electronic music.” The 8 minutes of film in question were created with Mixxa, the massively flexible HD video mixing tool built entirely in TouchDesigner by Derivative founder Greg Hermanovic.
The End of Time Trailer
Segment, The End of Time Mixxa Sequence
Since 2002 Peter has gravitated towards mixing images to electronic music in live settings, exploring ways to perfect their performance by working with various combinations of tape, video mixers and playback machines. Then, seven years ago, at a party where they were both mixing visuals, Peter met Greg. Sharing the same passion for the craft and its actualization, they hit it off immediately, with Peter quickly becoming Mixxa's most ambitious and prolific user and, in a sense, its co-developer. The rest, as they say, is history.
This article includes an interview with Peter, who kindly made the time to speak with us the day before The End of Time’s TIFF premiere. Following Peter’s interview is another with Greg, who sheds insight into the design, evolution and raison d'être of Mixxa, in the process also recounting the developments and advancements in technology that have brought us to this point in time and tools. Together, what Peter and Greg disclose traces a substantial part of the history of VJing and tells the compelling story of what drives the desire to mix images to music LIVE.
"In exploring the nature of consciousness and seeing, I’ve been making films that try to immerse an audience into an experience of something in which they must also confront themselves. The viewing experience becomes more of a meditation around a subject than strictly an absorption of information. In previous films I’ve explored subjects like the Northern Lights or culture in Bali, but in some way they always explore the interconnectedness of things and how we perceive them." Peter Mettler interviewed about Petropolis - Aerial Perspectives on the Alberta Tar Sands, 2009
Mixxa Sequence, The End of Time (incorporating graphic elements of Harmonia by Bruno Degazio)
With his intuitive way of working often structured as a self-described voyage of discovery, Peter's fondness for mixing image compositions in the context of live improvised performance is not surprising. If you are familiar with the filmmaker's work, then you will appreciate that seeing these same skills and sensibilities applied to the process of live mixing is an opportunity not to be missed.
From the perspective of a filmmaker whose work is to slowly and deliberately shape stories over often great lengths of time, performing live visuals fulfills Peter's desire to improvise in the moment, usually in sync with live musicians or DJs. The process is one that pulls from the filmmaker's experience and skillset but that is also, according to Peter, very much dependent on the ability of the subconscious to form patterns and tell stories.
For Peter, Mixxa brings a spontaneity to working with film or video that cannot be achieved through traditional means of editing. Just as almost all the arts have potential for a free-form or improvisational execution (e.g. music, dance and writing), Peter says that this is difficult to achieve with film. Whereas to a degree it is possible to improvise with the camera in how you react to what is unfolding before you, the nature of editing has traditionally been a slow and mechanical process that does not lend itself to improvisation.
Peter sees in live mixing the exciting potential for a new form, both in its practical creative approaches as well as in its articulation and language. Not only does Mixxa facilitate an effective manipulation of images into a painterly vision in real time, but it also allows already familiar images to be re-contextualized, conjuring new meaning through varied associations. (It also, perhaps for the first time in image technology, allows a performer's subconscious narratives and musicality to flow relatively freely into a visual result that an audience can participate with in real time.)
"It's to do with sensing or divining currents of meaning through the use of technology. Live mixing is a bit like going into a technological trance, letting things unfold and emerge while reacting to themes, emotions, ideas which are coming at you. Sometimes I record the mixes, because there's no way I can keep track of all that's happening. In reviewing it, I can more easily see the embedded narratives and surprising juxtapositions," he explains. "There is a logic within that is much different than anything the rational structuring mind would have come up with. I find that exercising associative perception can give you a finer appreciation of the relationship between things." The VJ Book, Paul Spinrad, 2005
Performing live visuals in contexts ranging from improvisations with Fred Frith to the compositions played by the Art of Time Ensemble to live dance pieces and various techno dance environments can be demanding and complex.
"Illustrating how the improv works with Fred can give you an idea of the kinds of demands put on the "instrument". Fred is famous for his tabletop guitar, with pickups all over it, and a kit of chains and brushes, sticks, vibrators and rice and string etc – all things that can produce sounds when coming into contact with his guitar, which he plays as a virtuoso to begin with. He has the signals sent out to an array of boxes and foot pedals that can record loops and affect the sound in ways that he also can play back to himself. There truly is nothing like watching/hearing Fred play. Meanwhile I have Mixxa set up to quickly retrieve clips and layer them and apply real-time effects with a seemingly infinite variety of combinations and possibilities. We seldom discuss what we will do and usually I have no visual plan other than an array of possible clips that I have collected into a bin as a starting point. I really try to let the music prompt me to choose what I do, trying to stay in the moment as I navigate through the interface. Fred also reacts to what he sees on the screen, so we are reacting to each other. I also have control over the sound in my clips and can mix these as 4 independent channels. It's quite surprising to suddenly see and hear a synched sound that actually belongs to the image – like someone speaking for example. And at other times we are no longer sure who is even producing the sound!
Another approach to the improvisation has been theme-based, like the recent homage we did for Andreas Zuest, where I filmed a series of his paintings and books, mixed with live action clips and weather reports and scientific pursuits and such – called 'Meteorologies'. There I actually laid out the order of clips and some presets, improvising my way through them. And an even more strict approach was for a chamber piece of Vox Balaenae by George Crumb which actually involved me hitting cues following a musical score. But I performed everything - all the fades, effects and combinations, alongside the musicians."
It necessitates an adaptive and powerful audio-visual instrument that can store, organize, and make easily accessible large libraries of footage to be played, edited and effects-processed, usually in multiples, on the fly and in real time. Peter's search for this visual improvisation instrument was realized in Mixxa and in the relationship he struck with Greg Hermanovic: "I’m a very demanding user/tester. When we first met, I had a lot of ideas about what I wanted in a software version of this visual improvising instrument. Greg has been developing a lot of my wishes—alongside his wishes—in order to create an interface that allows performance in the ways I’ve been describing. Something akin to a musical instrument that allows an intuitive and responsive manipulation beginning with the layering of 4 channels of picture and sound".
Peter Mettler Interview, Peter's house, September 2012
Peter Mettler at Home, photo credit Isabelle Rousset
Peter, when you are playing Mixxa at your best, are you in control or not in control?
What a tricky question... the controlling part of your mind is free but your motor responses are tuned in to control what you’re doing. So you’re not in control in the sense that you’re thinking about it, but your body is tuned into what’s happening and in that sense it just flows. You can’t actually say what you just did - which is why it’s good you can snapshot presets.
Do you think technology can become invisible in the practice of using it and is that based on mastering that technology or becoming part of that technology? The way that perhaps we are part of nature can we be part of a technology so that it can become invisible, or seamless?
You guys go right for the bone! It's a question that's central to what I'm wrestling with in my work and in my practice, this kind of at-odds condition with technology. I kind of hate it yet I'm trying to use it to create epiphany or understanding - experiences that use and transcend technology, or make it invisible as you say, but I'm not convinced it's possible.
We are consistently conditioned and profoundly affected by the technologies we use on a conscious and subconscious level. With technology we have developed ways to see what we can’t see with our naked eyes, be that proton collisions or galaxies light years away. Technology also provides us tools with which to imagine possibility, different worlds.
One of the interesting things in mixing images - mixing images with Mixxa - is that you're taking images that are coming at you from the world and making them your own. You're possessing the things that are bombarding you all the time. You personalize them and they become your "notes of music" - your palette. Sometimes they become completely unrecognizable from what they were. I often use the analogy of mixing images to be like performing on a musical instrument.
I don't know about you, but I think it's interesting what we're tackling – in designing this instrument. It's like a dance with the devil, it's a dance with modernity, and you're using this technology that is affecting everybody in a way to try and step beyond its control over you.
Mixxa Sequence, The End of Time
Something you’ve discussed in the past has to do with using technology to explore something spiritual about our conscious or unconscious...
Yes, we can’t escape technology, so it is part of us, part of our consciousness, and where do you make the divide between nature and technology?
I like the idea that “we are nature learning about itself” as one of the characters in the film says in reference to a Teilhard de Chardin quote. But that includes technology – technology is part of nature and evolution too.
What do you think of - and how do you adapt - to the increasing complexity in the technology we use? Five years ago Mixxa had 3 bins and 20 movies with a couple of sliders to fade and mix with and now you’ve got a grid of 81 possible effects and 150 bins with 20 or 30 movies in them. That’s adding so much complexity: more content, more choices to make and having to navigate and orchestrate that.
I played the piano, so the musical instrument I know has a fixed set of 88 keys where each one is tuned, so you know what you’re sitting down to each time. You develop relationships or melodies and riffs with your muscle memory and your melodic sense, your poetic sense. Every time you come to this instrument - and you may come to it in various countries and basements, via electronic keyboards or other manifestations - that set of familiars is always present.
So one of the things I think we're grappling with in Mixxa is offering the possibility for someone to design their own keyboard and to say that this is the way it is -- at least for a few years, in order to develop the skill of working and mixing with that particular instrument or configuration. And to define physical parameters: for example, we use very basic MIDI faders right now, but ultimately with a desire to do something like Monolake did with his Monodeck. You sort out the things you really want, then you make the knobs and lock them in positions where they're most useful for performance.
Greg: It’s true, I realise I pull the carpet out from under you with every new interface design. I try to keep the essence of it the same while extending it in ways that I personally want - like the flexibility of every cell in the matrix for instance. You don’t need to embrace the extended choices but as you say, you want to sit down to a recognizable instrument where you can reproduce something quickly. That constancy is something you want and need. It’s not the diversity of Mixxa that’s important to you.
Actually diversity is important – different types of projects or performances require different manipulations, but being able to have a constancy in the types of palettes, or access to a history of controls or arrangements of clips etc., is important for reference and something to build upon.
Our relationship has been really interesting in that we have a parallel pursuit in common and we feed off of each other's needs or desires as part of a learning and development process. A lot of the changes you make are in the interest of more fluid performance, which is what I'm constantly hoping to improve as well.
I personally think the desire for the "fixed instrument" is to be able to spend time honing the craft instead of honing the machine, to get to the point where I can discover what I can do with a set of elements, go as far as possible, and then start developing the instrument again.
Greg: It’s interesting because some VJs will get a new version of a piece of software and just look at all the new stuff to see what they can do with it. They learn and grow and build a repertoire that way. Whereas your needs are more humble, or less exploratory in terms of the tool and more exploratory in terms of what you can do with image combinations.
Yes – to use the piano comparison again, a musician will train for years or even a lifetime to perfect the manipulation of the 88 keys and 3 foot pedals of that instrument.
Can you go through your history of mixing live visuals Peter - You're an advocate of improvisation and seeing relationships develop as a filmmaker too - what led you to live mixing, and also what gear/processes have you used in the past?
It's funny but I think it all started with playing the piano between the ages of seven and fourteen. My parents made me do it. I didn't really like it very much; I was studying classical, it was very disciplined and I didn't really like that route. I preferred to improvise, and when my parents set me free of having to take lessons, I started to really explore improv. As I played I would envision things - creating a visual accompaniment or a narrative in my mind's eye; story and image would intertwine emotionally.
Then around that same time, I made my first film in a super-8 club in high school. That's when it clicked: "This is what I want to do! - and it's a way to engage with just about anything in the world." It's a medium of experience as well as a medium of expression.
And ever since I've been trying to combine strategies of narrative and documentary, with a degree of exploration, improvisation or probing the unknown – engaging in the mystery of the way things unfold.
One of the things that led me to mixing happened in the early 80s. I had quite a big library of recorded sounds -- from the environment, dialogues - anything really. I would play with a pair of musicians in St. Gallen, Switzerland. We had a room full of instruments where one guy specialized on drums and another on guitars, but they all played everything and made all kinds of noise and sounds. I had a bunch of tape recorders and multi-track recorders and I'd patch them all through a mixer. I was running 12 disparate sound recordings without knowing what was coming next. I'd listen and mix out soundscapes and they would react to it – and I'd react back. It was really quite a fascinating process, which became a regular thing we did.
At the end of the Gambling Gods and LSD edit, also in Switzerland, I'd often thought it would be great to mix that way with images and sounds somehow. This was also around the time of mini-DV recorders, of which I had acquired a few during the making of the film.
I'd have DVD players and DV tapes playing off of DV cameras and a Walkman playing cassette tapes - so all manner of tapes - and a sound mixer with many channels of sound being mixed. [The above photo is a good indication of Peter's set up at that time.] We even had a bicycle for generating electricity - people pedalling power! We also took the sound from the various devices and routed it through the mixer. There was also Motion Dive on a laptop, which was the beginning of my digital mixing with basic clips. I was starting to combine Motion Dive with tape stuff, the tape stuff just playing out in linear fashion like the audio stuff I was doing with the musicians. The tape would just run and I had a little preview monitor, like the flip-out screen of a video camera, and I would combine things that looked interesting on the fly.
The first thing you notice going from mixing sound to mixing visuals is how with sound it's so easy to combine things, they easily become one - but with mixing visuals there are so many ways it can go wrong.
Yes, it’s very easy to come to brown when working in an additive process.
Much more so than with sound and very difficult to back-track out of it.
G: One of your other sources, the first time we played together, was in fact TouchDesigner. I was taking video from your system with scan converters. When we first met you sort of latched on to the idea of Mixxa being vertical channels like an audio mixer, where you have a level fader at the bottom and your source at the top with all the effects in the middle. The first version of Mixxa was like that, but you expanded on it to have four channels of video going through, with the very important level fader at the bottom which matched your audio mixing model.
It really was from being accustomed to live sound mixing and to mixing sound for film where everything is laid out in those vertical channels where you can see your sound path and how you're affecting it. When we first met I'd actually had another tech-savvy friend build a sound type mixing interface with Isadora and had that go through a channel of the analog MX50 mixer. By comparison, Mixxa today is more like a balloon that you can poke at to get at what you want. A whole other thinking combined with the traditional sound model.
G: There’s also a model of DJ mixing where two are mixed into one and while that’s playing you’re preparing the next two - which is why we went to four.
Even the two pairs is not really that important to me - it's the multiples - it could be six too and to be aware of what's in the six. You've probably noticed I don't do a lot in the AB going into the ABCD, I tend to use them singularly more.
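The channel model this exchange describes - sources at the top, effects in the middle, a level fader at the bottom of each of several channels summed into one output - can be roughed out in a few lines. This is a minimal illustrative sketch, not Mixxa's actual implementation (which runs on the GPU over whole frames); pixels here are modeled as (R, G, B) floats in 0..1:

```python
def mix(channels, faders):
    """Additively blend per-channel pixel values, scaled by each channel's
    fader level, clamping the sum to the displayable range."""
    assert len(channels) == len(faders)
    out = [0.0, 0.0, 0.0]
    for frame, level in zip(channels, faders):
        for i in range(3):
            out[i] += frame[i] * level
    return tuple(min(1.0, v) for v in out)

# Hypothetical pixel values from four source channels:
a = (0.9, 0.2, 0.1)   # red-ish
b = (0.1, 0.8, 0.2)   # green-ish
c = (0.2, 0.3, 0.9)   # blue-ish
d = (0.5, 0.4, 0.3)   # earth tones

# Four sources at full level quickly saturate -- the "coming to brown"
# problem mentioned above: unconstrained additive layering drifts toward
# washed-out, muddy tones unless the faders are ridden carefully.
print(mix([a, b, c, d], [1.0, 1.0, 1.0, 1.0]))
print(mix([a, b, c, d], [0.5, 0.3, 0.2, 0.0]))
```

The level fader per channel is what makes the model playable: pulling one fader down recovers headroom for the others, exactly as on an audio desk.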
So maybe the question here is how do you use Mixxa?
That's a tough question because there are so many different ways. If you look at the Mixxa part in The End of Time it was made in multiple stages by trying out different things and mixing short sections then outputting to Final Cut and editing those sections together, then going back into Mixxa and creating a whole new thing. So it's actually a very slow process.
But live mixing has been more spontaneous and often involves an improvising musician. That said, recently we did a thematically-based show where we were using the same rough set of clips and genres for four shows in a row. It was an interesting change in the way that we work because over the course of four shows you can start to refine elements you think work and play off of what you did the previous times.
So did you feel more locked-in that way or more liberated because you had a structure to play around in?
Well, because it's a 40 minute live show, it's a bit tense. You've got an audience sitting and watching and evaluating. So you feel somewhat liberated because you're on slightly more solid ground. You can be bold within the things you've structured for yourself, whereas when you're completely going free you can stumble and come up with nothing for a phase, or brown. Then you have to recover and build something new. But often when you do that - and especially after mixing visuals to three hours of techno - you really get into some incredible combinations and worlds. You're there watching it thinking "Wow, I've really got to remember how I got here and how to get back there" and sure enough it's pretty tough to get back there. It's also context-dependent; even if you snapshot a preset it's often about the whole experience - the music, vibe and people... you can't just take a preset, pull it up with another piece of music, in a different scenario, and expect to achieve or experience the same thing.
VJs in action, INIT 2010, photo credit Isabelle Rousset
In view of your mixing style can you describe the process of developing Mixxa as the instrument that liberates you the most?
One of the pursuits in developing Mixxa is to imagine a usable instrument with a Monodeck-type of control system behind it. The challenge is that there are so many things to do... while you're choosing a clip you should also be tuning the density of track C or something. So how do you do multiple things at the same time - not to mention addressing all the effects? Or what if you want to react to the rhythm and there are sharp changes and cuts happening in the music and you want to react to those? Where and how do you layout those controls so that you can play, quickly react and navigate in a fluid manner?
You can sometimes anticipate what's going to happen musically and you want to be right there ready to do the change-up like they're doing. There are basic challenges in setting up controllers to allow you to affect things while you're still going through libraries of archives selecting imagery. You often need super quick access to a control with no time to mouse around. This is why it's often fun to mix with two people where someone's prepping while the other person is more focused on mixing. Or two people can be performing, like in a band where one is playing the visual 'bass line' and the other is filling in details on top of that. Sometimes it's amazing when two people are playing because there are four hands at work.
What was your incentive for the Mixxa section in the EOT? I recall that it was an idea while the film was still in production.
Well the way that I work, the last three films especially, has been very exploratory. In a nutshell, I gather pieces and elements, structuring and combining these along the way while we're still shooting, threading together relationships and associations throughout the process. Mixing images has been an ongoing strand in my work since Gambling Gods and LSD, as a way of discovering new language potential. It's been an important parallel pursuit to the more conventional filmmaking - which isn't that conventional to begin with - but does at least fit into theatrical release conventions. So it's been a parallel pursuit for me that points towards a different understanding of image and sound and time - time being the main subject of the film. That's the main function of the Mixxa segment.
It's interesting that technology has only recently allowed us to make stuff like this. Historically, when you look at filmmaking in the 1890s, the Lumière Brothers' first cinematic inventions started with planting a camera, turning it on and recording whatever happened. That was cinema: static shots and long takes cut together head to tail. That's what the technology suggested and allowed. The Lumières' camera capturing the train pulling into the station was the beginning. Then, with the advent of turning the camera to represent two points of view - shot, counter-shot - it became more interactive, and the language has been developing ever since. It can reflect the workings of our unconscious just as it can replicate aspects of how we think and see.
Now technology and various media permit layering, stuttering, combining things back and forth in an expressive and real-time way. We can quickly see back in time, perceive events in layers, juxtapose many things in rapid succession. It completely challenges linear story-telling and perception.
These are some of the ideas behind the Mixxa sequence in the film - and reactions have been interesting. It's the most controversial part of the film. People say "that's a great film but what did you do there?" And others go into it and see it as I intended it to be.
As a filmmaker you work along this "path of least resistance" in the sense that the process leads you towards what the film becomes vs. you forcing the film into a specific direction. So when you are mixing live with Mixxa is that a more agile way of working in this manner… of reacting to where the moment is taking you?
It's funny because that idea of the path of least resistance does apply to filmmaking to a large degree, as does the image of the lava flow (in the film). As you watch the lava it becomes a visual metaphor for the way things go. In image mixing, I'm making a lot of choices and reacting to what I have going right at the moment. I'm within a flow as I determine the next choices. The 'least resistance' aspect is the flow of associations, or graphic elements that combine together.
At the same time however, there's a lot of resistance in how you go from your brain to your hands to the interface. Currently you still have to make a lot of conscious choices and physical selections. You have to think about what you are doing. Whereas when you play a musical instrument you can often just play. So Greg and I are hard at work at getting to the musical instrument level. And all instruments require lots of practice too.
But back to the first question "When you're at your best are you out of control?" and in a way you are out of control because you're making a choice for something and going 'Oh, I don't care, oh, nice, oh yeah... sure". And when you do care, thinking "Hmm, what could I get here?" it's different. There's an interesting psychological difference between controlling and not controlling.
That's interesting and maybe this is the difference between (being in) 'real-time' and not real-time? Which is also an interesting answer to 'what is time?' or what is 'the end of time' - in real-time you're part of something, less controlling - in it vs. outside of it. 'A part' vs. 'apart'. Like nature. On the path of least resistance you exert less control and exercise more fluidity or grace or appropriateness - something along those lines. Things make sense. Natural order, evolution, adaptation etc. It's a very difficult state to achieve. When we think, we basically stop time.
Peter and Greg Working on Peter's System, photo credit Isabelle Rousset
G: It’s also interesting where we went in terms of controllers. In the beginning Mixxa was intended to be only a touch-screen interface because I didn’t want to be using a mouse during performance. Then you wanted to have physical controllers mapped to lots of stuff so we brought in the Behringer BCF2000 and then the rotary knob box so you could control lots of things because there was so much to control in Mixxa. We then came up with several ways of mapping those knobs to controls and I remember you had your sticky tape markers delineating the functions of the sliders and that seemed to work well for you. Then we tossed the Rotary out because we ended up with the big Mixxa matrix and a simplified way to control multiple things.
Well it's a sabbatical for the Rotary. I like flipping through the four channels and having the set there in a physical way. But Mixxa is very good the way it is right now. I've essentially been double clicking boxes I'm working with which expand while all the other boxes stay small - as the current design intends. It's pretty quick. For mixing I still like the sound mixing model overview. I really want to see my sources, their relationships and the outputs. But who knows how our organizational brains will morph in the future.
You also have your unique approach to mixing. Take Markus Heckmann - very often he arrives at a show to do visuals with nothing but TouchDesigner. He’ll start with a line and whatever noise is coming in and by the end of the night he’s produced a set of complex visuals. Sometimes he brings a couple of synths or designs to work with and develops those over the course of the night, but very often not.
It's a difference worth noting. As you say, you can start with a line and a pulse and noise and develop an animation that way. In a sense it's pure techno – digital visualization. On the other hand the mix can be based on images, on representation, the contextualizing of things, historical associations, narratives, images with their inherent baggage set free in a new arrangement of associations. A delicate balancing act full of potential pitfalls - but at the same time it's very very exciting and compelling.
Well every time you put two things together that are representational you create new meaning or a variation in expression. So you’re talking about grammar and language... it’s all storytelling right?
I love the abstract and graphic aspects too and have really enjoyed the few times Markus and I worked together. Having those two worlds merge via two brains at the controls is amazing..
You have a very busy schedule for the next six months accompanying The End of Time to festivals but do you have any upcoming Mixxa projects?
I have a retrospective in Warsaw in May 2013 and I was proposing for part of it a series of plasma monitors with Mixxa paintings on them. So it's less about performance and reaction and more about building a painting in motion. I'll be doing a show there with Fred Frith as well. Also some of my films, and a show in Toronto at this year's Hot Docs in April 2013 with Biosphere – very excited about that! Much to do, and of course there's "The End of Time"!
There most certainly is "The End of Time"! Thank you Mr. Mettler, and to our readers: if you haven't seen the film it really is a 'must see' and will affect you very deeply. Also, if you should get the chance to see Peter mix live visuals, grab it!
The Mixxa Interview with Greg Hermanovic, on the road to Montreal, 2012
The story of how Greg came to design Mixxa - and possibly, the reason he’s pioneered the software he has in his career, from PRISMS (1985) to Houdini (1996) to TouchDesigner (2000) - is directly related to his love of synthesizers, electronic music and computer music, and the performance of live, reactive visuals. So, for any and all fans of TouchDesigner, whether you use it for mapping or prototyping, for interactive projects, building UIs or in any of the many other ways it can be applied, know that if it were not for Greg’s love of synthesizers, music and performing live visuals, you might not have this tool at all, or you might have a very different one.
It's an interesting account that spans the evolution of computing technology and how we shape and use these tools to help us in our work and in our play - and it's a story that Greg is always happy to tell...
Greg, how did you come to build Mixxa? Where did this all begin?
It was 1992 and I had borrowed from Silicon Graphics this $250,000 Onyx computer with the latest graphics cards, borrowed on the presumption it would be used for business demos, but really I borrowed it because I wanted to bring it to this house music party in downtown Toronto put on by Tribe magazine.
The Onyx was massive, the size of a refrigerator - how did you get it in the space?!
Not easy - the party was in a warehouse space upstairs and it took literally six guys to carry it in through the back alley up a fire escape. Insane.
Were you doing visuals in real-time?
Yeah, it was a bunch of visual synthesizers made using PRISMS, completely in real-time. In one visual I was using a math equation to move all this stuff around on the projection and push waves, colors and ramps, and was editing this super-long math equation containing about 1600 parentheses, with sines, cosines, exponents, polynomials, and so on, trying to get another trick out of this long expression, because the longer you make it, the more tricks it has in it.
So that’s why you needed all that computing?
Yeah, to get this blobby looking thing to deform! So PRISMS had a lot of powerful realtime programmability, but authoring effects was not visual and dummy-proof enough to be composing visuals in front of a big audience, so it later led toward Houdini and then the very visual nature of TouchDesigner.
So there weren't a lot of real-time 3D visuals back then?
No, zippo. The visuals at that time were always done with VHS tapes mixed through analog video mixers, so we kinda stepped it up. We had video and pre-rendered CG coming off a BetaCam SP deck, plus off the computer in real-time, into a crappy video mixer and projector. But good music. The Scottish DJ came over halfway through and said this was the first time everyone was standing around watching the visuals instead of dancing while he was playing, which, surprisingly, amused him.
Well that’s good and not good - when people stop dancing...
Well, when that happens I cut the visuals till things go back to normal.
Isn’t it kind of a drag trying to edit a long math equation at a party? And working with tape... or for that matter, having such expensive computing that’s supposed to be somewhere else but is in an environment where beer gets spilled everywhere? Where parties get shut down and gear confiscated?
Yeah, it was difficult, risky and certainly not very relaxing for the VJ. I had to design something more portable, affordable and practical. The original direction of Mixxa in 2006 was based around a touch-screen for low-accuracy VJs to punch away at. I wanted close access to all controls – no overlay windows, no levels of menus – two clicks (taps) to get from anywhere to anywhere in the user interface. That was the design requirement, and it still is. Like the fabulous analog Panasonic MX50 video mixer (Peter and I both owned one and still do), which was popular with VJs because it offered so many controls right there on one panel.
A driving factor for Mixxa was I wanted to see all stages of the effects on-screen at the same time, seen as live-icons. Why not - it was 2007 and it was possible, thanks to NVIDIA, to see 40 mini-monitors on-screen at once. So it started being very visual – WYSIWID - what you see is what it’s doing.
I had the aesthetic then for the UI of minimal text, mostly black, just enough to know what you were doing, controls just large enough that you couldn't miss even if you were off a bit, and sliders where you could just tap the border to get exactly 0 – small details like that. I wanted there to be less reading and more reacting.
And why a touch-screen?
With hundreds of controls at the same time, it's impossible to make a hardware controller that allows control of it all. So I skipped the desire for massive hardware controls – I'd had it with figuring out some kind of universal MIDI controller – and settled for simple motor-controlled fader boxes.
Mixxa was born with 3 video channels, each with a simple set of effects, and then mixing 3 channels down to 1 in 2 stages. MIDI controls of anything – yes. LFOs controlling parameters - yes.
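Greg mentions LFOs controlling parameters. As a rough illustration of what that means – a low-frequency oscillator sweeping a fader or effect value over time – here is a minimal sketch. This is not Mixxa's or TouchDesigner's actual implementation; the names `lfo`, `rate_hz` and `depth` are invented for the example:

```python
import math

def lfo(t, rate_hz=0.5, depth=0.3, center=0.5, shape="sine"):
    """Return a parameter value modulated by a low-frequency oscillator.

    t is time in seconds; the result stays within [center - depth, center + depth].
    """
    phase = (t * rate_hz) % 1.0          # position within one cycle, 0..1
    if shape == "sine":
        wave = math.sin(2 * math.pi * phase)
    elif shape == "triangle":
        wave = 4 * abs(phase - 0.5) - 1  # peaks at phase 0 and 1
    else:  # ramp
        wave = 2 * phase - 1
    return center + depth * wave

# e.g. drive a channel's opacity once per frame at 30 fps:
# opacity = lfo(frame / 30.0, rate_hz=0.25, depth=0.5)
```

Driving a level fader this way frees a hand: the parameter keeps moving on its own while the performer works other controls.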
I was pleased with Mixxa - it was very structured (versus editing in a TouchDesigner network) but allowed for lots of variety. I added video-in from camera, simple 2D shape generation, and video streaming from Touch 017 synths.
And that’s around the time when you met Peter who was also performing visuals with his setup.
Yes, then in comes Peter. He wanted to do a bunch of things that weren't priorities for me, but I was determined to build what he wanted plus what I wanted in one tool.
First thing he suggested: Video should follow the idea of an audio mixing board where you have 4+ channels (in columns) and at the bottom of each channel is a level fader. That’s how he wanted to work. Some fixed effects above the faders. OK. And not 3 video channels but 4, which made more sense – mixing 2 video channels that the audience sees, while screening and preparing video files on 2 other channels, then doing a transition from one pair to the other. Made total sense so I went for it. With the 3 mix-down channels that meant it was a 7-channel system.
As you say, Peter and you have very different priorities or styles. Peter has a LOT of video and requires lots of processing power, and graphics cards maxed out. At the time... how did you work around those challenges?
Everything got extremely video-centric – Peter wanted bins of movies (he's reached over 120 bins of up to 50 movies each), plus the organizing of collections of movies. But having all the 1000+ movies accessible at once quickly outgrows the computers, and he wants to move between any of them randomly – stream-of-consciousness, destination unknown – and all this running on a measly Shuttle PC. Needing the system to not pause when switching – and this is a $3000 computer, not a big edit suite. Sure, I'll make it work. It was very challenging because, not knowing which 30 movies would be called up next, everything had to run more asynchronously – hence movie pre-loads, read-aheads, deferred closing, opening but not playing… all while new decisions were being made. It made our Derivative programmers batty. Then switching from QuickTime to libavcodec was huge – more parallelism and more file formats supported, and trust me, these files were coming from all over the place.
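The pre-load and deferred-close idea Greg describes can be sketched in a few lines: keep a bounded pool of recently requested movies open so that switching never stalls on file I/O. This is a toy illustration only – `MoviePool`, `preload` and the dict "handle" are invented for the example and bear no relation to Mixxa's internals:

```python
from collections import OrderedDict

class MoviePool:
    """Keep the N most recently requested movies open so switching is instant."""

    def __init__(self, max_open=8):
        self.max_open = max_open
        self.open_movies = OrderedDict()  # path -> handle, oldest first

    def preload(self, path):
        """Open (but don't play) a movie ahead of time."""
        if path in self.open_movies:
            self.open_movies.move_to_end(path)  # mark as most recently used
            return self.open_movies[path]
        if len(self.open_movies) >= self.max_open:
            self.open_movies.popitem(last=False)  # deferred close of the oldest
        handle = {"path": path, "playing": False}  # stand-in for a real decoder
        self.open_movies[path] = handle
        return handle

    def play(self, path):
        handle = self.preload(path)  # no-op if already pre-loaded
        handle["playing"] = True
        return handle
```

A read-ahead thread filling each handle's frame buffer would complete the picture; the eviction policy here is the key to fitting 1000+ accessible movies into a small machine.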
Then we wanted to prepare video without going back to the source and editing it. So we added Peter's and my choices of video file pre-processing: in-out trim, black level, cropping, audio level adjust and more. This is done non-destructively in real-time without affecting the original files.
Next impact was the resolution – back in the early days of Mixxa 256x256 was okay, then 320x240 was a luxury that eventually looked like crap. 15 fps, 20 fps, then 30 fps. Then 480x320, 640x480. Then 1024x768, now 1280x720, and we've reached 1920x1080 at 30 fps. That's about 24 times more pixels to move around and process, and 24 times the memory. In come solid state drives, and that kinda saves the day – now we can do 4 HD movies at a time off the same drive.
But more things balloon the system: 24 times more video pixels with more and more effects, fitting into graphics memory that only went up from 256 Meg to 2 Gig – 8x larger. And we want 60 fps vs 20. And now we're doing 3 different feeds out vs 1. So all along the way we're fitting a lot more stuff into just a bit more RAM. But thanks to Derivative's programmers' evolution of TouchDesigner, advances in OpenGL and better drivers from NVIDIA, somehow it all fits.
Snapshot of Peter Mettler's Configuration Mixxa 2013
A year or so ago Peter asked to be able to mix the sound of some of his source material along with the visuals – how did that impact Mixxa?
Audio is a big part of Peter's use of Mixxa - using audio from the movies that are playing, plus audio from sound sources that are not visible. So having an audio section that plays and pre-mixes has been useful, pretty much designed to his wishes.
For our programmers, adding audio mixing to the picture made things more nutty. These are still under-$3,000 computers! So today, just in time, we’re running a 64-bit version of TouchDesigner, which lets the number of movies go into the stratosphere.
I guess Mixxa is a quintessential stress-test for TouchDesigner?
Yes, it serves as an extreme testing ground of TouchDesigner. Before others get their hands on new builds of TouchDesigner, I push TouchDesigner around in Mixxa, check its performance and see if I can easily get something new out of it.
How else has Peter influenced Mixxa's design?
Funnily, the ongoing theme has been to equal the immediacy and simplicity of the analog Panasonic MX50 video mixer which we both started with! I re-introduced motorized faders and a rotary dial box back into the control choices, complementing the mouse and touch-screen UI. Motorized faders are great for level controls and other always-there effects, and I always liked them because you can have your fingers on 4+ at one time, and without looking, you know what value they are set to. Not easy with gestural interfaces.
The key for Peter is video image juxtaposition – as simple as layering a bunch of movies together and easing their levels, their blurs and effects up and down. To combine images, we brought in a wide variety of 25+ Photoshop-like blend modes, plus a means of pre-monitoring what they would look like before introducing them. Overall there are plenty of pre-monitors giving a hint of what lies ahead before committing to it.
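The Photoshop-style blend modes Greg mentions boil down to simple per-pixel arithmetic. A minimal per-channel sketch follows, with values in [0, 1], applied to every pixel of every channel; Mixxa's real modes run on the GPU, and `blend` is a made-up name for this illustration:

```python
def blend(a, b, mode="normal", opacity=1.0):
    """Blend base channel value a with layer value b, both in [0, 1]."""
    if mode == "multiply":
        out = a * b                          # always darkens
    elif mode == "screen":
        out = 1.0 - (1.0 - a) * (1.0 - b)    # always lightens
    elif mode == "difference":
        out = abs(a - b)
    elif mode == "overlay":
        out = 2 * a * b if a < 0.5 else 1.0 - 2 * (1.0 - a) * (1.0 - b)
    else:  # normal
        out = b
    return a + opacity * (out - a)           # fade the result over the base
```

The final line is what makes a level fader useful: easing `opacity` from 0 to 1 introduces the blend gradually instead of switching it on.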
Also a UI design goal is to minimize what you need to keep in your head, while having fast access to the depth of possibilities.
For both of us, capturing a state, or anchor point, and returning to it later was important to have - hence the snapshotting of presets. So for Peter's shorter more structured or choreographed shows, presets are needed. But that alone was not satisfactory.
Presets are good, but recalling one is very disruptive. Everything changes – it's all or nothing. It took me maybe a year to work out how to transition in steps from any current state to the state of a preset, or from one preset to another. It required an(other) huge internal redesign of Mixxa, and it gained us a much greater flexibility that we're now appreciating.
Mixxa 2013 Matrix
I introduced the Matrix, a 9x9 grid of 81 cells, where each cell can be anything that inputs and outputs video – movie file player, effects, mixers, 3D – anything in each cell. I then introduced the architecture of matrix presets. Since there are always 81 cells, and all presets have 81 cells, you can go from one preset to another, one cell at a time, and you can meander while you are between 2 presets, backtrack, move forward, head toward another preset. It made running a show easy again. And ripe for fearless experimentation.
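Because every preset has the same 81 cells, moving between presets can be done one cell at a time. The idea can be sketched as follows – a toy model, assuming presets are dicts of cell setups; `transition_step` is invented for this illustration and is not Mixxa's actual mechanism:

```python
def transition_step(current, target, cell_order):
    """Advance the live matrix one cell toward a target preset.

    current, target: dicts mapping cell index (0..80) to a cell setup.
    cell_order: the sequence in which cells should be recalled.
    Returns the index of the cell changed, or None once the states match.
    """
    for i in cell_order:
        if current[i] != target[i]:
            current[i] = target[i]  # recall just this one cell
            return i
    return None
```

Calling this once per user action changes just one cell, so you can stop halfway between two presets, backtrack, or start heading toward a different target at any point – which is what makes the transition non-disruptive.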
How did the 9x9 matrix go over?
Peter chose a very fixed matrix configuration, kinda like Mixxa Classic Refined, and he was quite pleased with it. He evolves his matrix layout-of-choice slowly over time. At the same time, any (other) Mixxa user has the option to wipe out a matrix layout and build up another set of cells, in any combination. So I'm very happy too.
Working with two or three projectors, any of the 81 cells can be routed to any of the projectors, so you can see one mix in various stages of its development, or route the three streams through a projection mapping tool like CamSchnappr.
Palette of Mixxa Effects
What next Greg? What further development do you have planned for Mixxa?
I'm looking forward to growing the small community of people using Mixxa, and seeing what effects and extensions they make with it. It's a bit of an obscure tool, but internally there are dozens of useful production techniques that I've crafted and hope to reveal and demonstrate. Even now, given Mixxa is provided as a .toe file, anyone can go in, have a look and mod it.
I'm very pleased with the evolution over the last year - the matrix, the UI for popping cells open so you can work with any random combination of cells at once, the preset mechanism, the handling of multi-projectors, and the stepping up of resolution. I always say, after the next thing, I'm done, but there's always something exciting to experiment with and integrate. Like hidden in the current version is this mind-boggling capability where every cell can itself be a matrix - this has huge potential that I'll expose soon.
And I want to get back into performing a bit too - I built this infrastructure because every time in the past I made a new standalone visual synth, I wanted to combine it with prior effects plus a lot of the image processing and compositing that I had built into all of them. So by building the Mixxa infrastructure, I can now start afresh on a new thing and have a lot of the prior effects and UI already there.
We would like to thank both Greg and Peter, two very busy guys, for taking the time to talk to us in such depth about their work and driving motivations. It's a wealth of experience that we are very happy to finally have on record! To find out more about Mixxa and to download the latest version follow this link and enjoy!
Peter Mettler Live Performances:
- “Meteorologies” Image and Sound mix improv with Fred Frith, performed at Cinémathèque québécoise – Montreal, Centre Culturel Suisse – Paris, and Videoex – Zurich, 2012
- Electric Eclectics 2011, Live mix performance with Tom Kuo and Anne Bourne
- Constellation Young Gods, 2010 – Live audio-visual performance with musicians Gabriel Scotti and Vincent Hanni
- INIT – Toronto, 2009. Live audiovisual performance with a roster of Toronto performers, presented by Tom Kuo and Brian T. Moore.
- Videoex 2009 & Institute for Computer Music and Sound Technology, Switzerland 2009 – Live improvisational performance with Fred Frith, including discussion and screening of Balifilm.
- Kunstraum Walcheturm New Years Party, Switzerland 2008 – Live video mixing (with DJ Styro, Bang Goes) and others.
- Zwei Tage Zeit, Switzerland 2008 – Live improvisational image mixing performance (with Fred Frith), in conjunction with The International Society for Contemporary Music.
- Bas-Reliefs, Toronto 2007 – Chartier Danse, a multi-disciplinary collaboration from a team of eleven artists under the artistic direction of Marie-Josée Chartier.
- In the Mix, Toronto 2007 – Improvisational live performance in collaboration with various artists (including Tom Kuo, Anne Bourne).
- Enwave Theatre, Toronto – Live image mix performance with The Art of Time Ensemble, "America and The Black Angel" & "Vox Balaenae" by George Crumb.
- La Corbiere, Village Nomade, Switzerland 2007, Live improvisational image mixing performance with Fred Frith.
- DeLeon White Gallery, Toronto (with Monolake) ‘Pusher’ series
- Harbourfront, Toronto 2007 (with Andrea Naan) dance theater piece ‘Shostakovitch/Notes in Silence’
- Nuit Blanche 2006 at the Drake Hotel, Toronto, with Derivative, Tom Kuo, Adam Marshall
- TIFF 2006 special live event performance ‘Elsewhere’ (with Murcof, Marc Weiser/Rechenzentrum, Martin Schuetz, Telefunk, Evergreen Gamelan, Tom Kuo, Adam Marshall…)
- Harbourfront, Toronto 2006 (with The Art of Time Ensemble & Andrea Naan) ‘Shostakovitch’
- Danse Cite, Montreal 2006 (with Marie Josee Chartier) ‘Bas Reliefs’
- Qtopia Uster 2005 (with percussionist Lucas Niggli)
- Rohstofflager Zurich 2005 (with DJ Sven Vath)
- RAI3 Rome live national broadcast 2005 (with Fred Frith)
- CPH DOX Copenhagen 2004 (with Transmediale)
- Nyon Visions du Réel Closing Night 2003 (with Martin Schuetz)
- Schauspielhaus Zurich: The Box 2003 (with Martin Schuetz)
- Walcheturm Galerie Zurich 2003 (with Fred Frith)
- Buenos Aires Film Festival 2003 (with Prinzessin in Not)
- Burning Man Nevada, 2003 (various DJ’s)
- Om Festival 2003, 2004, 2005 (with Telefunk)
- Expo Switzerland 2002 (with Fred Frith)
- Member of improvisational music trio, ESP, Switzerland, 1993 - 98