Large scale video 'matrix'

Hello,
I am currently working on a project that needs multiple videos to be triggered by user motion. The videos would scale and fade based on how many people are in the room and the level of motion, and there might be multiple videos overlaid on each other. This is a multi-screen environment, so it will likely run on at least a few separate boxes running TD FTE.

I have spent a few days now reading through the wiki and help files, but I am still not positive that TouchDesigner FTE will let me realize the installation.

I see many promising features, and I think I want to spend the time to learn the program as it will come in handy for other projects. But I was hoping that someone from Derivative could spend a bit of time with me to see if this video installation would be a good place to start with it?

Many Thanks
quine

It depends what your requirements are. If you need perfect sync between all of the different displays you will have trouble (if you're doing a mosaic, for example). If the requirements are a little more lax, and what's displayed on one monitor doesn't need to be in perfect sync with the other monitors, then Touch will work well for you. You can use the Touch In/Touch Out CHOPs to send data between the different computers so they can communicate with each other.
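As an illustration of the pattern the Touch In/Touch Out CHOPs implement (one machine streaming channel values, such as a motion level, to another), here is a minimal plain-Python UDP sketch. The port number and function names are made up for the example; this is not TouchDesigner's actual wire protocol, just the underlying idea.

```python
import socket
import struct

PORT = 9000  # assumed port; the real CHOPs let you configure this

def send_channel(sock, host, port, value):
    """Pack a single float channel sample and send it to a peer box."""
    sock.sendto(struct.pack("!f", value), (host, port))

def recv_channel(sock):
    """Receive one float channel sample from the network."""
    data, _addr = sock.recvfrom(4)
    return struct.unpack("!f", data)[0]

if __name__ == "__main__":
    # Loopback demo: one socket plays the "Touch In" role, one the "Touch Out".
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", PORT))
    rx.settimeout(2)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_channel(tx, "127.0.0.1", PORT, 0.75)  # e.g. current motion level
    print(recv_channel(rx))  # prints 0.75
```

In practice you'd stream values like this continuously, one sample per frame, and let each box drive its own video parameters from the received channel.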

Maybe you can give some more information about what you’re trying to do exactly with the multiple computers?

There is one point at the beginning of the sequence that would need to be synced, almost like virtual curtains being raised.
The other video/still image content does not need to be synced to the other screens (edge blended or anything), but it should start/respond at the same time as the others when triggered by motion.
The user motion would ideally transform the videos/stills in one or more ways (scale, blur, etc.), and there would be multiple videos moving and layering in each of the setups. Not sure if this is clear? One screen might have 3 or 4 videos playing at the same time, floating around amongst and on top of each other, for example.

Hopefully this provides a clearer scope? If I need to go into any more detail, maybe we could chat offline?

thanks,
quine

As long as you don’t need 100%, down to the millisecond, perfect sync between the screens, you’ll be fine. You can send triggers to all of the computers at the same time, but due to network delay and frame offset, they may be off by 1-2 frames. In practice (especially if the screens aren’t right next to each other in a grid), I doubt this will be noticeable.
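The "send triggers to all of the computers at the same time" idea can be sketched in plain Python as a one-shot UDP trigger fired at a list of playback machines. The host list, port, and trigger name here are hypothetical examples, not anything Touch-specific; even with this, each box may still apply the trigger 1-2 frames apart due to network and frame-boundary delay.

```python
import socket

TRIGGER_PORT = 9100  # assumed port for the example

def fire_trigger(name, hosts, port=TRIGGER_PORT):
    """Send a one-shot named trigger to every machine in the list.

    Each receiving box still quantizes the trigger to its own frame
    boundary, hence the possible 1-2 frame offset between screens.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    msg = name.encode("utf-8")
    for host in hosts:
        sock.sendto(msg, (host, port))
    sock.close()

if __name__ == "__main__":
    # Loopback demo: pretend this box is one of the playback machines.
    listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    listener.bind(("127.0.0.1", TRIGGER_PORT))
    listener.settimeout(2)
    fire_trigger("curtains_up", ["127.0.0.1"])
    data, _addr = listener.recvfrom(64)
    print(data.decode())  # prints curtains_up
```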

Ah, nice. 1-2 frames might work.
Thanks for the Touch In/Out tip.
Right, well, I'll dive in and figure some of this out.

What's the best place to start, specifically for mocking up video playback and control? Then hopefully on to triggering, effects, and motion-sensing techniques…

thanks!
quine

It sounds doable.

It might be worth taking a look at EyesWeb infomus.org/EywMain.html for the image tracking. It's a boxes-and-wires spaghetti language like Touch, Max, PD, etc. for vision projects. It can output OSC messages to Touch via network sockets, so that would be a nice way to hook it all up. It's possible to do simple vision stuff in Touch (use the Trace SOP to make polygons out of blobs, then find their centroid, area, distance, etc.), but there's nothing in Touch specifically optimized for it.
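The blob math mentioned above (centroid and area of a traced polygon) is just the shoelace formula. A plain-Python sketch, assuming the Trace SOP hands you an ordered list of (x, y) vertices for each blob; this is illustrative, not Touch code:

```python
def polygon_area_centroid(points):
    """Return (area, (cx, cy)) for a simple polygon given as an
    ordered list of (x, y) vertices, via the shoelace formula."""
    a = cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0  # signed twice-area of edge triangle
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    cx /= 6.0 * a  # signed area cancels for CW vs CCW winding
    cy /= 6.0 * a
    return abs(a), (cx, cy)

if __name__ == "__main__":
    # Unit square blob: area 1, centroid at its center
    print(polygon_area_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))
    # prints (1.0, (0.5, 0.5))
```

From there, distance between two blobs is just the distance between their centroids, which could drive the scale/blur parameters on the video layers.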

If you plan to use Max/MSP along with Touch, there are some camera-tracking add-ons for Max/MSP, I believe.

Rodney

Yes, relatively easily done; we've already done it using Max/MSP and Touch:

au.youtube.com/watch?v=qpJSSv3qKuw
and www.extirpation.org

feel free to contact me