I don’t suppose there’s a way to ‘chunk’ a HapQ file that isn’t already chunked?
My understanding is that chunking is just an application of Snappy to sub-sections of the frame, instead of the whole frame, to help with multi-threaded decoding (someone should CUDA that shit) — so the actual video data shouldn’t need to be touched?
So in theory, a file could be decoded as far as the DXT5 planes, then re-snappied and put back in a movie?
Hmm. If TouchDesigner can’t do it, I guess it may be possible with a trivial ffmpeg app? Malcolm — any advice on that? I’ve rolled my own HapQ handling code using ffmpeg and libsnappy, so it feels like it would be inside my capabilities (although I rarely make movies with the ffmpeg APIs).
We do something very similar here in house at Obscura. We use an internal file format called FireFrames, which behaves like HAP with the advantage of being a frame sequence, and which allows for block reading — meaning that only a section of the frame is actively decoded. This is pretty useful for large plates, though there are some frustrating trade-offs.
I’d say it’s worth chasing if you’re dealing with HUGE resolutions that can’t be cut up in a typical approach, but in many other situations it’s more frustrating than advantageous. PM me if that’s too vague, or if you want to chat more.