Oculus Rift Support?

!!!

:slight_smile: you just made me the happiest person ever.

Do you have a link for it and how does it work exactly?

(Guess I shouldn’t be overly excited yet. … but I’m just happy to find a post on here after searching for oculus rift period)

github.com/ganzuul/broadcastsensor

:slight_smile: damnit… now I just wanna skip out on the rest of my work day and head home. I can’t wait!

I ordered the Oculus Rift specifically for TouchDesigner and have never really been a gamer. I can’t wait to start messing around with my Kinect GLSL finger-tracking setup. It looks like the only other thing I’ll have to do to get it going is throw together the GLSL distortion shader,
which looks pretty simple considering the code is already posted in the Oculus Rift SDK document.
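The distortion shader in the SDK document is essentially a radial polynomial warp. Here's a rough Python transcription of the idea, just to show the math; the K coefficients are the commonly quoted DK1 defaults and are an assumption here, since the real values come from the headset's HMDInfo:

```python
# Sketch of the barrel-distortion warp from the Oculus SDK docs,
# transcribed to Python for clarity. K values are assumed DK1 defaults.
K = (1.0, 0.22, 0.24, 0.0)  # distortion coefficients (assumption)

def hmd_warp(x, y, lens_center=(0.0, 0.0), scale_in=1.0, scale=1.0):
    """Map an undistorted texture coordinate to its distorted source.

    Coordinates are taken relative to lens_center; the polynomial in
    r^2 scales each point outward with distance from the lens centre,
    producing the barrel distortion that the Rift's lenses undo.
    """
    tx = (x - lens_center[0]) * scale_in
    ty = (y - lens_center[1]) * scale_in
    r_sq = tx * tx + ty * ty
    warp = K[0] + K[1] * r_sq + K[2] * r_sq ** 2 + K[3] * r_sq ** 3
    return (lens_center[0] + scale * tx * warp,
            lens_center[1] + scale * ty * warp)
```

At the lens centre the warp factor is 1, so points there are unchanged; further out the factor grows, which is the whole trick.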

youtube.com/watch?v=MfBhnY_ … ata_player

Almost. :wink:

+1 for Oculus support, I have a dev kit on order would love to use it with TD.

++! for rift here too. My local hackerspace has one and I am going to install FTE on that machine sometime this week. :slight_smile:

Out of curiosity, what were your experiences in terms of time between ordering + receiving a unit?
-Rob

I’ve made some progress on a Rift CHOP. Getting data from the sensor is very easy, as is applying the barrel distortion shader.

I’ve gotten stuck on getting the output to look right in the headset. I can get it pretty close by manually tweaking parameters, but that’s not a real solution. Right now I’m extracting a custom projection matrix via CHOP channels and applying it to my cameras, but something is off: lines aren’t properly converging.
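For what it's worth, the channels-to-matrix step itself has a classic failure mode: if the 16 channels are packed in the wrong major order, everything almost works but lines stop converging. A tiny illustrative sketch (the channel ordering here is an assumption, not how the actual CHOP lays it out):

```python
# Hypothetical reshaping of 16 CHOP channel values into a 4x4 matrix.
# Row- vs column-major mismatch between the SDK and the renderer is a
# common cause of "almost right" stereo projections.
def channels_to_matrix(vals, row_major=True):
    """Pack 16 values into a 4x4 nested list; transpose if the
    source was column-major (GL-style) rather than row-major."""
    assert len(vals) == 16
    rows = [list(vals[i * 4:(i + 1) * 4]) for i in range(4)]
    if row_major:
        return rows
    return [[rows[c][r] for c in range(4)] for r in range(4)]
```

If the picture is close but off, trying the transposed interpretation is a cheap first experiment.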

I could describe my problem but I think it’s pointless unless someone else is working on this as well. Pipe in if you’ve got one and you want to hack on the problem together :slight_smile:

rob - I ordered mine on April 2nd, received July 12th.

There’s an Oculus unit at our local hackerspace and I’ve just installed 77FTE on the machine there, ready to have a mess with it. Not sure how much help I can be, though, apart from testing. I am a bit clueless with GLSL etc. and struggling along with Python (although I know Touch well in general, so I might be helpful after all).
rod.

I want a local hackerspace :stuck_out_tongue:


I’ve created a GitHub repository in the interest of fixing / improving my attempt at Oculus Rift Support in Touch Designer.

github.com/momo-the-monster/MMMRiftChop

The repository includes:

- A Visual Studio 2012 project for modifying/compiling the DLL
- A TD 088 composition with my attempt at implementing proper rendering
- A compiled DLL if you want to get straight to the action (it’s in the TD folder with the Comp)

It’s so close: the head tracking is fine and the 3D effect is there, but something’s not right. I think it has to do with the offset for each eye (i.e. the ViewAdjust matrix to be applied after the projection matrix and barrel distortion). I’ve tried the plausible values coming out of the Rift sensor (a 0.0395 offset for each eye, in my case), but I still see some sort of de-convergence in my periphery. I don’t see it in the pre-compiled demos or when I render a scene in Cinder, so it must be a step I’m missing, or some parameter I’m not applying right.

The basic flow I’m using is:

- Extract a 4x4 projection matrix from the sensor for the left eye
- Flip the coordinate at M[2][0] to make the matrix for the right eye
- Create one camera for each eye, using the custom matrices generated above
- Use a Render TOP to render each eye separately at 640x800
- Feed the output of the Render TOPs into GLSL shaders created from the example code
- Feed the shaders uniform parameters from the Rift sensor
- Feed the output of each shader into a Crop TOP, extending the left eye to the right and the right eye to the left
- Composite the Crop TOPs using additive blending
- Send the final output to a Window set to the 2nd monitor, fullscreen
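The per-eye matrix step above can be sketched in a few lines. This follows the projection-centre-offset formula from the DK1-era SDK docs; the screen-size and lens-separation numbers are assumed DK1 defaults, and the real values should come from the sensor channels:

```python
# Hedged sketch of the per-eye projection offset (Oculus SDK, DK1 era).
H_SCREEN_SIZE = 0.14976   # horizontal screen size in metres (assumed DK1 default)
LENS_SEPARATION = 0.0635  # lens separation in metres (assumed DK1 default)

def projection_center_offset():
    """Horizontal shift of the projection centre, in NDC units,
    per the formula in the SDK docs."""
    view_center = H_SCREEN_SIZE * 0.25
    eye_shift = view_center - LENS_SEPARATION * 0.5
    return 4.0 * eye_shift / H_SCREEN_SIZE

def eye_projection(center_proj, eye):
    """Write +offset (left eye) or -offset (right eye) into the x/z
    term of a row-major 4x4 projection matrix. In GL's column-major
    storage that same term is indexed M[2][0], which is why negating
    M[2][0] turns the left-eye matrix into the right-eye one."""
    sign = 1.0 if eye == "left" else -1.0
    proj = [row[:] for row in center_proj]
    proj[0][2] = sign * projection_center_offset()
    return proj
```

The offset works out to roughly 0.15 NDC units per eye with these numbers, which is why a symmetric (unshifted) projection looks close but never quite converges.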

I’ve tried adding a Transform TOP after the Crops in order to change the eye separation, but none of the values from the sensor fix the problem I see.

We’re so close! Looking forward to some input on this, hope someone else has a Rift they can test with!

Excellent news! I’m expecting my rift to arrive in a couple weeks so I will test then.

Rob, delivery times have improved. I ordered mine on the 30th of July and it went through processing for shipment this week, so I might see it before the end of the month. I got lucky with the ordering, though: I slipped in on a non-EU/US country shipment, so my order is about 2000 units ahead of the current US orders.

I might know what’s wrong. I have a Rift and will look at your example tomorrow. Totally stoked; thanks for doing this work. I have a stereo camera component for TD and it works well. From what you describe, you might be missing the horizontal shift necessary for an asymmetrical viewing volume.

Hmmm, I was wrong. Reading the SDK documentation, it seems the Rift does not need an asymmetrical frustum. I’ll keep testing.

The only thing I found is that the cameras weren’t being translated. So I changed the camera parameters to use the Interpupillary_distance in the riftData (±ipd/2), and I think that made things more comfortable. Not sure if it solves everything; I have to test with some different geometry.
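That ±ipd/2 fix fits in one line. A tiny sketch, with the 0.064 m IPD used here as an assumed default (the real value comes from the riftData channels):

```python
# Each eye camera is shifted along x by half the interpupillary
# distance: left eye negative, right eye positive.
DEFAULT_IPD = 0.064  # metres (assumed default; read the real one from riftData)

def eye_camera_tx(eye, ipd=DEFAULT_IPD):
    """x-translation for an eye camera, per the ±ipd/2 fix above."""
    return -ipd / 2.0 if eye == "left" else ipd / 2.0
```

Note this translates the cameras themselves (the view side); it is separate from the horizontal offset baked into each eye's projection matrix.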

Looks like there’s something not quite right with the shader. I’ll take another look tomorrow after I digest the documentation for the SDK.

I modified the shader and changed the parameter values. It seems to work now, but it still hangs TD every time I try to exit. Next thing to fix :slight_smile:

Also no chromatic aberration fix in the shader.

I’m attaching the modified .toe
MMMRiftChopTest.12.toe (10.8 KB)

I put the System::Destroy() call after the delete instance call and now I can exit TD without a hang.

Hi mlantin - thanks for jumping in on this!

Good catch with the System::Destroy() call, though it doesn’t seem to hang on my system without it.

Your version looks even further off in my headset - is it looking right for you? I noticed that you changed some values in the shader, namely:

ScreenCenter-float2(0.25,0.5), ScreenCenter+float2(0.25, 0.5)

to:

ScreenCenter-float2(0.5,0.5), ScreenCenter+float2(0.5, 0.5)

Can I ask why? Maybe you’re accounting for that somewhere else? The first line is verbatim from the Oculus documentation.

Hmmm. It looks right in my headset though I did notice that when the sphere is very big it starts to be uncomfortable. That might just be too much disparity. I’ll investigate.

In terms of the shader, I changed the value because the shader in the doc is designed to work on the full viewport. The way you have it structured, the shader runs on only half the image, so the sampling needs to be clamped differently.
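To make the clamping difference concrete, here's a small sketch. In the SDK's full-viewport layout each eye occupies half the width, so the valid sampling box around ScreenCenter is ±(0.25, 0.5); when the shader runs on a per-eye texture instead, the eye fills the whole texture and the box widens to ±(0.5, 0.5). The helper names are mine, not from the shader:

```python
def sample_box(screen_center, full_viewport):
    """Return ((min_x, min_y), (max_x, max_y)) of the valid sampling
    region around screen_center: half-width 0.25 when both eyes share
    one viewport, 0.5 when each eye has its own texture."""
    half_w = 0.25 if full_viewport else 0.5
    cx, cy = screen_center
    return ((cx - half_w, cy - 0.5), (cx + half_w, cy + 0.5))

def clamp_tc(tc, screen_center, full_viewport):
    """Clamp a texture coordinate to the valid region, mirroring the
    clamp() in the SDK example shader."""
    (min_x, min_y), (max_x, max_y) = sample_box(screen_center, full_viewport)
    x = min(max(tc[0], min_x), max_x)
    y = min(max(tc[1], min_y), max_y)
    return (x, y)
```

With the full-viewport bounds applied to a half-image texture, samples near the outer edge get clamped too early, which matches the artifacts described earlier in the thread.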