
VR Immersion Through Immobilization: ‘The Rig’

by Philip Rosedale Cofounder + CEO

Rather than moving your head and seeing the virtual world by wearing a lightweight HMD, what if instead you couldn’t move your head at all, and the force you created in trying to move it was used to update your view? In other words, imagine that your Oculus Rift was glued to a wall, but that when you looked into it and tried to turn your head, the virtual view rotated to follow the forces you applied. Could this work, and could it be extended to immersing the entire body, by attaching yourself to a rigid structure that measures the forces you apply to it in a similar way?

As it turns out, this experiment was actually the first work ever done at Linden Lab, before we started working on the software that would become Second Life. Andrew Meadows and I spent the first six months or so building a room-sized device that immobilized a person’s head, legs, and arms while using a folded projection screen to deliver a high-resolution VR experience. We called it ‘The Rig’, and our more fearless early investors and friends actually got to try it out. The findings were fascinating, and they may be useful now as the race to create fully immersive VR interfaces continues.

Immobilization and force detection is very different from tracking real motion: because the body is not actually moving, the brain cannot compare its sense of actual motion to what it sees with the eyes. This greatly reduces the perceived latency. Additionally, sensations like inertial mass (the feeling of holding something heavy), which are practically impossible to create with existing interfaces, can be created easily.

As a simple thought experiment, imagine that you are looking at a computer screen while holding onto the handle of a tennis racket which is bolted to a table: you can’t move the racket handle at all. But what you see onscreen is a racket in your hand, moving perfectly smoothly in response to the forces you are putting on the handle. A ball drops from the air, and you move the racket to bounce it upwards. As the virtual ball connects with your racket, you can imagine that you need to apply a stronger upward force to the handle to keep the racket moving up. This change in applied force ‘feels’ to your brain very much like the sensation of the ball hitting the racket! Finally, imagine that a much heavier metal ball drops, forcing you to apply a huge force just to keep the racket outstretched with the ball resting on its surface. Can you imagine how believably you would ‘feel’ that weight? This simple experiment was one of the first test setups we built to validate that this approach could work.
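To make the idea concrete, here is a minimal sketch (in C++, and purely illustrative; this is not the original Rig code) of how such a virtual racket might work. The handle never moves; the on-screen racket’s motion is integrated from the force you apply to it, and raising the virtual mass makes the racket, and anything resting on it, feel heavier:

```cpp
// Illustrative only: a virtual racket driven by force on an immobile handle.
#include <cstdio>

struct VirtualRacket {
    double mass = 0.4;      // kg of *virtual* inertia; raise this and you
                            // must push harder for the same on-screen motion
    double damping = 2.0;   // N*s/m, keeps the integration well behaved
    double position = 0.0;  // m, vertical position of the on-screen racket
    double velocity = 0.0;  // m/s

    // appliedForce is what the strain gauges on the fixed handle report.
    void step(double appliedForce, double dt) {
        double net = appliedForce - damping * velocity;
        velocity += (net / mass) * dt;   // a = F / m
        position += velocity * dt;
    }
};

int main() {
    VirtualRacket racket;
    // Pretend the user pushes upward with 5 N for a quarter second,
    // sampled at 1 kHz.
    for (int i = 0; i < 250; ++i)
        racket.step(5.0, 0.001);
    std::printf("racket moved %.3f m\n", racket.position);
    return 0;
}
```

The key point is that the ‘weight’ you feel comes entirely from the force you must apply to make the virtual object move, not from any moving hardware.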

As it turns out, detecting the force applied to a rigid structure is extremely easy, fast, and cheap. In a nutshell, you super-glue little resistors called ‘strain gauges’ to the rigid structure (typically metal) that you want to measure forces on. The sensors are very cheap, much less expensive than a motion sensor chip, and can read even very small forces with a latency of a fraction of a millisecond.
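For the curious, the sensing math is simple. Below is a hedged sketch of how a reading from a metal-foil strain gauge wired into a standard Wheatstone quarter-bridge could be converted into a force estimate; the gauge factor, excitation voltage, and stiffness constants are illustrative placeholders, not values from The Rig:

```cpp
// Illustrative constants; not measured values from The Rig.
#include <cstdio>

const double GAUGE_FACTOR = 2.0;    // typical for metal-foil gauges
const double EXCITATION_V = 5.0;    // Wheatstone bridge excitation voltage
const double STIFFNESS_N  = 1.0e4;  // newtons per unit strain; depends on
                                    // the geometry of the metal member

// Quarter-bridge small-strain approximation: Vout/Vex ~ (GF * strain) / 4
double strainFromBridge(double vOut) {
    return 4.0 * vOut / (EXCITATION_V * GAUGE_FACTOR);
}

double forceFromStrain(double strain) {
    return strain * STIFFNESS_N;
}

int main() {
    double vOut = 0.0025;  // 2.5 mV read from an amplifier + ADC
    double strain = strainFromBridge(vOut);
    std::printf("strain = %.2e, force ~ %.1f N\n",
                strain, forceFromStrain(strain));
    return 0;
}
```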

So, as you can imagine, you can replace an HMD with a larger, more comfortable screen by building a structure that you lean against or look through, and then using the strain measured on the device to move your view around. We did this, and also did the same with the hands and feet. Imagine pressing your hands down onto a smooth surface covered with these same sensors, and seeing your avatar’s hands moving in front of you while your real ones stay perfectly still. Ditto for the feet, giving you the ability to walk. Andrew and I experimented with learning to use this interface (it’s sort of like the experience of moving an arm or leg when it is asleep) until we were able to do things like walk, salute, or point a virtual gun.
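One plausible way to map those readings to the view (again a sketch, not the code we wrote back then) is rate control: the torque you apply against the fixed head mount sets the angular velocity of the virtual camera, with a small deadband so sensor noise doesn’t make the view drift when you are at rest:

```cpp
// Illustrative rate-control mapping from head torque to camera yaw.
#include <cstdio>
#include <cmath>

struct HeadView {
    double yaw = 0.0;       // radians of virtual camera yaw
    double gain = 0.5;      // rad/s per N*m of applied torque; tune to taste
    double deadband = 0.05; // N*m ignored, so sensor noise doesn't drift us

    void step(double torque, double dt) {
        if (std::fabs(torque) < deadband)
            return;                      // user isn't trying to turn
        yaw += gain * torque * dt;       // torque sets the turn *rate*
    }
};

int main() {
    HeadView view;
    // User presses their head rightward with 2 N*m against the fixed
    // mount for half a second, sampled at 1 kHz.
    for (int i = 0; i < 500; ++i)
        view.step(2.0, 0.001);
    std::printf("view turned %.2f rad\n", view.yaw);
    return 0;
}
```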

Although this type of approach to immersion is not perfect, since the brain no longer receives the proprioceptive nerve signals that normally accompany limb motion, it seems possible that its simplicity and accuracy might make it effective for future VR interfaces.

Published by Philip Rosedale July 10, 2015