The Inquiry

How do you create virtual touch? Without haptic feedback rigs or direct stimulation of the brain, how can we get closer to that special, sometimes intimate, sometimes intricate, sometimes magical feeling that is touch? We’re trying a lot of different approaches, but this video illustrates one combination: a front-facing PrimeSense depth camera, the Faceshift facial tracking SDK, the Leap Motion controller, and the Hifi virtual world software. There’s no physical feeling for either party, but as you’ll see, Ryan is virtually touching Emily’s hair, and that’s one step in the right direction.
The Setup

Emily and Ryan both sat at MacBook Pros with PrimeSense depth cameras clipped to the top of their screens (we 3D printed the clip), the Faceshift SDK extracting head position and facial features, and our Interface software processing and streaming the data to control the avatar’s face and body. Ryan’s hand is detected by the Leap Motion controller. The end-to-end latency is about 100 milliseconds. For headphones and a microphone, we usually use this Sennheiser headset.
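To make the pipeline concrete, here is a minimal sketch of what streaming one frame of face-tracking data might look like: head rotation plus a list of blendshape weights, packed into a binary packet. The field names and layout are illustrative assumptions, not the actual Faceshift or Interface wire format.

```python
import struct

def pack_face_frame(yaw, pitch, roll, blendshapes):
    """Serialize head rotation (degrees) and blendshape
    coefficients (0.0-1.0) into one binary packet.

    Hypothetical layout: three little-endian floats for the
    rotation, a uint32 count, then the coefficients as floats.
    """
    header = struct.pack("<3fI", yaw, pitch, roll, len(blendshapes))
    body = struct.pack("<%df" % len(blendshapes), *blendshapes)
    return header + body

# One frame with three blendshape weights:
# 3 floats + uint32 + 3 floats = 12 + 4 + 12 = 28 bytes.
frame = pack_face_frame(10.0, -5.0, 0.0, [0.2, 0.8, 0.0])
```

At roughly 60 frames per second, even dozens of blendshape channels stay well under the bandwidth of a voice stream, which is why face-driven avatars can feel responsive at around 100 ms of end-to-end latency.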
You might notice that the audio is noisy. This is because we applied some gain to bring the levels up. “But you claim high fidelity audio,” you might be thinking. Well, one of the brilliant things about our audio architecture is that it works similarly to the real world: the further away you are, the harder it is to hear. But this doesn’t work well for recording and capturing, something we’ve yet to optimize for.
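The distance behavior described above can be sketched as a simple inverse-distance gain curve. The rolloff model and reference distance here are assumptions for illustration, not the actual mixer code in Interface.

```python
def distance_gain(distance, ref_distance=1.0, rolloff=1.0):
    """Inverse-distance attenuation: full volume inside
    ref_distance, falling off roughly as 1/d beyond it."""
    if distance <= ref_distance:
        return 1.0
    return ref_distance / (ref_distance + rolloff * (distance - ref_distance))

# A listener 1 m from the source hears full volume; at 9 m the
# gain has fallen to about 1/9th, which is why a recording
# avatar standing nearby has to apply extra gain afterward.
```

This is the same trade-off noted in the post: attenuation that feels natural to a listener in the world leaves a recorded capture quiet, so boosting it later also boosts the noise floor.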
We captured the video with the Screen Recording functionality in QuickTime Player, piping the sound out from Interface and into QuickTime using Soundflower. To capture, I logged in as my avatar and stood next to Ryan and Emily, recording what I observed.
When you see Emily’s right hand rise, it’s because she’s moving her mouse. In our current version, moving your mouse cursor also moves your hand and arm.
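A mouse-to-hand mapping like the one just described might look something like this: normalize the cursor’s screen position and scale it onto a reach range in front of the avatar. The axes, ranges, and reach value are illustrative assumptions; the real mapping in Interface may differ.

```python
def mouse_to_hand(cursor_x, cursor_y, screen_w, screen_h, reach=0.6):
    """Map a cursor position (pixels) to a hand offset in meters
    relative to a neutral point: x spans left/right, y up/down."""
    nx = (cursor_x / screen_w) * 2.0 - 1.0   # -1 (left edge) .. +1 (right edge)
    ny = 1.0 - (cursor_y / screen_h) * 2.0   # +1 (top edge) .. -1 (bottom edge)
    return (nx * reach, ny * reach)

# A cursor at the center of the screen leaves the hand at the
# neutral origin; moving it toward the top-right corner raises
# the hand up and out, which is what you see Emily's avatar do.
```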