
Bringing the Avatar to Life

A key part of our research direction at High Fidelity is to breathe life into the avatar by capturing movement, facial-expression, and gaze data from as many channels and devices as we can, and streaming that data, along with very high quality audio, at low latency and high frame rates.


This video clip is an early example of our work in that direction. We built a simple test device consisting of gyros and accelerometers (the same parts found in cell phones, the Nike FuelBand, Google Glass, and many other devices) that sends its data with very low latency to a computer rendering an avatar. As you can see, when the sensors are captured at a high frame rate and synced tightly to the audio, the effect is impressive.

The difference between the sort of face tracking you can do with a camera (low FPS) and what you can get from raw gyros (60+ FPS) is pretty clear. Our goal is to build a platform in which data from many different devices can be captured simultaneously and streamed to create a very compelling avatar, one that can communicate with higher bandwidth and greater emotional impact than anything we have seen before.
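To make the idea concrete, here is a minimal sketch of the core step such a pipeline performs: integrating per-sample gyro angular velocity into a head orientation at a high sample rate. The sample rate, function names, and data format are hypothetical illustrations, not High Fidelity's actual code or device protocol.

```python
# Hedged sketch: turn a stream of gyro angular-velocity samples (rad/s)
# into an accumulated head-yaw angle for an avatar. A real pipeline would
# handle three axes, drift correction, and network streaming; this shows
# only the high-rate integration step.

SAMPLE_RATE_HZ = 120          # gyros can be sampled far faster than a camera
DT = 1.0 / SAMPLE_RATE_HZ     # time between samples, in seconds

def integrate_yaw(samples, yaw=0.0):
    """Accumulate yaw (radians) from per-sample angular velocity (rad/s)."""
    for omega in samples:
        yaw += omega * DT
    return yaw

# One second of a steady 0.5 rad/s head turn:
samples = [0.5] * SAMPLE_RATE_HZ
print(round(integrate_yaw(samples), 3))
```

Because each sample costs only a multiply-add, this loop runs comfortably at sensor rates well above a camera's frame rate, which is what makes the tight audio sync described above feasible.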


Published by High Fidelity April 19, 2013
