
High Fidelity | NEWSLETTER

We’ve added the ability to record the data stream of an avatar (your audio, facial expressions, and body movement) to a format that can then be played back on your server under the control of JavaScript. This capability was originally built for performance testing: we wanted to simulate a large number of normal-acting avatars in one place without making everyone log in. But it also makes it very easy to add lifelike characters to a place, or to create machinima. Full-blown motion capture is expensive and time consuming, so it seems likely that HMDs and hand controllers will democratize the creation of natural-looking characters.


This feature lets you very quickly populate your VR experience with compelling avatar content that you pre-record using your HMD and hand controllers. Using JavaScript, you then set how playback is triggered: entering a room, clicking on an entity, certain times of day, and so on.
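As a sketch of how those playback triggers might be organized, here is a small, self-contained dispatcher. The event names, recording URLs, and the `playRecording` callback are illustrative assumptions; in an actual script, playback would be started through High Fidelity's scripting interface.

```javascript
// Minimal sketch of a trigger table for recorded-avatar playback.
// Everything here is hypothetical plumbing: the real playback call
// would come from the High Fidelity scripting API.
function makePlaybackTriggers(playRecording) {
  var triggers = [];
  return {
    // Fire when an avatar enters a named room.
    onEnterRoom: function (room, recordingUrl) {
      triggers.push({ type: "enterRoom", match: room, url: recordingUrl });
    },
    // Fire when a specific entity is clicked.
    onEntityClick: function (entityId, recordingUrl) {
      triggers.push({ type: "entityClick", match: entityId, url: recordingUrl });
    },
    // Fire at a given hour of the day (0-23).
    onHourOfDay: function (hour, recordingUrl) {
      triggers.push({ type: "hour", match: hour, url: recordingUrl });
    },
    // Dispatch an event; returns the URLs of any recordings started.
    dispatch: function (type, value) {
      var started = [];
      triggers.forEach(function (t) {
        if (t.type === type && t.match === value) {
          playRecording(t.url);
          started.push(t.url);
        }
      });
      return started;
    }
  };
}
```

A script could register several recordings this way and call `dispatch` from the engine's enter-entity, click, and timer callbacks.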


Reverb is a critical feature of 3D audio, allowing the reflections of sound from the walls of a space or nearby surfaces to be audible in addition to the ‘dry’ primary sound made by people or objects.

We’ve recently made major improvements to the quality and speed of our reverb engine, and have also added a number of JavaScript-accessible controls for customizing the sound of the reverb.

To try out the new settings, run reverbtest.js and adjust the sliders to change the sound of the space.
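To illustrate the kind of mapping a script like reverbtest.js performs, here is a sketch that converts slider positions (0 to 1) into reverb parameters. The parameter names and ranges are assumptions for illustration, not the engine's documented controls.

```javascript
// Sketch: map UI slider positions (0..1) onto plausible reverb parameters.
// Names and ranges are illustrative assumptions, not the documented API.
function slidersToReverb(sliders) {
  function lerp(lo, hi, t) { return lo + (hi - lo) * t; }
  return {
    reverbTimeSec: lerp(0.5, 10.0, sliders.time),   // decay time of reflections
    roomSizePct:   lerp(0, 100, sliders.roomSize),  // simulated room size
    wetDryMixPct:  lerp(0, 100, sliders.wetDry),    // reflected vs. 'dry' signal
    dampingPct:    lerp(0, 100, sliders.damping)    // high-frequency absorption
  };
}
```

Linear interpolation keeps the sliders intuitive; a real implementation might use a logarithmic curve for decay time, since perceived reverb length is roughly logarithmic.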

Check out the example video:


One discovery from creating VR content is that particle systems become far more interesting when attached to people via hand controllers. Having created some great examples, like the musical ‘rave gloves’ below, we’ve been working to get the performance of particles as high as possible, with the expectation that they will be heavily used. If you’d like to try these out, use the script: Flowarthut.
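The core of a hand-attached effect is simple: each frame, pin an emitter to the controller's position. The sketch below shows that idea in self-contained form; the hand and emitter object shapes are simplified stand-ins for the scripting API's controller and entity interfaces, and the rate formula is an assumption.

```javascript
// Sketch: keep one particle emitter pinned to each hand controller,
// and scale emission rate with hand speed so faster gestures throw off
// more particles (the "rave glove" effect). Object shapes are hypothetical.
function syncEmittersToHands(hands, emitters) {
  return hands.map(function (hand, i) {
    var e = emitters[i];
    // Distance moved since the last update, used as a speed proxy.
    var speed = Math.hypot(
      hand.position.x - e.position.x,
      hand.position.y - e.position.y,
      hand.position.z - e.position.z
    );
    return {
      position: { x: hand.position.x, y: hand.position.y, z: hand.position.z },
      emitRate: Math.min(1000, 50 + speed * 500) // clamp to a sane maximum
    };
  });
}
```

In an actual script this would run in the per-frame update callback, writing the new positions and rates back to the particle entities.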

Avatars from 3D scans

Our initial research at High Fidelity, which pre-dated the Oculus Rift and Vive, used depth cameras to capture facial expressions and convey them live onto cartoon-style avatars. With first-generation HMDs likely to make millions of people want to become avatars, we’ve begun experimenting with using 3D scans of actual people in conjunction with the simpler audio-driven mouth animations that HMD wearers will be limited to. As a first test, we scanned our whole team using two different 3D scanners (one for hair and one for the face), and then used some simple techniques to make the results not be (too) uncanny. Our holiday card this year is all of us posing together. Although there is still a lot of work to do, we achieved these results with less than two hours of work beyond the initial scans, and the effect of holding our team meetings with our ‘real’ avatars is quite compelling. A great discovery is that, at the pixel resolution of the Rift, one can instantly recognize one’s co-workers at a glance in a 20-person meeting.

Supporting Large Groups

High Fidelity will achieve high scalability both by maximizing the number of people who can be together on a single server, and by connecting servers together to create larger single spaces. In pursuit of the first goal, we’ve begun optimizations to maximize avatar concurrency in the single-server case. So far, as shown in this image, we’ve been able to support a couple hundred avatars connected to a single server, with the nearest 40 or so rendering at 75Hz in the Oculus using fairly detailed avatars, including joint and facial animation.

Mini Game

With the help of our early alpha users, we took on a challenge to build a mini game using some of the functionality already available within High Fidelity. The result is Winter Smashup, a fun, time-based game built collaboratively with input from seven alpha users. The game works best with an HMD and hand controllers.

Here is the intro:




Published by Chris Collins January 5, 2016
