Blog / Latest Articles

Recording and Livestreaming in VR

by Howard Stearns, Technical Lead, Mixer Team

When I talk to people about what it’s like to work at High Fidelity, I point out that we work at the edge of what’s possible.

At home, my son is thrilled when an AAA game runs faster than 50 frames per second. High Fidelity has to run at 90 FPS, or users in head-mounted displays start throwing up. A game is created by professional artists with a managed budget of triangles, textures, and objects. High Fidelity virtual reality scenes are created by the people using them, while they are in them. They bring in objects and avatars on the fly, from who knows where. A game might allow a few other players to join your scene, updating the joints of those other avatars at much lower rates, while High Fidelity animates 100 other users at 45 avatar updates per second. A game is controlled by a keyboard and mouse, while High Fidelity tracks head and hands in 3D, like Hollywood motion capture. While each scene of a movie takes hours to render on an army of machines, High Fidelity does all of this in real time on one computer.

At High Fidelity, our goal is to make the impossible possible, the possible easy, and the easy elegant.

Livestreaming and Recording in VR

Here’s an example of one of the ways we’re working toward that goal: To support live and recorded videography, we made a hand-held Spectator Camera that captures to the screen. You can pick it up in-world to record what you like. For example:

 
 
Entry from our internal staff movie contest, showing our bleeding-edge technology development

A tablet app controls whether your monitor screen displays what the camera sees, or what you are seeing in your headset, cinéma vérité style. Separate, free software captures whatever is shown on the monitor screen, sending it to a streaming service like Twitch or saving it for later editing.

For the last decade, gamers have been making videos by getting a game to respond in some interesting way, and recording the screen as it is shown to the player. When such “machinima” are made in a game, the artist is limited to what is provided for in the game, and what the artist can get the game to do.

In a virtual world with user-generated content, the movie script can be played out by computer scripts written by programmers to get specific desired behavior. A game’s core ability to allow you to behave “outside yourself” — to be anyone you want to be — provides opportunities for self-expression that go beyond the limits of physical movie-making and physical cosplay.

But VR changes machinima in two fundamental ways: puppeteering and camera control.

  1. In addition to program-controlled object behavior and avatar animation, High Fidelity seamlessly combines artist-created animation with real-time tracking, in a way that makes characters much more natural than program control alone.
  2. It is now much easier to get the shot you want. Moving around in a keyboard-based game is both awkward and oddly smoothed, giving the artist neither full freedom of movement nor true realism of movement. In VR, you can move just by turning your head or walking around. Better still, hand controllers let you pick things up and move them around just as you would in the real world. By attaching a virtual camera to a model of a camera, you can move the virtual camera simply by moving the model around.
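
As a minimal sketch of that idea using High Fidelity's entity scripting, the snippet below creates a camera-model entity and parents it to the avatar's right hand, so moving the hand controller moves the camera. The model URL is a placeholder, and this is illustrative rather than the shipped Spectator Camera code.

```javascript
// Hypothetical sketch (not the shipped Spectator Camera app): parent a
// camera-model entity to the avatar's right hand so the virtual camera
// follows the hand controller.
var handJointIndex = MyAvatar.getJointIndex("RightHand");

var cameraModel = Entities.addEntity({
    type: "Model",
    name: "hand-held camera",
    modelURL: "http://example.com/models/camera.fbx",  // placeholder asset
    dimensions: { x: 0.1, y: 0.1, z: 0.2 },
    position: MyAvatar.position,
    parentID: MyAvatar.sessionUUID,        // follow the avatar...
    parentJointIndex: handJointIndex       // ...specifically the right hand
});

// Clean up the entity when the script stops running.
Script.scriptEnding.connect(function () {
    Entities.deleteEntity(cameraModel);
});
```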

Now there is a full suite of tools to unlock your creativity.

Under the Hood

At first glance, you’d think it would be easy to stream and record video from inside VR. The system is already rendering the scene to the screen and the headset. Just capture the bits and off you go, no?

Capturing the screen is an issue all its own. It turns out that streamers already have very good software for doing that, making it easy to record for later editing or send live to Twitch, YouTube, and others. We wanted to let people keep using their own choice of OBS, XSplit, and the like, and we didn't want to be in the business of packaging proprietary video codecs in our open-source software. Our Spectator Camera gets the camera view or your HMD view to the screen (either a window or full screen), and your favorite capture program works with that.

The next issue is rendering twice. Remember that "edge of what's possible" stuff at the top of this blog? Try rendering the same scene twice, from two different points of view, at the same time! Our wizards have created a highly efficient rendering system that can usually interleave rendering from a second camera position without doing twice the work. In the worst case it approaches twice the rendering time, so the whole system is designed to do as much as possible on the computer's separate graphics processor, so that in most cases the extra work slows down neither the game rate nor the rendering rate. This efficient second render path is now built into the system, and the Spectator Camera (or any other JavaScript application) turns it on and controls it. Right now there is only one such built-in second rendering path, which means only one application can use it at a time. Going forward, as we and our developer community create more uses for this rendering path, we'll need to work out how competing uses will be resolved.
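
Conceptually, that is all a camera app has to do: turn the second render path on, point it at a camera entity, and route the resulting texture to the desktop window where a capture program can see it. The sketch below illustrates the flow; the render-job, property, and texture-resource names (SecondaryCamera, attachedEntityId, resource://spectatorCameraFrame) are assumptions for illustration and may differ from the shipped app.

```javascript
// Illustrative sketch only: the render-job, property, and texture names
// below are assumptions, not the shipped Spectator Camera code.
var cameraEntity = Entities.addEntity({
    type: "Model",
    name: "spectator camera",
    modelURL: "http://example.com/models/camera.fbx",        // placeholder asset
    position: Vec3.sum(MyAvatar.position, { x: 0, y: 0.5, z: -1 })
});

// Turn on the built-in second render path and point it at the camera entity.
var secondaryCamera = Render.getConfig("SecondaryCamera");   // assumed job name
secondaryCamera.attachedEntityId = cameraEntity;             // assumed property
secondaryCamera.enabled = true;

// Show the second camera's output on the desktop window, where OBS, XSplit,
// or similar can capture it (assumed texture-resource name).
Window.setDisplayTexture("resource://spectatorCameraFrame");

Script.scriptEnding.connect(function () {
    secondaryCamera.enabled = false;
    Entities.deleteEntity(cameraEntity);
});
```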

Right now, the few Spectator Camera options are those shown above in the tablet app. We wanted it to be simple to use. All of that is done through JavaScript, which can be seen by inspecting the code for the app. As we work through all of the possibilities, we’ll provide more APIs for other apps to take advantage of. We imagine that people will also write scripts that smooth, steady, dolly, or pan the camera entity as a “computer-controlled camera”, because it can be controlled by scripts like any other entity in High Fidelity.
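
As a taste of what such a script might look like, here is a minimal "computer-controlled dolly" sketch: it glides a camera entity along a fixed direction at constant speed, once per frame. The entity ID is a placeholder for whatever camera entity you want to drive.

```javascript
// Minimal scripted-dolly sketch: move an existing camera entity along a
// fixed direction at constant speed. The entity ID below is a placeholder.
var cameraEntityID = "{00000000-0000-0000-0000-000000000000}";
var DOLLY_SPEED = 0.25;                     // metres per second
var dollyDirection = { x: 1, y: 0, z: 0 };  // world-space +X

function dolly(deltaTime) {
    var properties = Entities.getEntityProperties(cameraEntityID, ["position"]);
    Entities.editEntity(cameraEntityID, {
        position: {
            x: properties.position.x + dollyDirection.x * DOLLY_SPEED * deltaTime,
            y: properties.position.y + dollyDirection.y * DOLLY_SPEED * deltaTime,
            z: properties.position.z + dollyDirection.z * DOLLY_SPEED * deltaTime
        }
    });
}

// Run every frame, and disconnect when the script stops.
Script.update.connect(dolly);
Script.scriptEnding.connect(function () {
    Script.update.disconnect(dolly);
});
```

The same pattern, with easing applied to the position and rotation, gives you smoothing, steadying, or panning.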

Another issue concerns where the second rendering path displays. The camera viewfinder (shown in the picture at the top of this page) displays what the camera sees. It is visible from behind the camera or from the front (appropriately mirrored from the front!), but only to the person running the camera app. Similarly, the tablet app itself displays a copy of the main screen display, which can be either the same as the camera viewfinder or the same as what the user sees inside their VR headset. All of these copies are drawn from an efficient local texture that is not transmitted to other users on the network. That's fine for local camera applications, but what about a mirror?

Finally, High Fidelity uses highly optimized calculation and network distribution of the shared virtual world. These optimizations are focused on your avatar. For example, instead of sending you the raw microphone input from every other user, we route each audio stream through a mixer that computes what you should hear at your particular head position and orientation in the space, and sends you just two pre-mixed streams of data. Similarly, we may only send you updates about moving avatars and moving entities when those things are near you or in your field of vision. Computations of what skybox or audio zone to use are based on your avatar position. We currently do not consider the position of the Spectator Camera for any of these, nor which way it is facing.
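
To make the avatar-centric nature of the mixer concrete, here is a conceptual sketch (not High Fidelity's actual mixer code) of the per-listener work: each source is attenuated by its distance from the listener's head and panned by its angle relative to where the listener is facing, then accumulated into the two streams that go down the wire.

```javascript
// Conceptual sketch of per-listener audio mixing; not High Fidelity's
// actual mixer. Attenuate each source by distance, pan it by the angle
// between the listener's facing and the source, and accumulate into a
// stereo pair.
function mixForListener(listener, sources) {
    var left = 0, right = 0;
    sources.forEach(function (source) {
        var dx = source.position.x - listener.position.x;
        var dz = source.position.z - listener.position.z;
        var distance = Math.max(Math.sqrt(dx * dx + dz * dz), 1);
        var gain = 1 / distance;                           // simple 1/d falloff
        var angle = Math.atan2(dx, dz) - listener.headYaw; // radians
        var pan = Math.sin(angle);                         // -1 = left, +1 = right
        left  += source.sample * gain * (1 - pan) * 0.5;
        right += source.sample * gain * (1 + pan) * 0.5;
    });
    return { left: left, right: right };   // the two pre-mixed streams
}
```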

Art

We had a lot of fun with an internal movie-making contest among our staff. Here's another entry to give you a feel for things.


The Spectator Camera app is available for free in the High Fidelity Marketplace.
Published by Howard Stearns, August 1, 2017
