A group led by researcher Andrea Stevenson Won at Cornell University is currently using High Fidelity to record body movements made during communication, taking advantage of High Fidelity's ability to transmit both head and hand movements accurately and with low latency for multiple participants. You can read more at their website.
We are learning more about human communication through VR, in part because building it forces us to establish what is minimally needed for communication to be effective and enjoyable. For example, do we need to see finger movements, or just hands? How about arms or upper body posture? What about mouth movements, versus eyebrows?
But Cornell’s work is an example of how we can now go beyond this feature prioritization process and actually start to decode parts of human communication that have previously been difficult or impossible to study. When people talk, they use their bodies in rich and subtle ways to communicate. We don’t know exactly what fraction of communication body language accounts for, but we know it is a lot. The challenge is that, to study it, scientists have historically had to rely on high-speed cameras watching their subjects, then laboriously estimate body motion frame by frame from multiple camera views.
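What makes VR different is that the tracked head and hand poses are already digital data: each frame arrives as timestamped coordinates, with no camera footage to annotate. The sketch below illustrates the idea with a simulated 90 Hz feed; the data layout, joint names, and metric are hypothetical choices for illustration, not High Fidelity's actual API or the Cornell group's method.

```python
import csv
import io
import math
from dataclasses import dataclass

@dataclass
class PoseSample:
    t: float      # seconds since session start
    joint: str    # e.g. "head", "right_hand" (hypothetical labels)
    x: float
    y: float
    z: float

def record_session(samples):
    """Write tracked poses to CSV -- the kind of per-frame log a VR
    system can emit directly, with no frame-by-frame video annotation."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["t", "joint", "x", "y", "z"])
    for s in samples:
        writer.writerow([f"{s.t:.3f}", s.joint,
                         f"{s.x:.4f}", f"{s.y:.4f}", f"{s.z:.4f}"])
    return buf.getvalue()

def path_length(samples, joint):
    """Total distance travelled by one joint: a simple motion metric
    that falls out of the logged data for free."""
    pts = [(s.x, s.y, s.z) for s in samples if s.joint == joint]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

# Simulated one-second feed at 90 Hz: a slight head bob plus a
# right-hand reach. A real study would consume live tracker data.
feed = []
for i in range(90):
    t = i / 90.0
    feed.append(PoseSample(t, "head",
                           0.0, 1.6 + 0.02 * math.sin(2 * math.pi * t), 0.0))
    feed.append(PoseSample(t, "right_hand",
                           0.3 + 0.1 * t, 1.2, 0.2))

csv_text = record_session(feed)
print(f"right hand travelled {path_length(feed, 'right_hand'):.3f} m")
```

The point is not the specific metric but that, once movement is captured this way, questions about gesture and posture become ordinary data analysis rather than painstaking video coding.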