Jeremy’s Virtual Human Interaction Lab at Stanford gave me the scariest “can you step off a virtual ledge?” VR experience I’ve ever had, and I couldn’t sort out exactly why it was so compelling: more convincing than the Valve ‘room’ demo or Oculus’s Crescent Bay, even with lower-quality visual rendering. But Jeremy knows why. He has spent his career figuring out, in more detail than anyone else, how we communicate with and perceive each other as avatars. For another great example, check out the Proteus Effect. His advice will help us advance the state of the art in 1:1 presence.
Like so many other developers, I first discovered Ken Perlin’s amazing Academy-Award-winning work on structured noise when I was coding the procedural landscapes and textures of Second Life. But Ken has done a lot more than that! His work on animation and simplified facial expressions is directly relevant to our work capturing head movement and facial data to animate avatars. The last time I visited Ken’s lab at NYU, he put me in a GearVR and let me walk around a room and draw notes in mid-air using custom hardware his team had built, anticipating the capabilities that are coming with the HTC Vive.
Welcome to you both; we are delighted to have you on board!