Abstract
Immersive storytelling and virtual meetings rely on 3D representations of people, and in applications related to cultural heritage and semiformal meetings these representations should convey a degree of realness of the human they represent. Humans can be represented by static mesh models which are rigged and animated to follow the movements of the subject in real time. These ‘avatar’ representations are convenient and require minimal computational resources and network bandwidth. However, they rely on accurately modelling human appearance and movement, and errors in either result in uncanny appearances and undesirable effects on the user’s experience. Still, modern VR systems present a convenient platform for portable experiences that include both embodiment and co-presence, supported by human motion capture integrated into the headset in combination with avatar representations of the user.
Live volumetric capture presents an alternative which does not model the user’s appearance or movements, instead capturing a time-varying 3D representation of the user (a volumetric video) from one or more cameras. Current volumetric capture systems target photorealistic representations and produce convincing results; however, they do not operate in real time and therefore cannot be used to support embodied experiences. Furthermore, volumetric capture studios consist of many cameras with controlled lighting and backgrounds. These systems are therefore not portable, and require the subject of the volumetric video to travel to the studio. A particular application where this is impractical is the Atea project, which aims to connect members of a dispersed Maori community to their marae (a complex of buildings at the centre of the community) and to one another. Since the Atea project is focused on connecting users to real places and real people, humans should be represented by volumetric video, and so a real-time, portable volumetric capture system is required.
This thesis presents a portable volumetric capture system which can capture and render sparse voxel reconstructions in real time. The system was used to conduct two experiments demonstrating that it could support embodied experiences, and that it (and real-time streaming of volumetric video in general) can support immersive meetings in a semiformal setting. In the context of the Atea project, the portability of the volumetric capture system has been critical both to producing volumetric storytelling experiences, by recording volumetric storytellers in situ, and to allowing users to experience the results without travelling to the research lab. Rather, the system is transported to community events, where many members of the community of interest have been able to experience it directly.