{"id":805060,"date":"2021-12-16T03:08:19","date_gmt":"2021-12-16T11:08:19","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=805060"},"modified":"2022-03-15T22:28:14","modified_gmt":"2022-03-16T05:28:14","slug":"virtualcube","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/virtualcube\/","title":{"rendered":"VirtualCube"},"content":{"rendered":"
\n\t
\n\t\t
\n\t\t\t\"banner\t\t<\/div>\n\t\t\n\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n

VirtualCube
An Immersive 3D Video Communication System
Yizhong Zhang*, Jiaolong Yang*, Zhen Liu, Ruicheng Wang, Guojun Chen, Xin Tong, and Baining Guo,
VirtualCube: An Immersive 3D Video Communication System,
IEEE VR 2022 (& IEEE TVCG). (Best Journal-Track Paper Award)
arXiv:2112.04163
The VirtualCube system is a 3D video conference system that attempts to overcome some limitations of conventional technologies. The key ingredient is VirtualCube, an abstract representation of a real-world cubicle instrumented with RGBD cameras for capturing the user's 3D geometry and texture. We design VirtualCube so that the task of data capturing is standardized and significantly simplified, and everything can be built using off-the-shelf hardware. We use VirtualCubes as the basic building blocks of a virtual conferencing environment, and we provide each VirtualCube user with a surrounding display showing life-size videos of remote participants. To achieve real-time rendering of remote participants, we develop the V-Cube View algorithm, which uses multi-view stereo for more accurate depth estimation and Lumi-Net rendering for better rendering quality. The VirtualCube system correctly preserves the mutual eye gaze between participants, allowing them to establish eye contact and be aware of who is visually paying attention to them. The system also allows a participant to have side discussions with remote participants as if they were in the same room. Finally, the system sheds light on how to support a shared space of work items (e.g., documents and applications) and track participants' visual attention to those work items.
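The paper specifies V-Cube View in terms of multi-view stereo depth estimation and Lumi-Net rendering; those components are not reproduced here. As a rough illustration of the underlying depth-based view synthesis idea only, the NumPy sketch below forward-warps each captured RGBD view into a hypothetical virtual camera and merges the warps with a naive nearest-surface rule. The `Camera` and `synthesize_view` names, the pinhole model, and the z-buffer merge are illustrative assumptions, not the authors' implementation.

```python
# Minimal NumPy sketch of depth-based novel-view synthesis, loosely in the spirit
# of V-Cube View. Illustrative assumption, not the VirtualCube code: the real
# system refines depth with multi-view stereo and blends warped views with a
# learned Lumi-Net, neither of which is reproduced here.
from dataclasses import dataclass

import numpy as np


@dataclass
class Camera:
    K: np.ndarray  # 3x3 pinhole intrinsics
    R: np.ndarray  # 3x3 world-to-camera rotation
    t: np.ndarray  # (3,) world-to-camera translation


def backproject(depth: np.ndarray, cam: Camera) -> np.ndarray:
    """Lift a depth map (H, W) to world-space points (H, W, 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(cam.K).T   # camera-space ray per pixel
    pts_cam = rays * depth[..., None]     # scale each ray by its depth
    return (pts_cam - cam.t) @ cam.R      # camera coords -> world coords


def project(pts_world: np.ndarray, cam: Camera):
    """Project world-space points into a camera; returns pixel coords and depth."""
    pts_cam = pts_world @ cam.R.T + cam.t
    z = pts_cam[..., 2]
    uvw = pts_cam @ cam.K.T
    with np.errstate(divide="ignore", invalid="ignore"):
        u, v = uvw[..., 0] / z, uvw[..., 1] / z
    return u, v, z


def synthesize_view(rgbs, depths, cams, target_cam, out_hw):
    """Forward-warp each source RGBD view into the target view; keep the nearest
    surface per pixel (a crude stand-in for Lumi-Net blending)."""
    H, W = out_hw
    out = np.zeros((H, W, 3))
    zbuf = np.full((H, W), np.inf)
    for rgb, depth, cam in zip(rgbs, depths, cams):
        u, v, z = project(backproject(depth, cam), target_cam)
        ok = np.isfinite(u) & np.isfinite(v) & (z > 0)
        u, v, z, col = u[ok], v[ok], z[ok], rgb[ok]
        u, v = np.round(u).astype(int), np.round(v).astype(int)
        inb = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        u, v, z, col = u[inb], v[inb], z[inb], col[inb]
        order = np.argsort(-z)            # far to near, so near points write last
        u, v, z, col = u[order], v[order], z[order], col[order]
        closer = z < zbuf[v, u]           # only overwrite pixels from farther sources
        out[v[closer], u[closer]] = col[closer]
        zbuf[v[closer], u[closer]] = z[closer]
    return out


if __name__ == "__main__":
    # Dummy call with two hypothetical RGBD cameras looking at a flat scene 2 m away.
    H, W = 480, 640
    K = np.array([[600.0, 0.0, W / 2], [0.0, 600.0, H / 2], [0.0, 0.0, 1.0]])
    cams = [Camera(K, np.eye(3), np.array([dx, 0.0, 0.0])) for dx in (-0.3, 0.3)]
    rgbs = [np.random.rand(H, W, 3) for _ in cams]
    depths = [np.full((H, W), 2.0) for _ in cams]
    viewer = Camera(K, np.eye(3), np.zeros(3))  # virtual viewpoint of the remote viewer
    image = synthesize_view(rgbs, depths, cams, viewer, (H, W))
    print(image.shape)  # (480, 640, 3)
```

The hard z-buffer merge is only a placeholder for the per-pixel blending weights that Lumi-Net rendering would provide in the actual system.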

\"face<\/p>\"face<\/td>

\"round-table\"<\/p>\"round-table\"<\/td>

\"side-by-side\"<\/p>\"side<\/td><\/tr><\/tbody><\/table>\n

Snapshots of the VirtualCube system in action, with the local participant in the foreground. The images of remote participants on the screen are synthesized from the RGBD data acquired by the cameras. Each participant is in a different location. Our system supports face-to-face meetings with two participants, round-table meetings with multiple participants, and side-by-side meetings that include sharing work items on the participants' screens. Mutual eye contact and visual attention can be achieved as if the participants were in the same room.