{"id":491900,"date":"2018-06-20T10:36:19","date_gmt":"2018-06-20T17:36:19","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=491900"},"modified":"2020-04-09T18:55:15","modified_gmt":"2020-04-10T01:55:15","slug":"avatar-embodiment-standard-questionnaire","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/avatar-embodiment-standard-questionnaire\/","title":{"rendered":"Avatars"},"content":{"rendered":"


Inside Virtual Reality (VR), users are represented by avatars. When an avatar is collocated with the user and seen from a first-person perspective, users experience what is commonly known as embodiment: the feeling that their own body has been substituted by the self-avatar, and that the new body is the source of their sensations. Embodiment is complex, as it involves not only body ownership over the avatar, but also agency, co-location, and external appearance. Despite the multiple variables that influence it, the illusion is quite robust, and it can be produced even if the self-avatar is of a different age, size, gender, or race from the participant\u2019s own body.<\/p>\n

Our research pushes the boundaries of avatars: how they are perceived, how users behave when interacting with them, the basis of self-recognition in avatars, how avatars affect our locomotion in VR, and how they change our motor actions, approaching these questions from both the computer graphics and the human-computer interaction sides.<\/p>\n

This line of research on avatars also aims to further inform psychological and neuroscientific theories.<\/p>\n

As part of this effort we have released two main open-source projects to the community.<\/p>\n

2020 release of the Microsoft Rocketbox avatar library<\/a><\/p>\n

2018 release of a Standardized Embodiment Questionnaire<\/a><\/p>\n

\"\"<\/a><\/p>\n

","protected":false},"excerpt":{"rendered":"

  Inside Virtual Reality (VR), users are represented by avatars. When an avatar is collocated with the user and seen from a first-person perspective, users experience what is commonly known as embodiment: the feeling that their own body has been substituted by the self-avatar, and that the new body is the source of their sensations. […]<\/p>\n","protected":false},"featured_media":645876,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"footnotes":""},"research-area":[13554],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-491900","msr-project","type-msr-project","status-publish","has-post-thumbnail","hentry","msr-research-area-human-computer-interaction","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"2018-07-01","related-publications":[491945,381233,940662,420087,558525,145317,566619,161333,578215,161336,579721,238055,618201,238060,634248,264864,634266,322127,693264,359453,697249],"related-downloads":[707845],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[{"id":0,"name":"Behavioural Effects","content":"One of the important aspects of Virtual Reality is the presence illusion, which makes users behave as they would in reality. This creates an ideal basis for testing theories of social behavior that would be impossible to replicate in real scenarios, enabling the study of areas such as bystander effects during tragic events or even obedience-to-authority paradigms.\r\n\r\nA comprehensive review of virtual reality illusions and how they work can be found here:\r\nGonzalez-Franco, Mar, and Jaron Lanier. \"Model of illusions and virtual reality.\" Frontiers in Psychology 8 (2017): 1125.\r\nhttps:\/\/www.microsoft.com\/en-us\/research\/publication\/model-illusions-virtual-reality\/\r\n\r\nOne example of our research in this field is the replication of the Milgram experiment inside VR:\r\nGonzalez-Franco, M., Slater, M., Birney, M. E., Swapp, D., Haslam, S. A., & Reicher, S. D. (2018). Participant concerns for the Learner in a Virtual Reality replication of the Milgram obedience study. PLoS ONE, 13(12).\r\n\r\nhttps:\/\/www.microsoft.com\/en-us\/research\/publication\/participant-concerns-for-the-learner-in-a-virtual-reality-replication-of-the-milgram-obedience-study\/\r\n\r\nThe main findings of this work were also published for the general public in our Scientific American Observations blog post:\r\nhttps:\/\/www.microsoft.com\/en-us\/research\/publication\/would-you-give-a-virtual-electric-shock-to-an-avatar\/"},{"id":1,"name":"Perception","content":"

We understand and interact with the world through our bodies, so it should not be surprising that embodying an avatar has the potential to change how we perceive ourselves and how we perceive the world.\r\n\r\nIn this line of research we have explored how having a virtual body changes how we perceive touch: Gonzalez-Franco, M., & Berger, C. C. (2019). Avatar embodiment enhances haptic confidence on the out-of-body touch illusion. IEEE Transactions on Haptics<\/i>, 12<\/i>(3), 319-326. https:\/\/www.microsoft.com\/en-us\/research\/publication\/avatar-embodiment-enhances-haptic-confidence-on-the-out-of-body-touch-illusion\/\r\n\r\nWe have also studied how the mechanisms of self-identification work when you are represented by an avatar that looks like you: Gonzalez-Franco, M., Bellido, A. I., Blom, K. J., Slater, M., & Rodriguez-Fornells, A. (2016). The neurological traces of look-alike avatars. Frontiers in Human Neuroscience<\/i>, 10<\/i>, 392.\r\n\r\nAnd also when it does not look like you: Gonzalez-Franco, M., Steed, A., Hoogendyk, S., & Ofek, E. (2020). Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification. IEEE Transactions on Visualization and Computer Graphics<\/i>. https:\/\/www.microsoft.com\/en-us\/research\/publication\/using-facial-animation-to-increase-the-enfacement-illusion-and-avatar-self-identification\/<\/p>"},{"id":2,"name":"Motor Control","content":"The first-person avatar represents the location and pose of the user's body in virtual reality (VR) when the head-mounted display (HMD) obscures the direct view of the user's body.\r\n\r\nPast work has shown the importance of the first-person avatar for interaction, for the self-image of the user, and even for the immersiveness of the virtual reality experience.\r\n\r\nAt the EPIC group of Microsoft Research we are looking at ways to manipulate users' perception in virtual reality by presenting first-person avatars that represent the pose of the user's body differently from the actual pose. Exploiting the embodiment of the avatar, generated by moving the avatar in a very similar (but not identical) motion to the user's body, we gradually depart the avatar from the user's pose and by doing so affect the user's actual motion in VR.\r\n\r\nAn attractive use of this technique is rendering touch sensations for people in VR using existing physical objects in their environment, such as walls, tables, or handheld objects. Using uninstrumented objects is called 'passive haptics' (in contrast to 'active haptics', which uses mechanized objects controlled by a computer). A major problem when using inanimate objects is bringing them to the exact positions in the user's environment that correspond to the locations of the virtual objects they mimic. Our approach does the opposite: it brings the user's hands, as they reach toward the virtual objects, to the real objects in a synchronized fashion. At the exact moment the avatar hand touches the virtual object, the user's real hand touches a real object that lies in a different location.\r\n\r\nThe 2016 paper 'Haptic retargeting<\/a>' (Azmandian et al.) showed that it is possible to have users build a whole pyramid of virtual cubes using a single wooden cube, with the user's hand repeatedly redirected to the same physical cube while reaching for many different virtual cubes.\r\n\r\nA later work from CHI 2017, 'Sparse Haptic Proxy: Touch Feedback in Virtual Environments using a General Passive Prop<\/a>' (Cheng et al.), expanded this approach to general geometries and general virtual experiences.\r\n
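\r\nAs a rough sketch of the redirection idea above (our own minimal illustration under stated assumptions, not the implementation from either paper; the function and variable names are hypothetical), the rendered avatar hand can be offset from the real hand in proportion to the progress of the reach, so that the avatar hand meets the virtual object exactly when the real hand meets the physical prop:\r\n<pre>
import numpy as np

def redirected_hand(real_hand, reach_start, physical_target, virtual_target):
    # Fraction of the reach completed so far: 0 at the start, 1 at contact.
    total = np.linalg.norm(physical_target - reach_start)
    progress = np.clip(np.linalg.norm(real_hand - reach_start) / total, 0.0, 1.0)
    # Gradually blend in the difference between the virtual object and the
    # physical prop, keeping the instantaneous divergence small.
    offset = (virtual_target - physical_target) * progress
    return real_hand + offset  # position at which to render the avatar hand

# Hypothetical example: the virtual cube sits 20 cm left of the real wooden cube.
start = np.array([0.0, 0.0, 0.0])
prop = np.array([0.4, 0.0, 0.3])   # physical cube the hand actually touches
cube = np.array([0.2, 0.0, 0.3])   # virtual cube the user sees
print(redirected_hand(prop, start, prop, cube))  # -> the virtual cube position
<\/pre>\r\nBecause the user visually steers the avatar hand toward the virtual object, correcting against this warp drives the real hand onto the physical prop.\r\n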

\"\"<\/p>\r\n

Figure 1: A user of Virtual Reality may find themselves in different virtual worlds, such as a spy game (middle) or a space simulator (right), yet all of them provide tangible feedback through the same physical geometry in the real world (left).<\/p>\r\n"},{"id":3,"name":"Creation","content":"We looked at two challenging aspects of creating avatars of real people. The first is the creation of hairstyles that match the look of the user. To do so, it is enough to capture camera images of the person from different directions; our system generates a physical (and animatable) hair structure that matches the pictures. This was the first work to show that multi-view reconstruction can be used even when the hair fibers themselves are too small to be seen in the images, let alone matched between views. The work was presented at SIGGRAPH 2005 in the paper \"Modeling Hair from Multiple Views<\/a>\" by Wei et al. (SIGGRAPH talk<\/a>)\r\n

Figure 1: Automatically generated hair for an avatar from images.<\/p>\r\nThis work was later extended to take input from multiple video cameras and to recover both the hair model and its motion (\"Video-based modeling of dynamic hair\", Yamaguchi et al., PSIVT 2009<\/span>: Advances in Image and Video Technology<\/a><\/span>, pp. 585-596<\/span>).\r\n\r\nAnother challenge is texturing avatars from images of real people. The avatar constructed from the images differs in geometry from what the original images show. Those differences arise from the source of the avatar geometry (such as a modified generic model), from inaccuracies in the geometry reconstructed from stereo, from inaccuracies in the camera positions, and, last but not least, from simplification of the avatar geometry to fit the needs of real-time applications. Our system, presented at Eurographics 2010 (\"Seamless Montage for Texturing Models<\/a>\", Gal et al.), is able to compensate for all of the above and generate a consistent texturing of the avatar, even if the geometry of the avatar is quite different from that of the actual person.\r\n

Figure 2: A texturing of a model where the smoothed geometry is very different from the actual image (for example, note the missing cavity for the driver in the middle of the car, or the lack of wheels in the model).<\/p>\r\nFigure 3: Texturing of a very low level-of-detail model (used for background avatars) using existing news photography. The model contains no facial geometry and differs substantially from the real geometry, yet the generated texturing is consistent."}],"slides":[],"related-researchers":[{"type":"guest","display_name":"Anthony Steed","user_id":500381,"people_section":"Section name 1","alias":""}],"msr_research_lab":[199565],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/491900"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":22,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/491900\/revisions"}],"predecessor-version":[{"id":646857,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/491900\/revisions\/646857"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/645876"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=491900"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=491900"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=491900"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=491900"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=491900"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}