{"id":280028,"date":"2016-09-14T17:37:41","date_gmt":"2016-09-15T00:37:41","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=280028"},"modified":"2017-01-20T14:16:35","modified_gmt":"2017-01-20T22:16:35","slug":"emotion-recognition-for-ms-cognitive-services","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/emotion-recognition-for-ms-cognitive-services\/","title":{"rendered":"Emotion Recognition for MS Cognitive Services"},"content":{"rendered":"

Emotion Recognition takes an image with faces as input and returns, for each face in the image, the confidence across a set of emotions as well as a bounding box for the face (using the MS Face API). The algorithm infers emotions from appearance using a custom deep convolutional network. To improve labeling, we labeled each image with more than 10 taggers via crowdsourcing, which allowed us to learn a probability distribution of emotions for each image.<\/p>\n
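The crowdsourced labeling step above can be sketched as follows: each image's raw tagger votes are normalized into a probability distribution over the emotion classes, which a network can then be trained against instead of a single hard label. This is a minimal illustrative sketch, not the project's actual pipeline; the class list matches the eight emotions used in the public FER+ repository, and the function name is our own.

```python
# Illustrative sketch: turn per-image crowd-tagger votes into a label
# probability distribution, in the spirit of the FER+ labeling scheme.
from collections import Counter

# The eight emotion classes used in the public FER+ dataset.
EMOTIONS = ["neutral", "happiness", "surprise", "sadness",
            "anger", "disgust", "fear", "contempt"]

def label_distribution(tags):
    """Convert raw tagger votes (e.g. 10+ per image) into a
    probability distribution over EMOTIONS, in that order."""
    counts = Counter(tags)
    total = sum(counts[e] for e in EMOTIONS)
    return [counts[e] / total for e in EMOTIONS]

# Example: 10 taggers, mostly agreeing on "happiness".
votes = ["happiness"] * 7 + ["neutral"] * 2 + ["surprise"]
dist = label_distribution(votes)
# dist sums to 1.0; the "happiness" entry is 0.7
```

Training on such distributions (e.g. with a cross-entropy loss against the soft labels) lets the model reflect genuine ambiguity between emotions rather than forcing a single majority-vote class.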

We are also publicly sharing an enhanced version of the FER emotion dataset, called FER+: https:\/\/github.com\/Microsoft\/FERPlus (opens in new tab)<\/span><\/a> for the research community.<\/p>\n","protected":false},"excerpt":{"rendered":"

Emotion Recognition takes an image with faces as input and returns, for each face in the image, the confidence across a set of emotions as well as a bounding box for the face (using the MS Face API). The algorithm infers emotions from appearance using a custom deep convolutional network. To improve labeling, we labeled each image with more […]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"footnotes":""},"research-area":[13556,13562],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-280028","msr-project","type-msr-project","status-publish","hentry","msr-research-area-artificial-intelligence","msr-research-area-computer-vision","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"2015-07-01","related-publications":[],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[610674],"related-articles":[],"tab-content":[{"id":0,"name":"","content":""}],"slides":[],"related-researchers":[{"type":"user_nicename","value":"chazhang","display_name":"Cha Zhang","author_link":"Cha Zhang<\/a>","is_active":false,"user_id":31379,"last_first":"Zhang, 
Cha","people_section":0,"alias":"chazhang"}],"msr_research_lab":[],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/280028"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":2,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/280028\/revisions"}],"predecessor-version":[{"id":356333,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/280028\/revisions\/356333"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=280028"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=280028"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=280028"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=280028"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=280028"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}