{"id":416570,"date":"2017-07-27T06:00:39","date_gmt":"2017-07-27T13:00:39","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=416570"},"modified":"2017-07-27T10:06:49","modified_gmt":"2017-07-27T17:06:49","slug":"ai-with-creative-eyes-amplifies-the-artistic-sense-of-everyone","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/ai-with-creative-eyes-amplifies-the-artistic-sense-of-everyone\/","title":{"rendered":"AI with creative eyes amplifies the artistic sense of everyone"},"content":{"rendered":"
By Gang Hua, Principal Researcher, Research Manager

Recent advances in the branch of artificial intelligence (AI) known as machine learning are helping everyone, including artistically challenged people such as myself, transform images and videos into creative and shareable works of art.

AI-powered computer vision techniques pioneered by researchers from Microsoft's Redmond and Beijing research labs, for example, provide new ways for people to transfer artistic styles to their photographs and videos, as well as swap the visual style of two images, such as the face of a character from the movie Avatar and the Mona Lisa.

The style transfer technique for photographs, known as StyleBank, shipped this June in an update to Microsoft Pix, a smartphone application that uses intelligent algorithms published in more than 20 research papers from Microsoft Research to help users get great photos with every tap of the shutter button.

The field of style transfer research explores ways to transfer an artistic style from one image to another, such as the style of post-impressionism onto a picture of your flower garden. For applications such as Microsoft Pix, a challenge is to offer users multiple styles to choose from and the ability to transfer styles to their images quickly and efficiently.

Our solution, StyleBank, explicitly represents visual styles as a set of convolutional filter banks, with each bank representing one style. To transfer an image to a specific style, an auto-encoder decomposes the input image into multi-layer feature maps that are independent of any style. The filter bank for the chosen style is convolved with those feature maps, and the result then passes through a decoder to render the image in that style.

The network completely decouples styles from the content. Because of this explicit representation, we can both train new styles and render stylized images more efficiently than existing offerings in this space.
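To make the filter-bank idea concrete, here is a minimal sketch in PyTorch of how a shared auto-encoder and one convolutional filter bank per style could fit together. The class name, layer sizes and parameters (StyleBankSketch, num_styles, channels) are illustrative assumptions, not the implementation that ships in Microsoft Pix.

```python
# Illustrative sketch only: one convolutional filter bank per style, inserted
# between a shared encoder and decoder in the spirit of StyleBank. Layer sizes,
# names and training details are assumptions, not the Microsoft Pix code.
import torch
import torch.nn as nn

class StyleBankSketch(nn.Module):
    def __init__(self, num_styles, channels=128):
        super().__init__()
        # Shared auto-encoder: decomposes an image into style-agnostic feature maps.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 9, stride=1, padding=4), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # One filter bank per style; each bank explicitly represents one style.
        self.style_banks = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_styles)]
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, 64, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 9, stride=1, padding=4),
        )

    def forward(self, image, style_id=None):
        features = self.encoder(image)
        if style_id is not None:
            # Convolve the style-agnostic features with the chosen style's filter bank.
            features = self.style_banks[style_id](features)
        # With style_id=None the auto-encoder simply reconstructs its input, which is
        # how the content representation stays decoupled from every style.
        return self.decoder(features)

# Example: stylize one 256x256 image with style #2.
# model = StyleBankSketch(num_styles=8)
# stylized = model(torch.rand(1, 3, 256, 256), style_id=2)
```

In an arrangement like this, adding a new style can amount to training only one new filter bank while the shared encoder and decoder stay fixed, which is one reason the explicit representation makes training new styles and rendering stylized images comparatively efficient.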
The StyleBank research is a collaboration between Beijing lab researchers Lu Yuan and Jing Liao, intern Dongdong Chen and me. We collaborated closely with the broader Microsoft Pix team within Microsoft's research organization to integrate the style transfer feature with the smartphone application. Our team presented the work at the 2017 Conference on Computer Vision and Pattern Recognition, July 21-26 in Honolulu, Hawaii.

We are also extending the StyleBank technology to render stable stylized videos in an online fashion. Our technique is described in a paper to be presented at the 2017 International Conference on Computer Vision in Venice, Italy, October 22-29.

Our approach leverages temporal information about feature correspondences between consecutive frames to achieve consistent and stable stylized video sequences in near real time. The technique adaptively blends feature maps from the previous frame and the current frame to avoid the ghosting artifacts that are prevalent in techniques that render videos frame by frame.
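The blending step can be pictured as a per-pixel interpolation between two sets of feature maps. The sketch below assumes the previous frame's features have already been warped to the current frame (for example with optical flow) and that a consistency mask marks where that warp can be trusted; the helper names (warp, consistency) and the specific blending rule are assumptions for illustration, not the exact method in the paper.

```python
# Illustrative sketch of temporally blended feature maps for online video
# stylization. Inputs are assumed to be torch tensors produced by a
# StyleBank-style encoder; warp() and consistency() are hypothetical helpers.

def blend_feature_maps(current_feats, warped_prev_feats, consistency_mask):
    # Reuse the previous frame's (warped) features where the correspondence is
    # reliable (mask near 1) and fall back to the current frame's features
    # elsewhere, which suppresses the flicker and ghosting of frame-by-frame rendering.
    return consistency_mask * warped_prev_feats + (1.0 - consistency_mask) * current_feats

def stylize_video(frames, encoder, style_bank, decoder, warp, consistency):
    # Online processing: each frame only needs the previous frame's features,
    # so the video can be rendered as it streams rather than as a whole batch.
    prev_feats = None
    for frame in frames:
        feats = encoder(frame)
        if prev_feats is not None:
            warped = warp(prev_feats, frame)      # flow-warp previous features
            mask = consistency(warped, feats)     # per-pixel reliability in [0, 1]
            feats = blend_feature_maps(feats, warped, mask)
        prev_feats = feats
        yield decoder(style_bank(feats))
```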
A third paper, which I co-authored with Jing Liao and Lu Yuan along with my Redmond colleague Sing Bing Kang for presentation at SIGGRAPH 2017, July 30 - August 2 in Los Angeles, describes a technique for visual attribute transfer across images with distinct appearances but perceptually similar semantic structure; that is, images that contain similar visual content.

For example, the technique can put the face of a character from the movie Avatar onto an image of Leonardo da Vinci's famous painting of the Mona Lisa, and the face of the Mona Lisa onto the character from Avatar. We call our technique deep image analogy. It works by finding dense semantic correspondences between two input images.

We look forward to sharing more details about these techniques to transform images and videos into creative and shareable works of art at the premier computer vision conferences this summer and fall.