{"id":417836,"date":"2017-07-28T02:24:22","date_gmt":"2017-07-28T09:24:22","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=417836"},"modified":"2018-10-16T20:08:49","modified_gmt":"2018-10-17T03:08:49","slug":"learning-non-lambertian-object-intrinsics-across-shapenet-categories","status":"publish","type":"msr-research-item","link":"https:\/\/www.microsoft.com\/en-us\/research\/publication\/learning-non-lambertian-object-intrinsics-across-shapenet-categories\/","title":{"rendered":"Learning Non-Lambertian Object Intrinsics across ShapeNet Categories"},"content":{"rendered":"

We consider the non-Lambertian object intrinsics problem of recovering diffuse albedo, shading, and specular highlights from a single image of an object. We build a large-scale object intrinsics database based on existing 3D models in the ShapeNet database. Rendered with realistic environment maps, millions of synthetic images of objects and their corresponding albedo, shading, and specular ground-truth images are used to train an encoder-decoder CNN. Once trained, the network can decompose an image into the product of albedo and shading components, along with an additive specular component. Our CNN delivers accurate and sharp results on this classical inverse problem of computer vision, with the sharp details attributable to skip-layer connections at corresponding resolutions from the encoder to the decoder. Benchmarked on our ShapeNet intrinsics dataset and the MIT intrinsics dataset, our model consistently outperforms the state of the art by a large margin. We train and test our CNN on different object categories. Perhaps surprisingly, especially from a CNN classification perspective, our intrinsics CNN generalizes very well across categories. Our analysis shows that feature learning at the encoder stage is more crucial for developing a universal representation across categories. We apply our model, trained only on synthetic data, to images and videos downloaded from the internet and observe robust and realistic intrinsics results. High-quality non-Lambertian intrinsics could open up many interesting applications, such as image-based albedo and specular editing.
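The decomposition the network performs can be written as I ≈ A ⊙ S + R, with diffuse albedo A, shading S, and an additive specular term R. As a rough illustration of the kind of architecture the abstract describes, the snippet below wires up a small encoder-decoder with skip connections at matching resolutions and three output heads. This is a minimal sketch in PyTorch, not the authors' released model; the layer widths, the single-channel shading head, and the name IntrinsicsNet are illustrative assumptions.

# Minimal sketch (illustrative, not the paper's code) of an encoder-decoder
# CNN with skip connections predicting albedo A, shading S, and specular R
# from a single image I, under the composition I = A * S + R.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # two 3x3 convolutions with ReLU, preserving spatial resolution
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class IntrinsicsNet(nn.Module):
    def __init__(self):
        super().__init__()
        # encoder: the stage the paper's analysis finds most crucial for
        # learning a representation that generalizes across categories
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        # decoder with skip connections from the encoder at matching resolutions
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec2 = conv_block(128 + 64, 64)
        self.dec1 = conv_block(64 + 32, 32)
        # three output heads: albedo, shading, specular
        self.albedo = nn.Conv2d(32, 3, 1)
        self.shading = nn.Conv2d(32, 1, 1)
        self.specular = nn.Conv2d(32, 3, 1)

    def forward(self, image):
        e1 = self.enc1(image)                 # full resolution
        e2 = self.enc2(self.pool(e1))         # 1/2 resolution
        e3 = self.enc3(self.pool(e2))         # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up(e3), e2], dim=1))  # skip from e2
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))  # skip from e1
        A = torch.sigmoid(self.albedo(d1))    # diffuse albedo in [0, 1]
        S = torch.relu(self.shading(d1))      # non-negative shading
        R = torch.relu(self.specular(d1))     # additive specular component
        return A, S, R

# Recomposition check: I ≈ A * S + R, with shading broadcast over the RGB channels.
net = IntrinsicsNet()
image = torch.rand(1, 3, 64, 64)
A, S, R = net(image)
reconstruction = A * S + R
print(A.shape, S.shape, R.shape, reconstruction.shape)

In such a setup, the ground-truth albedo, shading, and specular renderings from the synthetic ShapeNet-based database would supervise the three heads directly, which is what makes the large-scale rendered dataset central to the approach.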

Authors: Jian Shi, Yue Dong, Hao Su, Stella X. Yu
Published in: CVPR 2017, July 25, 2017
Research areas: Computer Vision; Graphics and Multimedia
PDF: https://www.microsoft.com/en-us/research/wp-content/uploads/2017/07/s_intrinsic.pdf
Project page: http://yuedong.shading.me/project/s_intrinsic/s_intrinsic.htm