{"id":503525,"date":"2018-08-28T19:38:59","date_gmt":"2018-08-29T02:38:59","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=503525"},"modified":"2019-02-14T09:52:00","modified_gmt":"2019-02-14T17:52:00","slug":"fast-discrete-cross-modal-hashing-with-regressing-from-semantic-labels","status":"publish","type":"msr-research-item","link":"https:\/\/www.microsoft.com\/en-us\/research\/publication\/fast-discrete-cross-modal-hashing-with-regressing-from-semantic-labels\/","title":{"rendered":"Fast Discrete Cross-modal Hashing With Regressing From Semantic Labels"},"content":{"rendered":"

Hashing has recently received great attention in cross-modal retrieval. Cross-modal retrieval aims to retrieve information across heterogeneous modalities (e.g., texts vs. images). Cross-modal hashing compresses heterogeneous high-dimensional data into compact binary codes while preserving similarity, which makes both retrieval and storage efficient. In this study, we propose a novel fast discrete cross-modal hashing (FDCH) method that regresses from semantic labels, taking advantage of supervised labels to improve retrieval performance. In contrast to existing methods that learn a projection from hash codes to semantic labels, the proposed FDCH regresses the semantic labels of training examples to the corresponding hash codes with a drift. This not only accelerates the hash learning process, but also helps generate stable hash codes. Furthermore, the drift can adjust the regression and enhance the discriminative capability of the hash codes. In particular, FDCH trains much faster than existing methods.
Comparisons with several state-of-the-art techniques on three benchmark datasets demonstrate the superiority of FDCH under various cross-modal retrieval scenarios.
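The abstract does not state the objective function, but the core idea (regress the semantic labels to the hash codes, with a drift, rather than projecting the codes onto the labels) can be written as a minimal sketch. All notation below (Y, B, P, d, r, c, n, lambda) is assumed for illustration and is not taken from the paper:

\[
\min_{B,\,P,\,d}\ \bigl\| B - \bigl(PY + d\mathbf{1}^{\top}\bigr) \bigr\|_F^{2} \;+\; \lambda \| P \|_F^{2}
\qquad \text{s.t. } B \in \{-1,+1\}^{r \times n},
\]

where Y in {0,1}^{c x n} stacks the semantic label vectors of the n training examples, B is the r-bit hash-code matrix, P is an r x c regression matrix, and d is the drift vector that shifts the regression. By contrast, the existing methods the abstract refers to would fit the reverse mapping, roughly Y ≈ QB, i.e., a projection from hash codes to semantic labels.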
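The claim that compact binary codes make retrieval and storage efficient follows from the fact that, once both modalities are hashed, similarity search reduces to Hamming-distance computation. Below is a small, generic NumPy illustration of that lookup; the random codes merely stand in for codes a learned hash function would produce, and nothing here is specific to FDCH.

# Generic illustration of retrieval with binary codes: rank database items
# (e.g., image codes) against a query (e.g., a text code) by Hamming distance.
import numpy as np

rng = np.random.default_rng(0)
n_database, n_bits = 1000, 64                # hypothetical database size and code length

# Stand-ins for codes produced by a learned hash function; here they are random.
db_codes = rng.integers(0, 2, size=(n_database, n_bits), dtype=np.uint8)
query_code = rng.integers(0, 2, size=(1, n_bits), dtype=np.uint8)

# Pack 64 bits per code into 8 bytes so storage stays compact.
db_packed = np.packbits(db_codes, axis=1)
q_packed = np.packbits(query_code, axis=1)

# Hamming distance = number of differing bits = popcount(xor).
xor = np.bitwise_xor(db_packed, q_packed)
hamming = np.unpackbits(xor, axis=1).sum(axis=1)

# Retrieve the 5 database items closest to the query in Hamming space.
top5 = np.argsort(hamming)[:5]
print("top-5 indices:", top5, "distances:", hamming[top5])

With 64-bit codes each item occupies only 8 bytes, and ranking a query against the whole database is a vectorized XOR plus popcount rather than a floating-point similarity computation.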

Authors: X. Liu, X. Nie, Wenjun Zeng, C. Cui, L. Zhu, Y. Yin
Publisher: ACM
Published: October 22, 2018 (conference paper)
Research areas: Computer Vision; Graphics and Multimedia
Publication URL: http://www.acmmm.org/2018/