{"id":800596,"date":"2021-11-29T18:05:38","date_gmt":"2021-11-30T02:05:38","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=800596"},"modified":"2021-11-29T18:05:38","modified_gmt":"2021-11-30T02:05:38","slug":"online-discrepancy-with-recourse-for-vectors-and-graphs","status":"publish","type":"msr-research-item","link":"https:\/\/www.microsoft.com\/en-us\/research\/publication\/online-discrepancy-with-recourse-for-vectors-and-graphs\/","title":{"rendered":"Online Discrepancy with Recourse for Vectors and Graphs"},"content":{"rendered":"

The vector-balancing problem is a fundamental problem in discrepancy theory: given T vectors in [−1,1]^n, find a signing σ(a) ∈ {±1} of each vector a to minimize the discrepancy ‖∑_a σ(a)·a‖_∞. This problem has been extensively studied in the static/offline setting. In this paper we initiate its study in the fully-dynamic setting with recourse: the algorithm sees a stream of T insertions and deletions of vectors, and at each time step it must maintain a low-discrepancy signing while also minimizing the amortized recourse (the number of times any vector changes its sign) per update.
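
To make the objects in this definition concrete, here is a minimal sketch in Python (not an algorithm from the paper, and the class and method names are purely illustrative): it maintains a signing for a dynamic set of vectors, assigns each newly inserted vector the greedy sign that keeps the running sum smaller, makes no sign changes on deletions, and reports the current discrepancy ‖∑_a σ(a)·a‖_∞.

```python
# Toy illustration of the problem statement, not the paper's algorithm.
import numpy as np

class DynamicSigning:
    def __init__(self, n):
        self.running_sum = np.zeros(n)   # current signed sum of all vectors
        self.signs = {}                  # vector id -> (sign, vector)

    def insert(self, vid, a):
        """Insert vector a (entries in [-1, 1]) with the greedy sign."""
        a = np.asarray(a, dtype=float)
        plus = np.max(np.abs(self.running_sum + a))
        minus = np.max(np.abs(self.running_sum - a))
        s = 1 if plus <= minus else -1
        self.signs[vid] = (s, a)
        self.running_sum += s * a
        return s

    def delete(self, vid):
        """Remove a vector; the remaining signs are left untouched."""
        s, a = self.signs.pop(vid)
        self.running_sum -= s * a

    def discrepancy(self):
        """|| sum_a sigma(a) * a ||_inf for the current signing."""
        return np.max(np.abs(self.running_sum))
```

A deletion here leaves every other sign unchanged, so the recourse is zero but the discrepancy can degrade arbitrarily; the point of the paper is precisely to control both quantities at once.
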
For general vectors, we show algorithms that almost match Spencer's O(√n) offline discrepancy bound, with O(n · polylog T) amortized recourse per update. The crucial idea is to compute a basic feasible solution to the linear relaxation in a distributed and recursive manner, which helps find a low-discrepancy signing. To bound the recourse, we argue that only a small part of the instance needs to be recomputed at each update.
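
The following hedged sketch illustrates only the plain offline linear relaxation mentioned above, not the paper's distributed and recursive computation: each vector gets a fractional sign x_a in [−1,1], a variable t upper-bounds the signed sum in every coordinate, and a vertex (basic feasible) solution, which has at most roughly 2n strictly fractional coordinates, is rounded naively by sign. It uses scipy.optimize.linprog with the HiGHS backend; the rounding step is an assumption for illustration, not the paper's procedure.

```python
# Offline LP-relaxation sketch (illustrative only, not the paper's method).
import numpy as np
from scipy.optimize import linprog

def lp_relaxation_signing(vectors):
    """vectors: list of length-n arrays with entries in [-1, 1]."""
    A = np.array(vectors, dtype=float)            # shape (T, n)
    T, n = A.shape
    # Variables: x_1..x_T, then t.  Objective: minimize t.
    c = np.zeros(T + 1)
    c[-1] = 1.0
    # For each coordinate i:   sum_a x_a * A[a, i] - t <= 0
    #                         -sum_a x_a * A[a, i] - t <= 0
    A_ub = np.zeros((2 * n, T + 1))
    A_ub[:n, :T] = A.T
    A_ub[n:, :T] = -A.T
    A_ub[:, -1] = -1.0
    b_ub = np.zeros(2 * n)
    bounds = [(-1.0, 1.0)] * T + [(0.0, None)]
    # HiGHS (simplex) typically returns a vertex solution of the LP.
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    x = res.x[:T]
    # Naive rounding of the (mostly integral) fractional signs.
    signs = np.where(x >= 0, 1, -1)
    return signs, np.max(np.abs(signs @ A))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vecs = rng.uniform(-1, 1, size=(50, 8))
    signs, disc = lp_relaxation_signing(list(vecs))
    print("discrepancy of rounded LP solution:", disc)
```
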
Since vector balancing has also been studied extensively for sparse vectors, we then give algorithms for low-discrepancy edge orientation, where we dynamically maintain signings for 2-sparse vectors. Equivalently, this can be seen as orienting a dynamic set of edges of an n-vertex graph to minimize the absolute difference between the in- and out-degree at any vertex. We present a deterministic algorithm with O(polylog n) discrepancy and O(polylog n) amortized recourse. The core ideas are to dynamically maintain an expander decomposition with low recourse and then to show that, as the expanders change over time, a natural local-search algorithm converges quickly (i.e., with low recourse) to a low-discrepancy solution. We also give strong lower bounds for local-search discrepancy minimization algorithms.
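
As a toy illustration of the local-search viewpoint (and emphatically not the paper's expander-decomposition-based algorithm; the names below are hypothetical), the sketch maintains an orientation of a dynamic edge set and flips any edge u→v whose tail imbalance exceeds the head imbalance by at least 3, where disc(v) = out-degree(v) − in-degree(v). Each such flip strictly decreases the potential ∑_v disc(v)², so the search terminates.

```python
# Minimal local-search sketch for dynamic edge orientation (illustrative only).
from collections import defaultdict

class EdgeOrientation:
    def __init__(self):
        self.orient = {}                  # edge id -> (tail, head), i.e. tail -> head
        self.disc = defaultdict(int)      # vertex -> out-degree minus in-degree

    def _flip(self, eid):
        u, v = self.orient[eid]
        self.orient[eid] = (v, u)
        self.disc[u] -= 2
        self.disc[v] += 2

    def _local_search(self):
        # Flip edges while some flip decreases the potential sum_v disc(v)^2.
        improved = True
        while improved:
            improved = False
            for eid, (u, v) in list(self.orient.items()):
                if self.disc[u] - self.disc[v] >= 3:
                    self._flip(eid)
                    improved = True

    def insert(self, eid, u, v):
        # Point the new edge into the endpoint with larger imbalance, then fix up.
        if self.disc[u] >= self.disc[v]:
            u, v = v, u
        self.orient[eid] = (u, v)
        self.disc[u] += 1
        self.disc[v] -= 1
        self._local_search()

    def delete(self, eid):
        u, v = self.orient.pop(eid)
        self.disc[u] -= 1
        self.disc[v] += 1
        self._local_search()

    def discrepancy(self):
        return max((abs(d) for d in self.disc.values()), default=0)
```

The paper instead maintains an expander decomposition with low recourse and runs local search within the changing expanders; the strong lower bounds for local-search discrepancy minimization mentioned above suggest that an unstructured local search like this one carries no comparable guarantee on its own.
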

Published: January 2022 (inproceedings)
MSR author: Ravishankar Krishnaswamy
Research area: Algorithms
arXiv: https://arxiv.org/abs/2111.06308

Related project: Optimization with Uncertainty. Classical algorithms (exact/approximation) work with an input which is entirely specified up front. While this offline model is useful for static optimization problems, there are several domains which need algorithms to make decisions with partial/uncertain information which evolves over time. We seek to design algorithms in such uncertain environments and also design frameworks to evaluate their quality using different metrics.
Models such as online algorithms, stochastic optimization, and algorithms with recourse fall into this…