{"id":848023,"date":"2022-05-31T09:00:00","date_gmt":"2022-05-31T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=848023"},"modified":"2022-08-17T08:52:23","modified_gmt":"2022-08-17T15:52:23","slug":"dowhy-evolves-to-independent-pywhy-model-to-help-causal-inference-grow","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/dowhy-evolves-to-independent-pywhy-model-to-help-causal-inference-grow\/","title":{"rendered":"DoWhy evolves to independent PyWhy model to help causal inference grow"},"content":{"rendered":"\n
\"A<\/figure>\n\n\n\n

Identifying causal effects is an integral part of scientific inquiry. It helps us understand everything from educational outcomes to the effects of social policies to risk factors for diseases. Questions of cause and effect are also critical for the design and data-driven evaluation of many of the technological systems we build today.

To help data scientists better understand and deploy causal inference, Microsoft researchers built a tool that implements the process of causal inference analysis from end to end. The ensuing DoWhy library has been doing just that since 2018 and has cultivated a community devoted to applying causal inference principles in data science. To broaden access to this critical knowledge base, DoWhy is migrating to an independent open-source governance model in a new PyWhy GitHub organization. As a first step toward this model, we are announcing a collaboration with Amazon Web Services (AWS), which is contributing new technology based on structural causal models.

## What is causal inference?

The goal of conventional machine learning methods is to predict an outcome. In contrast, causal inference focuses on the effect of a decision or action: that is, the difference between the outcome if an action is completed versus not completed. For example, consider a public utility company seeking to reduce their customers' usage of water through a marketing and rewards program. The effectiveness of a rewards program is difficult to ascertain, as any decrease in water usage by participating customers is confounded with their choice to participate in the program. If we observe that a rewards program member uses less water, how do we know whether it is the program that is incentivizing their lower water usage or if customers who were already planning to reduce water usage also chose to join the program? Given information about the drivers of customer behavior, causal methods can disentangle confounding factors and identify the effect of this rewards program.
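To see the confounding problem concretely, here is a small simulation sketch. Everything in it is hypothetical: the variable names, signup rates, and effect sizes were invented purely for illustration. The naive comparison of signups versus non-signups overstates the program's effect because customers already planning to cut usage are more likely to enroll:

```python
# A toy simulation of the rewards-program scenario; all names, rates,
# and effect sizes here are made up purely to illustrate confounding.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000
# Customers already planning to reduce usage (the hidden driver) ...
planned = rng.binomial(1, 0.3, n)
# ... are much more likely to join the rewards program.
signup = rng.binomial(1, 0.1 + 0.6 * planned)
# True program effect: -3 units; planning alone is worth -8 units.
usage = 50 - 3 * signup - 8 * planned + rng.normal(0, 2, n)
df = pd.DataFrame({"signup": signup, "planned": planned, "usage": usage})

# Naive comparison mixes the program effect with the confounder (~ -8).
naive = df[df.signup == 1].usage.mean() - df[df.signup == 0].usage.mean()

# Comparing within strata of the confounder recovers roughly -3.
adjusted = np.mean([
    df[(df.signup == 1) & (df.planned == p)].usage.mean()
    - df[(df.signup == 0) & (df.planned == p)].usage.mean()
    for p in (0, 1)])  # equal-weight average; effect is homogeneous here
print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

Running this, the naive difference lands near -8 even though the simulated program effect is only -3; stratifying on the confounder recovers the truth. Real analyses rarely observe every confounder, which is why the assumptions-first workflow described below matters.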

\"Figure<\/a>
Figure 1: A public utility introduces a program that rewards water usage reduction. Are people who sign up using less water than they would have otherwise? <\/figcaption><\/figure>\n\n\n\n

How do we know when we have the right answer? The effect of an action like signing up for a customer loyalty program is typically *not* an observable value. For any given customer, we see only one of the two respective outcomes and cannot directly observe the difference the program made. This means that processes developed to validate conventional machine learning models, based on comparing predictions to observed ground truths, cannot be used. Instead, we need new processes to gain confidence in the reliability of causal inference. Most critically, we need to capture our domain knowledge, reason about our modeling choices, then validate our core assumptions when possible and analyze the sensitivity of our results to violations of assumptions when validation is not possible.


### Four steps of causal inference analysis

Data scientists just beginning to explore causal inference are most challenged by the new modeling assumptions of causal methods. DoWhy can help them understand and implement the process. The library focuses on the four steps of an end-to-end causal inference analysis, which are discussed in detail in a previous paper, DoWhy: An End-to-End Library for Causal Inference, and a related blog post:

1. **Modeling**: Causal reasoning begins with the creation of a clear model of the causal assumptions being made. This involves documenting what is known about the data generating process and mechanisms. To get a valid answer to our cause-and-effect questions, we must be explicit about what we already know.

2. **Identification**: Next, we use the model to decide whether the causal question can be answered, and we provide the required expression to be computed. Identification is the process of analyzing our model.

3. **Estimation**: Once we have a strategy for identifying the causal effect, we can choose from several different statistical and machine learning-based estimation methods to answer our causal question. Estimation is the process of analyzing our data.

4. **Refutation**: Once we have our answer, we must do everything we can to test our underlying assumptions. Is our model consistent with the data? How sensitive is the answer to the assumptions made? If the model missed an unobserved confounder, will that change our answer a little or a lot?

This focus on the four steps of the end-to-end causal inference process differentiates the DoWhy library from prior causal inference toolkits. DoWhy complements other libraries, which focus on individual steps, and offers users the benefits of those libraries in a seamless, unified API. For example, for estimation, DoWhy offers the ability to call out to Microsoft's EconML library for its advanced estimation methods.
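As a concrete sketch of the four steps, the snippet below runs DoWhy's `CausalModel` API on synthetic data for the rewards-program scenario; the column names and simulated effect sizes are again invented for illustration, and a real analysis would typically encode its domain knowledge in a causal graph rather than a single known confounder:

```python
# A sketch of DoWhy's four-step workflow on synthetic rewards-program
# data; names and effect sizes are illustrative, not from a real study.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 5000
frugal = rng.binomial(1, 0.5, n)              # confounder
signup = rng.binomial(1, 0.2 + 0.5 * frugal)  # treatment
usage = 100 - 5 * signup - 10 * frugal + rng.normal(0, 5, n)  # outcome
df = pd.DataFrame({"signup": signup, "frugal": frugal, "usage": usage})

# Step 1: Modeling -- state the causal assumptions explicitly.
model = CausalModel(data=df, treatment="signup", outcome="usage",
                    common_causes=["frugal"])

# Step 2: Identification -- analyze the model to derive an estimand.
estimand = model.identify_effect(proceed_when_unidentifiable=True)

# Step 3: Estimation -- analyze the data with a chosen estimator.
estimate = model.estimate_effect(
    estimand, method_name="backdoor.linear_regression")
print(estimate.value)  # should recover roughly the simulated -5

# Step 4: Refutation -- stress-test the estimate against assumptions.
refutation = model.refute_estimate(
    estimand, estimate, method_name="placebo_treatment_refuter")
print(refutation)
```

The placebo refuter shown here replaces the true treatment with a randomly generated one and re-estimates; a trustworthy pipeline should see the placebo effect shrink toward zero, giving some confidence that the original estimate is not manufactured from noise.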

### Current DoWhy deployments

Today, DoWhy has been installed over one million times. It is widely deployed in production scenarios across industry and academia, from evaluating the effects of customer loyalty and marketing programs to identifying the controllable drivers of key business metrics. DoWhy's rich API has enabled the creation of downstream solutions such as AutoCausality from Wise.com, which automates the comparison of different methods, and ShowWhy from Microsoft, which provides a no-code GUI experience for causal inference analysis. In academia, DoWhy has been used in a range of research scenarios, including sustainable building design, environmental data analyses, and health studies. At Microsoft, we continue to use DoWhy to power causal analyses and test their validity, for example, estimating who benefits most from messages to avoid overcommunicating to large groups.

A community of more than 40 researchers and developers continually enriches the library with critical additions. Highly impactful contributions, such as a customizable backdoor criterion implementation and a user-friendly Pandas integration, have come from external contributors. Instructors in courses and workshops around the world use DoWhy as a pedagogical tool to teach causal inference.

With such broad support, DoWhy continues to improve and expand. In addition to more complete implementations of identification algorithms and new sensitivity analysis methods, DoWhy has added experimental support for causal discovery and more powerful methods for testing the validity of a causal estimate. Using the four steps as a set of fundamental operations for causal analysis, DoWhy is now expanding into other tasks, such as representation learning.

Microsoft continues to expand the frontiers of causal learning through its research initiatives, with new approaches to robust learning, statistical advances for causal estimation, deep learning-based methods for end-to-end causal discovery and inference, and investigations into how causal learning can help with fairness, explainability, and interpretability of machine learning models. As each of these technologies matures, we expect to make them available to the broader causal community through open source and product offerings.

### An independent organization for DoWhy and other open-source causal inference projects

Making causality a pillar of data science practice requires an even broader, collaborative effort to create a standardized foundation for our industry.

To this end, we are happy to announce that we are shifting DoWhy into an independent open-source governance model, in a new PyWhy effort.

> *The mission of PyWhy is to build an open-source ecosystem for causal machine learning that advances the state of the art and makes it available to practitioners and researchers. In PyWhy, we will build and host interoperable libraries, tools, and other resources spanning a variety of causal tasks and applications, connected through a common API on foundational causal operations and a focus on the end-to-end analysis process.*

Our first collaborator in this initiative is AWS, which is contributing new technology for causal attribution based on a structural causal model that complements DoWhy's current functionalities.
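As a rough sketch of what analysis with a structural causal model looks like, the snippet below uses DoWhy's `gcm` module, the package that hosts this graph-based functionality. The three-node graph, variable names, and simulated data are assumptions made up for illustration; see the PyWhy documentation for the contributed attribution methods themselves:

```python
# A minimal sketch of structural-causal-model workflows in DoWhy's gcm
# module. The graph and data here are toy assumptions for illustration.
import networkx as nx
import numpy as np
import pandas as pd
from dowhy import gcm

# Hypothetical mechanism: marketing spend -> signups -> water usage.
rng = np.random.default_rng(7)
spend = rng.normal(size=1000)
signups = 2.0 * spend + rng.normal(size=1000)
usage = -3.0 * signups + rng.normal(size=1000)
data = pd.DataFrame({"spend": spend, "signups": signups, "usage": usage})

# Declare the causal graph, auto-assign a mechanism to each node, and fit.
scm = gcm.StructuralCausalModel(
    nx.DiGraph([("spend", "signups"), ("signups", "usage")]))
gcm.auto.assign_causal_mechanisms(scm, data)
gcm.fit(scm, data)

# With a fitted SCM we can answer interventional questions, e.g. expected
# usage if spend were fixed at 1.0 for everyone (roughly 2 * -3 = -6).
samples = gcm.interventional_samples(
    scm, {"spend": lambda x: 1.0}, num_samples_to_draw=1000)
print(samples["usage"].mean())
```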

We are looking forward to accelerating and broadening adoption of our open-source causal learning tools through this new GitHub organization. We invite data scientists, researchers, and engineers, whether you are just learning about causality, already designing new algorithms, or even building your own tools, to join us on the open-source journey towards building a useful causal analysis ecosystem.

We encourage you to explore DoWhy and invite you to contact us to learn more. We are excited by what lies ahead as we aim to transform data science practice to drive improved modeling and decision making.
