{"id":366212,"date":"2017-03-27T11:23:20","date_gmt":"2017-03-27T18:23:20","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=366212"},"modified":"2020-03-13T08:30:52","modified_gmt":"2020-03-13T15:30:52","slug":"visualization-for-machine-teaching","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/visualization-for-machine-teaching\/","title":{"rendered":"Visualization for Machine Teaching"},"content":{"rendered":"

A large body of human-computer interaction research has focused on developing metaphors and tools that allow people to effectively issue commands and directly manipulate informational objects. However, with the advancement of computational techniques such as machine learning, we now have an unprecedented ability to embed ‘smarts’ that allow machines to assist and empower people in completing their tasks. We believe there exists a computational design methodology that allows us to gracefully combine automated services with direct user manipulation. We explore ways to help people easily build machine learning models by leveraging information visualization, and we aim to effectively support understanding and debugging of those models.


Squares

\"\"<\/p>\n

Squares is a performance visualization for multiclass classification problems. Squares supports estimating common performance metrics while displaying the instance-level distribution information practitioners need to prioritize efforts and access the data. Our controlled study shows that practitioners can assess performance significantly faster and more accurately with Squares than with a confusion matrix, a common performance analysis tool in machine learning.
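As a point of reference for what Squares complements, the sketch below computes the kind of per-class summary a confusion matrix provides; the labels and predictions are made-up examples, not data from the study.

```python
# Minimal sketch (assumed example data): the per-class summary a confusion
# matrix provides, which Squares augments with instance-level score distributions.
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

y_true = ["cat", "dog", "bird", "dog", "cat", "bird", "dog"]
y_pred = ["cat", "dog", "dog", "dog", "bird", "bird", "cat"]
labels = ["bird", "cat", "dog"]

cm = confusion_matrix(y_true, y_pred, labels=labels)
prec, rec, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=labels, zero_division=0)

print("confusion matrix (rows = true class, columns = predicted class):")
print(cm)
for i, label in enumerate(labels):
    print(f"{label}: precision={prec[i]:.2f} recall={rec[i]:.2f} "
          f"f1={f1[i]:.2f} n={support[i]}")
```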


ModelTracker

\"\"<\/p>\n

ModelTracker is an interactive visualization that subsumes the information contained in numerous traditional summary statistics and graphs while displaying example-level performance and enabling direct error examination and debugging. Usage analysis of machine learning practitioners building real models with ModelTracker over six months shows that it is used often and throughout model building. A controlled experiment focusing on ModelTracker's debugging capabilities shows that participants prefer ModelTracker over traditional tools, with no loss in model performance.
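To make "example-level performance" concrete, here is a minimal sketch of the raw material such a view is built on: each labeled example paired with its current model score, sorted so that misclassified and low-confidence examples are easy to inspect. The model, synthetic data, and 0.5 threshold are assumptions for illustration, not ModelTracker's implementation.

```python
# Minimal sketch (synthetic data, assumed threshold): per-example scores and
# correctness, the kind of example-level information an interactive debugging
# view can surface directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=40) > 0).astype(int)

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]        # per-example score in [0, 1]
predicted = (scores >= 0.5).astype(int)      # assumed decision threshold

# Sort examples by score and flag the misclassified ones for inspection.
rows = sorted(zip(range(len(y)), y, scores, predicted == y), key=lambda r: r[2])
for idx, label, score, correct in rows:
    if not correct:
        print(f"example {idx}: label={label} score={score:.2f}  <-- misclassified")
```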


FeatureInsight

\"\"Machine learning requires an effective combination of data, features, and algorithms. While many tools exist for working with machine learning data and algorithms, support for thinking of new features, or feature ideation, remains poor. We investigated two general approaches to helping machine learning practitioners think of new features: visual summaries and sets of errors. FeatureInsight is\u00a0an interactive visual analytics tool for building new dictionary features (semantically related groups of words) for text classification problems. It supports an error-driven feature ideation process and provides interactive visual summaries of sets of misclassified documents. Our study results show that visual summaries significantly improve feature ideation, especially in combination with sets of errors. People also preferred visual summaries over viewing raw data, and only preferred examining sets when visual summaries were provided.<\/p>\n


CueT

\"\"Network alarm triage refers to grouping and prioritizing a stream of low-level device health information to help operators find and fix problems. Today, this process tends to be largely manual because existing tools cannot easily evolve with the network. CueT uses interactive machine learning to learn from the triaging decisions of operators. It then uses that learning in novel visualizations to help them quickly and accurately triage alarms. Unlike prior interactive machine learning systems, CueT handles a highly dynamic environment where the groups of interest are not known a-priori and evolve constantly. A user study with real operators and data from a large network shows that CueT significantly improves the speed and accuracy of alarm triage compared to the network\u2019s current practice.<\/p>\n


NetClinic

\"\"<\/p>\n

Diagnosing faults in an operational computer network is a frustrating, time-consuming exercise. Despite advances, automatic diagnostic tools are far from perfect: they occasionally miss the true culprit and are mostly only good at narrowing the search down to a few potential culprits. This uncertainty, and the difficulty of making sense of tool output, renders most tools unusable for administrators. To bridge this gap, NetClinic couples interactive visualization with an automated diagnostic tool for enterprise networks. It enables administrators to verify the output of the automatic analysis at different levels of detail and to move seamlessly across levels while retaining appropriate context. A qualitative user study shows that NetClinic users can accurately identify the culprit, even when it is not present in the suggestions made by the automated component. We also find that supporting a variety of sensemaking strategies is key to the success of systems that enhance automated diagnosis.


ManiMatrix

\"\"<\/p>\n

Interest has been growing within HCI in using machine learning and reasoning to classify hidden states, such as user intentions, based on observations. HCI researchers with these interests typically have little expertise in machine learning and often employ toolkits as relatively fixed “black boxes” for generating statistical classifiers. However, tailoring the performance of classifiers to specific application requirements may require a more sophisticated understanding and custom-tailoring of methods. ManiMatrix provides controls and visualizations that enable system builders to refine the behavior of classification systems in an intuitive manner. With ManiMatrix, people directly refine the parameters of a confusion matrix via an interactive cycle of reclassification and visualization. We evaluated the effectiveness of the approach in a user study. Results show that participants are able to quickly and effectively modify the decision boundaries of classifiers to tailor their behavior to the problems at hand.
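The reclassification cycle can be illustrated with a minimal sketch: shifting a decision threshold (the simplest kind of decision-boundary parameter) and recomputing the confusion matrix to see how errors move between cells. The synthetic data and thresholds are assumptions, not the ManiMatrix implementation.

```python
# Minimal sketch (synthetic data, assumed thresholds): each threshold setting is
# one reclassification step, and the recomputed confusion matrix shows how
# errors shift between cells.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] + 0.4 * rng.normal(size=200) > 0).astype(int)

scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

for threshold in (0.3, 0.5, 0.7):
    predicted = (scores >= threshold).astype(int)
    print(f"threshold={threshold}")
    print(confusion_matrix(y, predicted))    # rows = true class, cols = predicted
```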


EnsembleMatrix

\"\"In ensemble learning, algorithms combine multiple classifiers to build one that is superior to its components. EnsembleMatrix is an interactive visualization system that presents a graphical view of confusion matrices to help people understand relative merits of various classifiers. EnsembleMatrix allows people to directly interact with the visualizations in order to explore and build combination models. We evaluate the efficacy of the system and the approach in a user study. Results show that participants are able to quickly combine multiple classifiers operating on multiple feature sets to produce an ensemble classifier with accuracy that approaches best-reported performance classifying images in the CalTech-101 dataset.<\/p>\nOpens in a new tab<\/span>","protected":false},"excerpt":{"rendered":"
