{"id":365414,"date":"2017-02-22T09:33:17","date_gmt":"2017-02-22T17:33:17","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=365414"},"modified":"2022-05-30T10:49:49","modified_gmt":"2022-05-30T17:49:49","slug":"data-driven-storytelling","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/data-driven-storytelling\/","title":{"rendered":"Data-Driven Storytelling"},"content":{"rendered":"
Practitioners increasingly use visualizations \u201cin the wild\u201d to tell compelling stories supported by data, and continually develop novel techniques that help integrate data visualization into narrative stories. The visualization research community has recently begun to pay more attention to the need and use of visualization as a storytelling medium to tell engaging visual data-driven stories.\u00a0 In addition to understanding the data story creation process and the techniques used in successful data stories, we explore ways to enable people to easily create data-driven stories.<\/p>\n
<\/p>\n
<\/p>\n
In this work, we study how data videos use narrations and animations to convey information effectively. We conduct a qualitative analysis on 426 clips with visualizations extracted from 60 data videos collected from a variety of media outlets, covering a diverse array of topics. We manually label 816 sentences with 1226 semantic labels and record the composition of 2553 animations through an open coding process. We also analyze how narrations and animations coordinate with each other by assigning links between semantic labels and animations. With 937 (76.4%) semantic labels and 2503 (98.0%) animations linked, we identify four types of narration-animation relationships in the collected clips. Drawing from the findings, we discuss study implications and future research opportunities of data videos.<\/p>\n
<\/p>\n
CAST is an authoring tool that enables the interactive creation of chart animations. It introduces the visual specification of chart animations consisting of keyframes that can be played sequentially or simultaneously, and animation parameters (e.g., duration, delay). Building on Canis, a declarative chart animation grammar that leverages data-enriched SVG charts, CAST supports auto-completion for constructing both keyframes and keyframe sequences. It also enables users to refine the animation specification (e.g., aligning keyframes across tracks to play them together, adjusting delay) with direct manipulation and other parameters for animation effects (e.g., animation type, easing function) using a control panel.<\/p>\n
<\/p>\n
A fundamental part of data visualization is transforming data to map abstract information onto visual attributes. While this abstraction is a powerful basis for data visualization, the connection between the representation and the original underlying data (i.e., what the quantities and measurements actually correspond with in reality) can be lost. On the other hand, virtual reality (VR) is being increasingly used to represent real and abstract models as natural experiences to users. In this work, we explore the potential of using VR to help restore the basic understanding of units and measures that are often abstracted away in data visualization in an approach we call data visceralization. By building VR prototypes as design probes, we identify key themes and factors for data visceralization. We do this first through a critical reflection by the authors, then by involving external participants. We find that data visceralization is an engaging way of understanding the qualitative aspects of physical measures and their real-life form, which complements analytical and quantitative understanding commonly gained from data visualization. However, data visceralization is most effective when there is a one-to-one mapping between data and representation, with transformations such as scaling affecting this understanding.<\/p>\n
<\/p>\n
\nCanis is a high-level domain-specific language that enables declarative specifications of data-driven chart animations. By leveraging data-enriched SVG charts, its grammar of animations can be applied to the charts created by existing chart construction tools. With Canis, designers can select marks from the charts, partition the selected marks into mark units based on data attributes, and apply animation effects to the mark units, with the control of when the effects start. The Canis compiler automatically synthesizes the Lottie animation JSON files, which can be rendered natively across multiple platforms.<\/p>\n
<\/p>\n
\nAn emerging generation of visualization authoring systems supports expressive information visualization without textual programming. As these systems vary in their visualization models, system architectures, and user interfaces, it is challenging to directly compare them using traditional evaluative methods. Recognizing the value of contextualizing our decisions in the broader design space, we present critical reflections on three systems we developed\u2014Lyra, Data Illustrator, and Charticulator. This paper surfaces design knowledge that would have been difficult to convey within the constituent papers of these three systems. We compare and contrast their (previously unmentioned) limitations and trade-offs between expressivity and learnability. We also reflect on common assumptions that we made during the development of our systems, thereby informing future research directions in visualization authoring systems.<\/p>\n
<\/p>\n
ShapeWordle is a new technique to enable the creation of shape-bounded Wordles, in which we fit words to form a given shape. To guide word placement within a shape, we extend the traditional Archimedean spirals to be shape-aware by formulating the spirals in a differential form using the distance field of the shape. To handle non-convex shapes, we introduce a multi-centric Wordle layout method that segments the shape into parts for our shape-aware spirals to adaptively fill the space and generate word placements. In addition, we offer a set of editing interactions to facilitate the creation of semantically-meaningful Wordles.<\/p>\n
<\/p>\n
\nComics are an entertaining and familiar medium for presenting compelling stories about data. However, existing visualization authoring tools do not leverage this expressive medium. In this paper, we seek to incorporate elements of comics into the construction of data-driven stories about dynamic networks. We contribute DataToon, a flexible data comic storyboarding tool that blends analysis and presentation with pen and touch interactions. A storyteller can use DataToon to rapidly generate visualization panels, annotate them, and position them within a canvas to produce a visually compelling narrative. In a user study, participants quickly learned to use DataToon for producing data comics.<\/p>\n
<\/p>\n
\nIn this paper, we discuss the challenges one faces when evaluating authoring systems developed to help people design visualization for communication purposes. We reflect on our own experiences in evaluating the visualization authoring systems that we have developed as well as the evaluation methods used in other recent projects. We also examine alternative approaches for evaluating visualization authoring systems that we believe to be more appropriate than traditional comparative studies. We hope that our discussion is informative, not only for researchers who intend to develop novel visualization authoring systems, but also for reviewers assigned to evaluate the research contributions of these systems. Our discussion concludes with opportunities for facilitating the evaluation and adoption of deployed visualization authoring systems.<\/p>\n
<\/p>\n
The ability to create a highly customized visual representation of data, one tailored to the specificities of the insights to be conveyed, increases the likelihood that these insights will be noticed, understood, and remembered by its audience. This expressiveness also gives the author of this visual representation a competitive advantage in a landscape awash in conventional charts and graphs. Charticulator is an interactive authoring tool that enables the creation of bespoke and reusable chart layouts. Charticulator is our response to existing chart construction interfaces, most of which require authors to choose from predefined chart layouts and thereby preclude the construction of novel charts. In contrast, Charticulator transforms a chart specification into mathematical layout constraints and automatically computes a set of layout attributes using a constraint-solving algorithm to realize the chart. It allows for the articulation of novel layouts with expressive glyphs and links between these glyphs, without requiring any coding or knowledge of constraint satisfaction. Furthermore, thanks to the constraint-based layout approach, Charticulator can export chart designs as reusable templates that can be imported into other visualization tools.<\/p>\n
<\/p>\n
<\/p>\n
Pictographic representations and animation techniques are commonly incorporated into narrative visualizations such as data videos. The general belief is that these techniques may enhance the viewer experience, thus appealing to a broad audience and enticing the viewer to consume the entire video. However, no study has formally assessed the effect of these techniques on data insight communication and viewer engagement. In this paper, we first propose a scale-based questionnaire covering five factors of viewer engagement that we identified from multiple application domains such as game design and marketing. We then validate this questionnaire through a crowdsourcing study on Amazon\u2019s Mechanical Turk to assess the effect of animation and pictographs in data videos. Our results reveal that each technique has an effect on viewer engagement, impacting different factors. In addition, insights from these studies lead to design considerations for authoring engaging data videos.<\/p>\n
<\/p>\n
<\/p>\n
Creating whimsical, personal data visualizations remains a challenge due to a lack of tools that enable creative visual expression while providing support for binding graphical content to data. Many data analysis and visualization creation tools target the quick generation of visual representations, but lack the functionality necessary for graphic design. Toolkits and charting libraries offer more expressive power, but require expert programming skills to achieve custom designs. In contrast, sketching affords fluid experimentation with visual shapes and layouts in a freeform manner, but requires one to manually draw every single data point. We aim to bridge the gap between these extremes. We propose DataInk, a system that supports the creation of expressive data visualizations via direct manipulation with pen and touch input. Leveraging our commonly held skills, coupled with a novel graphical user interface, DataInk enables direct, fluid, and flexible authoring of creative data visualizations.<\/p>\n
<\/p>\n
<\/p>\n
Many factors can shape the flow of visual data-driven stories, and thereby the way readers experience those stories. Through the analysis of 80 existing stories found on popular websites, we systematically investigate and identify seven characteristics of these stories, which we name \u201cflow-factors\u201d: navigation input, level of control, navigation progress, story layout, role of visualization, story progression, and navigation feedback. We conducted a series of studies, shedding initial light on how different visual narrative flows impact the reading experience. We gathered readers\u2019 reactions to and preferences for stepper- vs. scroller-driven flows, and explored the effect of combinations of different flow-factors on readers\u2019 engagement.<\/p>\n
<\/p>\n
Timelines have been used for centuries to visually communicate stories about sequences of events, from historical and biographical data to project plans and medical records. We propose a design space for expressive storytelling with timelines based on a survey of 263 timelines. In addition, we designed and developed a timeline storytelling tool, called Timeline Storyteller, that realizes the expressive potential of this design space.<\/p>\n
<\/p>\n
<\/p>\n
Annotation plays an important role in conveying key points in visual data-driven storytelling; it helps presenters explain and emphasize core messages and specific data. However, existing charting software provides limited support for creating annotations. We characterize a design space of chart annotations based on a survey of 106 annotated charts published by six prominent news graphics desks. Using this design space, we designed and developed ChartAccent, a tool that allows people to quickly and easily augment charts via a palette of annotation interactions that generate manual and data-driven annotations.<\/p>\n
<\/p>\n
Data videos, or short data-driven motion graphics, are an increasingly popular medium for data-driven storytelling. However, creating data videos is difficult, as it involves pulling together a unique combination of skills. We designed and developed DataClips, an authoring tool aimed at lowering the barrier to crafting data videos, enabling non-experts to assemble data-driven \u201cclips\u201d into longer sequences. DataClips provides a library of data clips developed from an analysis of 70 data videos produced by reputable sources such as The New York Times and The Guardian.<\/p>\n
<\/p>\n
<\/p>\n
We explored whether we could take advantage of the visual expressiveness and familiarity of comics to present and explain temporal changes in networks to an audience. To understand the potential of comics as a storytelling medium, we first created a variety of comics, involving domain experts from public education and neuroscience. Through this three-month-long design process, we identified eight design factors for creating graph comics and proposed design solutions for each.<\/p>\n
<\/p>\n
<\/p>\n
<\/p>\n
Storytelling with data is becoming an important component of many fields such as graphic design, the advocacy of causes, and journalism. Authors are enabling new reader experiences, such as linking textual narrative and data visualizations through dynamic queries embedded in the text. Novel means of communicating position and navigating within the narrative also have emerged, such as utilizing scrolling to advance narration and initiate animations. We advance the study of narrative visualization through an analysis of a curated collection of recent data-driven stories shared on the web. Drawing from the results of this analysis, we present a set of techniques being employed in these examples, organized under four high-level categories that help authors tell stories in creative ways.<\/p>\n
<\/p>\n
Visualization research on storytelling has mainly centered on how data visualization components contribute to communication. Instead, we argue for expanding our research focus to the whole process of transforming data into visually shared stories, including formative processes such as the crafting of narrative structures. We provide a detailed description of the storytelling process in visualization with regard to activities, artifacts, and roles involved to develop a more encompassing look at the visual storytelling process and to uncover open areas for research.<\/p>\n
<\/p>\n
<\/p>\n
We open new possibilities for Wordle, a visualization technique commonly used to summarize texts. WordlePlus is an interactive authoring tool that leverages natural interaction and animation. It supports direct manipulation of words with pen and touch interaction, enabling more flexible wordle creation. WordlePlus introduces new two-word multi-touch manipulations, such as concatenating and grouping two words, and provides pen interactions for adding and deleting words. In addition, WordlePlus employs animation to amplify the strengths of Wordle, allowing people to create more dynamic and engaging wordles.<\/p>\n
<\/p>\n
<\/p>\n
We increasingly encounter interactive visualizations integrated into data stories in news media, blog posts, etc. However, these stories usually do not provide enough guidance on how to interpret and manipulate the accompanying visualizations. Readers are therefore often on their own in finding the state and area of the visualization that authors intended to show in support of their arguments. VisJockey is a technique that enables readers to easily access authors\u2019 intended views through orchestrated visualization. To offload readers\u2019 burden of making connections between the text and the visualization, VisJockey augments the visualization with highlights, annotations, and animation.<\/p>\n
<\/p>\n
To create a new, more engaging form of storytelling with data, we leverage and extend the narrative storytelling attributes of whiteboard animation with pen and touch interactions. SketchStory is a data-enabled digital whiteboard that facilitates the quick and easy creation of personalized and expressive data charts. SketchStory recognizes a small set of sketch gestures for chart invocation, and automatically completes charts by synthesizing the visuals from a presenter-provided example icon and binding them to the underlying data. Furthermore, SketchStory allows the presenter to move and resize the completed data charts with touch, and to filter the underlying data to facilitate interactive exploration.<\/p>\n","protected":false},"excerpt":{"rendered":"
Practitioners increasingly use visualizations \u201cin the wild\u201d to tell compelling stories supported by data, and continually develop novel techniques that help integrate data visualization into narrative stories. The visualization research community has recently begun to pay more attention to the need and use of visualization as a storytelling medium to tell engaging visual data-driven stories.\u00a0 […]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13563,13554],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-365414","msr-project","type-msr-project","status-publish","hentry","msr-research-area-data-platform-analytics","msr-research-area-human-computer-interaction","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"2012-05-14","related-publications":[559545,556122,500444,503642,486758,497174,497186,305816,418895,365663,312449,365741,238135,336053,337427,336038,337670,337433,166750,637410,637584,657309,719266,722047,843082],"related-downloads":[371636,501773],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[{"type":"user_nicename","display_name":"Yun Wang","user_id":37827,"people_section":"Group 1","alias":"wangyun"},{"type":"user_nicename","display_name":"Nathalie Henry Riche","user_id":33058,"people_section":"Group 
1","alias":"nath"}],"msr_research_lab":[199565],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/365414"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":10,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/365414\/revisions"}],"predecessor-version":[{"id":848650,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/365414\/revisions\/848650"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=365414"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=365414"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=365414"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=365414"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=365414"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}